pandas: powerful Python data analysis toolkit, Release 0.23.4
Date: Aug 06, 2018 Version: 0.23.4
Binary Installers: https://pypi.org/project/pandas
Source Repository: http://github.com/pandas-dev/pandas
Issues & Ideas: https://github.com/pandas-dev/pandas/issues
Q&A Support: http://stackoverflow.com/questions/tagged/pandas
Developer Mailing List: http://groups.google.com/group/pydata
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful
and flexible open source data analysis / manipulation tool available in any language. It is already well on its way
toward this goal.
pandas is well suited for many different kinds of data:
• Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
• Ordered and unordered (not necessarily fixed-frequency) time series data.
• Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
• Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed
into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the
vast majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R users,
DataFrame provides everything that R’s data.frame provides and much more. pandas is built on top of NumPy
and is intended to integrate well within a scientific computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
• Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
• Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
• Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can
simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
• Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both ag-
gregating and transforming data
• Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into
DataFrame objects
• Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
• Intuitive merging and joining data sets
• Flexible reshaping and pivoting of data sets
• Hierarchical labeling of axes (possible to have multiple labels per tick)
• Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading
data from the ultrafast HDF5 format
• Time series-specific functionality: date range generation and frequency conversion, moving window statistics,
moving window linear regressions, date shifting and lagging, etc.
Many of these principles are here to address the shortcomings frequently experienced using other languages / scientific
research environments. For data scientists, working with data is typically divided into multiple stages: munging and
cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for plotting or
tabular display. pandas is the ideal tool for all of these tasks.
Some other notes
• pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However, as with anything else, generalization usually sacrifices performance. So if you focus on one feature for your application, you may be able to create a faster specialized tool.
• pandas is a dependency of statsmodels, making it an important part of the statistical computing ecosystem in
Python.
• pandas has been used extensively in production in financial applications.
Note: This documentation assumes general familiarity with NumPy. If you haven’t used NumPy much or at all, do
invest some time in learning about NumPy first.
See the package overview for more detail about what’s in the library.
CHAPTER ONE
WHAT’S NEW
1.1 v0.23.4
This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We recommend that all users upgrade to this version.
Warning: Starting January 1, 2019, pandas feature releases will support Python 3 only. See Plan for dropping
Python 2.7 for more.
• Fixed Regressions
• Bug Fixes
• Python 3.7 with Windows gave all missing values for rolling variance calculations (GH21813)
Groupby/Resample/Rolling
• Bug where calling DataFrameGroupBy.agg() with a list of functions including ohlc as the non-initial
element would raise a ValueError (GH21716)
• Bug in roll_quantile caused a memory leak when calling .rolling(...).quantile(q) with q in
(0,1) (GH21965)
Missing
• Bug in Series.clip() and DataFrame.clip() cannot accept list-like threshold containing NaN
(GH19992)
1.2 v0.23.3
This release fixes a build issue with the sdist for Python 3.7 (GH21785). There are no other changes.
1.3 v0.23.2
This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We
recommend that all users upgrade to this version.
Note: Pandas 0.23.2 is the first pandas release that's compatible with Python 3.7 (GH20552)
Warning: Starting January 1, 2019, pandas feature releases will support Python 3 only. See Plan for dropping
Python 2.7 for more.
DataFrame.all() and DataFrame.any() now accept axis=None to reduce over all axes to a scalar
(GH19976)
In [2]: df.all(axis=None)
Out[2]: False
This also provides compatibility with NumPy 1.15, which now dispatches to DataFrame.all. With NumPy 1.15
and pandas 0.23.1 or earlier, numpy.all() will no longer reduce over every axis.
With pandas 0.23.2, that will correctly return False, as it did with NumPy < 1.15.
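A minimal sketch of the change (assuming the usual import numpy as np and import pandas as pd; the frame used in the original example is not preserved here, so any frame containing a False works):

df = pd.DataFrame({'a': [True, True], 'b': [True, False]})
df.all()            # per-column reduction -> a Series
df.all(axis=None)   # reduction over all axes -> the scalar False
np.all(df)          # with NumPy 1.15 this dispatches to DataFrame.all and also returns False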
• The source and binary distributions no longer include test data files, resulting in smaller download sizes. Tests
relying on these data files will be skipped when using pandas.test(). (GH19320)
Conversion
• Bug in constructing Index with an iterator or generator (GH21470)
• Bug in Series.nlargest() for signed and unsigned integer dtypes when the minimum value is present
(GH21426)
Indexing
• Bug in Index.get_indexer_non_unique() with categorical key (GH21448)
• Bug in comparison operations for MultiIndex where error was raised on equality / inequality comparison
involving a MultiIndex with nlevels == 1 (GH21149)
• Bug in DataFrame.drop() where behaviour was not consistent for unique and non-unique indexes (GH21494)
• Bug in DataFrame.duplicated() with a large number of columns causing a ‘maximum recursion depth
exceeded’ (GH21524).
I/O
• Bug in read_csv() that caused it to incorrectly raise an error when nrows=0, low_memory=True, and
index_col was not None (GH21141)
• Bug in json_normalize() when formatting the record_prefix with integer columns (GH21536)
Categorical
• Bug in rendering Series with Categorical dtype in rare conditions under Python 2.7 (GH21002)
Timezones
• Bug in Timestamp and DatetimeIndex where passing a Timestamp localized after a DST transition
would return a datetime before the DST transition (GH20854)
• Bug in comparing DataFrames with tz-aware DatetimeIndex columns with a DST transition that raised a KeyError (GH19970)
Timedelta
• Bug in Timedelta where non-zero timedeltas shorter than 1 microsecond were considered False (GH21484)
1.4 v0.23.1
This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We
recommend that all users upgrade to this version.
Warning: Starting January 1, 2019, pandas feature releases will support Python 3 only. See Plan for dropping
Python 2.7 for more.
• Fixed Regressions
• Performance Improvements
• Bug Fixes
Groupby/Resample/Rolling
• Bug in DataFrame.agg() where applying multiple aggregation functions to a DataFrame with duplicated
column names would cause a stack overflow (GH21063)
• Bug in concat() where error was raised in concatenating Series with numpy scalar and tuple names
(GH21015)
• Bug in concat() warning message providing the wrong guidance for future behavior (GH21101)
Other
• Tab completion on Index in IPython no longer outputs deprecation warnings (GH21125)
• Bug preventing pandas being used on Windows without C++ redistributable installed (GH21106)
1.5 v0.23.0
This is a major release from 0.22.0 and includes a number of API changes, deprecations, new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version.
Highlights include:
• Round-trippable JSON format with ‘table’ orient.
• Instantiation from dicts respects order for Python 3.6+.
• Dependent column arguments for assign.
• Merging / sorting on a combination of columns and index levels.
• Extending Pandas with custom types.
• Excluding unobserved categories from groupby.
• Changes to make output shape of DataFrame.apply consistent.
Check the API Changes and deprecations before updating.
Warning: Starting January 1, 2019, pandas feature releases will support Python 3 only. See Plan for dropping
Python 2.7 for more.
• New features
– JSON read/write round-trippable with orient='table'
– .assign() accepts dependent arguments
– Merging on a combination of columns and index levels
– Sorting by a combination of columns and index levels
– Extending Pandas with Custom Types (Experimental)
– New observed keyword for excluding unobserved categories in groupby
– Rolling/Expanding.apply() accepts raw=False to pass a Series to the function
– DataFrame.interpolate has gained the limit_area kwarg
– get_dummies now supports dtype argument
– Timedelta mod method
– Sparse
– Reshaping
– Other
A DataFrame can now be written to and subsequently read back via JSON while preserving metadata through usage
of the orient='table' argument (see GH18912 and GH9146). Previously, none of the available orient values
guaranteed the preservation of dtypes and index names, amongst other metadata.
In [1]: df = pd.DataFrame({'foo': [1, 2, 3, 4],
...: 'bar': ['a', 'b', 'c', 'd'],
...: 'baz': pd.date_range('2018-01-01', freq='d', periods=4),
...: 'qux': pd.Categorical(['a', 'b', 'c', 'c'])
...: }, index=pd.Index(range(4), name='idx'))
...:
In [2]: df
Out[2]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [3]: df.dtypes
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [6]: new_df
Out[6]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [7]: new_df.dtypes
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the string 'index' is not supported as an index name with the round-trip format, as it is used by default in write_json to indicate a missing index name.
In [11]: new_df
Out[11]:
foo bar baz qux
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [12]: new_df.dtypes
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
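The round-trip calls themselves are not preserved in this extract; presumably they look roughly like the following, with the 'table' orient storing dtypes and the index name in an embedded schema:

json_str = df.to_json(orient='table')             # schema records dtypes and the 'idx' index name
new_df = pd.read_json(json_str, orient='table')   # metadata is restored on read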
DataFrame.assign() now accepts dependent keyword arguments on Python 3.6 and later (see also PEP 468). Later keyword arguments may now refer to earlier ones if the argument is a callable. See the documentation here (GH14207)
In [14]: df
Out[14]:
A
0 1
1 2
2 3
Warning: This may subtly change the behavior of your code when you're using .assign() to update an existing column. Previously, callables referring to other variables being updated would get the "old" values.
Previous Behavior:
In [2]: df = pd.DataFrame({"A": [1, 2, 3]})
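The call and output illustrating the old behavior are not preserved here; presumably something like the following, where the callable for C saw the original values of A (1, 2, 3) and therefore produced C = -1, -2, -3:

df.assign(A=df.A + 1, C=lambda df: df.A * -1)   # old behavior: C computed from the un-updated A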
New Behavior:
In [16]: df.assign(A=df.A + 1, C=lambda df: df.A * -1)
Out[16]:
A C
0 2 -2
1 3 -3
2 4 -4
Strings passed to DataFrame.merge() as the on, left_on, and right_on parameters may now refer to either
column names or index level names. This enables merging DataFrame instances on a combination of index levels
and columns without resetting indexes. See the Merge on columns and levels documentation section. (GH14355)
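A hypothetical sketch of merging on an index level (left and right below are illustrative, not taken from the original docs):

left = pd.DataFrame({'v': [1, 2]}, index=pd.Index(['a', 'b'], name='key'))
right = pd.DataFrame({'key': ['a', 'b'], 'w': [10, 20]})
left.merge(right, on='key')   # 'key' is an index level on the left and a column on the right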
Strings passed to DataFrame.sort_values() as the by parameter may now refer to either column names or
index level names. This enables sorting DataFrame instances by a combination of index levels and columns without
resetting indexes. See the Sorting by Indexes and Values documentation section. (GH14353)
# Build MultiIndex
In [22]: idx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('a', 2),
   ....:                                  ('b', 2), ('b', 1), ('b', 1)])
   ....:
In [23]: idx.names = ['first', 'second']
# Build DataFrame
In [24]: df_multi = pd.DataFrame({'A': np.arange(6, 0, -1)},
....: index=idx)
....:
In [25]: df_multi
Out[25]:
A
first second
a 1 6
2 5
2 4
b 2 3
1 2
1 1
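The sorted output below presumably comes from sorting by the index level 'second' together with the column 'A', e.g.:

df_multi.sort_values(by=['second', 'A'])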
A
first second
b 1 1
1 2
a 1 6
b 2 3
a 2 4
2 5
Pandas now supports storing array-like objects that aren’t necessarily 1-D NumPy arrays as columns in a DataFrame or
values in a Series. This allows third-party libraries to implement extensions to NumPy’s types, similar to how pandas
implemented categoricals, datetimes with timezones, periods, and intervals.
As a demonstration, we'll use cyberpandas, which provides an IPArray type for storing IP addresses.
IPArray isn't a normal 1-D NumPy array, but because it's a pandas ExtensionArray (pandas.api.extensions.ExtensionArray), it can be stored properly inside pandas' containers.
In [4]: ser
Out[4]:
0 0.0.0.0
1 192.168.1.1
2 2001:db8:85a3::8a2e:370:7334
dtype: ip
Notice that the dtype is ip. The missing value semantics of the underlying array are respected:
In [5]: ser.isna()
Out[5]:
0 True
1 False
2 False
dtype: bool
For more, see the extension types documentation. If you build an extension array, publicize it on our ecosystem page.
Grouping by a categorical includes the unobserved categories in the output. When grouping by multiple categorical columns, this means you get the cartesian product of all the categories, including combinations where there are no observations, which can result in a large number of groups. We have added a keyword observed to control this behavior; it defaults to observed=False for backward compatibility. (GH14942, GH8138, GH15217, GH17594, GH8669, GH20583, GH20902)
In [31]: df
Out[31]:
A B values C
0 a c 1 foo
1 a d 2 bar
2 b c 3 foo
3 b d 4 bar
For pivoting operations, this behavior is already controlled by the dropna keyword:
In [34]: cat1 = pd.Categorical(["a", "a", "b", "b"],
....: categories=["a", "b", "z"], ordered=True)
....:
In [37]: df
Out[37]:
A B values
0 a c 1
1 a d 2
2 b c 3
3 b d 4
values
A B
a c 1.0
d 2.0
y NaN
b c 3.0
d 4.0
y NaN
z c NaN
d NaN
y NaN
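The groupby calls using the new keyword are not preserved in this extract; a sketch, assuming A and B above are Categorical columns:

df.groupby(['A', 'B'], observed=False).count()   # includes unobserved category combinations (the default)
df.groupby(['A', 'B'], observed=True).count()    # only combinations actually present in the data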
In [41]: s
Out[41]:
1 0
2 1
3 2
4 3
5 4
dtype: int64
Pass a Series:
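The call itself is not preserved in this extract; presumably something like the following, where raw=False makes each window arrive as a Series (with its index) rather than a plain ndarray:

s.rolling(2, min_periods=1).apply(lambda x: x.iloc[-1], raw=False)   # x is a Series, so .iloc works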
In [44]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
In [45]: ser
Out[45]:
0 NaN
1 NaN
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
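The interpolation calls are not preserved here; a sketch of the new keyword:

ser.interpolate(limit_direction='both', limit_area='inside', limit=1)        # fill only NaNs surrounded by valid values
ser.interpolate(limit_direction='backward', limit_area='outside', limit=1)   # fill only NaNs outside the valid values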
get_dummies() now accepts a dtype argument, which specifies a dtype for the new columns. The default remains uint8. (GH18330)
In [49]: df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})
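The call using the new argument is not preserved here; a sketch:

pd.get_dummies(df, columns=['c'], dtype=bool).dtypes   # the dummy columns get the requested dtype instead of uint8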
mod (%) and divmod operations are now defined on Timedelta objects when operating with either timedelta-like
or with numeric arguments. See the documentation here. (GH19365)
In [52]: td = pd.Timedelta(hours=37)
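The operations themselves are not preserved in this extract; a sketch:

td % pd.Timedelta(hours=2)          # Timedelta('0 days 01:00:00')
divmod(td, pd.Timedelta(hours=5))   # quotient 7 and a 2-hour remainder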
In previous versions, .rank() would assign inf elements NaN as their ranks. Now ranks are calculated properly.
(GH6945)
In [55]: s
Out[55]:
0 -inf
1 0.000000
2 1.000000
3 NaN
4 inf
dtype: float64
Previous Behavior:
In [11]: s.rank()
Out[11]:
0 1.0
1 2.0
2 3.0
3 NaN
4 NaN
dtype: float64
Current Behavior:
In [56]: s.rank()
Out[56]:
0 1.0
1 2.0
2 3.0
3 NaN
4 4.0
dtype: float64
Furthermore, previously, if you ranked inf or -inf values together with NaN values, the calculation would not distinguish NaN from infinity when using the 'top' or 'bottom' argument.
In [58]: s
Out[58]:
0 NaN
1 NaN
2 -inf
3 -inf
dtype: float64
Previous Behavior:
In [15]: s.rank(na_option='top')
Out[15]:
0 2.5
1 2.5
2 2.5
3 2.5
dtype: float64
Current Behavior:
In [59]: s.rank(na_option='top')
Out[59]:
0 1.5
1 1.5
2 3.5
3 3.5
dtype: float64
Previously, Series.str.cat() did not – in contrast to most of pandas – align Series on their index before
concatenation (see GH18657). The method has now gained a keyword join to control the manner of alignment, see
examples below and here.
In v0.23, join will default to None (meaning no alignment), but this default will change to 'left' in a future version of pandas.
In [62]: s.str.cat(t)
Out[62]:
0 ab
1 bd
2 ce
3 dc
dtype: object
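The series t above is not shown in this extract; assuming it has a shuffled index such as [1, 3, 4, 2], alignment with the new keyword would look roughly like:

s = pd.Series(['a', 'b', 'c', 'd'])
t = pd.Series(['b', 'd', 'e', 'c'], index=[1, 3, 4, 2])   # assumed; only its positional values are visible above
s.str.cat(t, join='left', na_rep='-')                     # aligns t to s's index before concatenating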
DataFrame.astype() can now perform column-wise conversion to Categorical by supplying the string
'category' or a CategoricalDtype. Previously, attempting this would raise a NotImplementedError.
See the Object Creation section of the documentation for more details and examples. (GH12860, GH18099)
Supplying the string 'category' performs column-wise conversion, with only labels appearing in a given column
set as categories:
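The frame being converted is not shown in this extract; presumably something along the lines of df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')}), which matches the categories in the output below.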
In [65]: df = df.astype('category')
In [66]: df['A'].dtype
Out[66]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=False)
In [67]: df['B'].dtype
Out[67]: CategoricalDtype(categories=['b', 'c', 'd'], ordered=False)
Supplying a CategoricalDtype will make the categories in each column consistent with the supplied dtype:
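The cdt used below is not defined in this extract; presumably something like:

from pandas.api.types import CategoricalDtype
cdt = CategoricalDtype(categories=list('abcd'), ordered=True)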
In [71]: df = df.astype(cdt)
In [72]: df['A'].dtype
Out[72]: CategoricalDtype(categories=['a', 'b', 'c', 'd'], ordered=True)
In [73]: df['B'].dtype
Out[73]: CategoricalDtype(categories=['a', 'b', 'c', 'd'], ordered=True)
• Unary + now permitted for Series and DataFrame as numeric operator (GH16073)
• Better support for to_excel() output with the xlsxwriter engine. (GH16149)
• pandas.tseries.frequencies.to_offset() now accepts leading ‘+’ signs e.g. ‘+1h’. (GH18171)
• MultiIndex.unique() now supports the level= argument, to get unique values from a specific index
level (GH17896)
• pandas.io.formats.style.Styler now has method hide_index() to determine whether the index
will be rendered in output (GH14194)
• pandas.io.formats.style.Styler now has method hide_columns() to determine whether
columns will be hidden in output (GH14194)
• Improved wording of ValueError raised in to_datetime() when unit= is passed with a non-convertible
value (GH14350)
• Series.fillna() now accepts a Series or a dict as a value for a categorical dtype (GH17033)
• pandas.read_clipboard() updated to use qtpy, falling back to PyQt5 and then PyQt4, adding compatibility with Python 3 and multiple python-qt bindings (GH17722)
• Improved wording of ValueError raised in read_csv() when the usecols argument cannot match all
columns. (GH17301)
• DataFrame.corrwith() now silently drops non-numeric columns when passed a Series. Before, an ex-
ception was raised (GH18570).
• IntervalIndex now supports time zone aware Interval objects (GH18537, GH18538)
• Series() / DataFrame() tab completion also returns identifiers in the first level of a MultiIndex().
(GH16326)
• read_excel() has gained the nrows parameter (GH16645)
• DataFrame.append() can now in more cases preserve the type of the calling dataframe’s columns (e.g. if
both are CategoricalIndex) (GH18359)
• DataFrame.to_json() and Series.to_json() now accept an index argument which allows the
user to exclude the index from the JSON output (GH17394)
• IntervalIndex.to_tuples() has gained the na_tuple parameter to control whether NA is returned
as a tuple of NA, or NA itself (GH18756)
• Categorical.rename_categories, CategoricalIndex.rename_categories and Series.
cat.rename_categories can now take a callable as their argument (GH18862)
• Interval and IntervalIndex have gained a length attribute (GH18789)
• Resampler objects now have a functioning pipe method. Previously, calls to pipe were diverted to the
mean method (GH17905).
• is_scalar() now returns True for DateOffset objects (GH18943).
• DataFrame.pivot() now accepts a list for the values= kwarg (GH17160).
• Added pandas.api.extensions.register_dataframe_accessor(), pandas.
api.extensions.register_series_accessor(), and pandas.api.extensions.
register_index_accessor(), accessor for libraries downstream of pandas to register custom
accessors like .cat on pandas objects. See Registering Custom Accessors for more (GH14781).
• IntervalIndex.astype now supports conversions between subtypes when passed an IntervalDtype
(GH19197)
• IntervalIndex and its associated constructor methods (from_arrays, from_breaks,
from_tuples) have gained a dtype parameter (GH19262)
• Added pandas.core.groupby.SeriesGroupBy.is_monotonic_increasing() and pandas.
core.groupby.SeriesGroupBy.is_monotonic_decreasing() (GH17015)
• For subclassed DataFrames, DataFrame.apply() will now preserve the Series subclass (if defined)
when passing the data to the applied function (GH19822)
• DataFrame.from_dict() now accepts a columns argument that can be used to specify the column names
when orient='index' is used (GH18529)
• Added option display.html.use_mathjax so MathJax can be disabled when rendering tables in
Jupyter notebooks (GH19856, GH19824)
• DataFrame.replace() now supports the method parameter, which can be used to specify the replacement
method when to_replace is a scalar, list or tuple and value is None (GH19632)
• Timestamp.month_name(), DatetimeIndex.month_name(), and Series.dt.
month_name() are now available (GH12805)
• Timestamp.day_name() and DatetimeIndex.day_name() are now available to return day names
with a specified locale (GH12806)
• DataFrame.to_sql() now performs a multivalue insert if the underlying connection supports it, rather than inserting row by row. SQLAlchemy dialects supporting multivalue inserts include: mysql, postgresql, sqlite and any dialect with supports_multivalues_insert. (GH14315, GH8953)
• read_html() now accepts a displayed_only keyword argument to control whether or not hidden elements are parsed (True by default) (GH20027)
• read_html() now reads all <tbody> elements in a <table>, not just the first. (GH20690)
• quantile() and quantile() now accept the interpolation keyword, linear by default
(GH20497)
• zip compression is supported via compression='zip' in DataFrame.to_pickle(), Series.
to_pickle(), DataFrame.to_csv(), Series.to_csv(), DataFrame.to_json(), Series.
to_json(). (GH17778)
• WeekOfMonth constructor now supports n=0 (GH20517).
• DataFrame and Series now support matrix multiplication with the @ operator (GH10259) for Python >= 3.5
• Updated DataFrame.to_gbq() and pandas.read_gbq() signature and documentation to reflect
changes from the Pandas-GBQ library version 0.4.0. Adds intersphinx mapping to Pandas-GBQ library.
(GH20564)
• Added new writer for exporting Stata dta files in version 117, StataWriter117. This format supports
exporting strings with lengths up to 2,000,000 characters (GH16450)
• to_hdf() and read_hdf() now accept an errors keyword argument to control encoding error handling
(GH20835)
• cut() has gained the duplicates='raise'|'drop' option to control whether to raise on duplicated
edges (GH20947)
• date_range(), timedelta_range(), and interval_range() now return a linearly spaced index if
start, stop, and periods are specified, but freq is not. (GH20808, GH20983, GH20976)
We have updated our minimum supported versions of dependencies (GH15184). If installed, we now require:
1.5.2.2 Instantiation from dicts preserves dict insertion order for Python 3.6+
Until Python 3.6, dicts in Python had no formally defined ordering. For Python version 3.6 and later, dicts are ordered by insertion order, see PEP 468. Pandas will use the dict's insertion order when creating a Series or DataFrame from a dict, provided you're using Python version 3.6 or higher. (GH19884)
Previous Behavior (and current behavior if on Python < 3.6):
pd.Series({'Income': 2000,
'Expenses': -1500,
'Taxes': -200,
'Net result': 300})
Expenses -1500
Income 2000
Net result 300
Taxes -200
dtype: int64
Notice that the Series is now ordered by insertion order. This new behavior is used for all relevant pandas types
(Series, DataFrame, SparseSeries and SparseDataFrame).
If you wish to retain the old behavior while using Python >= 3.6, you can use .sort_index():
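A sketch of the new, insertion-ordered behavior on Python >= 3.6 and of the workaround:

s = pd.Series({'Income': 2000, 'Expenses': -1500, 'Taxes': -200, 'Net result': 300})
s               # insertion order: Income, Expenses, Taxes, Net result
s.sort_index()  # recovers the old, lexically sorted order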
Panel was deprecated in the 0.20.x release, showing as a DeprecationWarning. Using Panel will now show a FutureWarning. The recommended way to represent 3-D data is with a MultiIndex on a DataFrame via the to_frame() method, or with the xarray package. Pandas provides a to_xarray() method to automate this conversion. For more details see the Deprecate Panel documentation. (GH13563, GH18324).
In [76]: p = tm.makePanel()
In [77]: p
Out[77]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
In [78]: p.to_frame()
Out[78]:
ItemA ItemB ItemC
major minor
2000-01-03 A 1.474071 -0.964980 -1.197071
B 0.781836 1.846883 -0.858447
C 2.353925 -1.717693 0.384316
D -0.744471 0.901805 0.476720
2000-01-04 A -0.064034 -0.845696 -1.066969
B -1.071357 -1.328865 0.306996
C 0.583787 0.888782 1.574159
D 0.758527 1.171216 0.473424
2000-01-05 A -1.282782 -1.340896 -0.303421
B 0.441153 1.682706 -0.028665
C 0.221471 0.228440 1.588931
D 1.729689 0.520260 -0.242861
In [79]: p.to_xarray()
Out[79]:
<xarray.DataArray (items: 3, major_axis: 3, minor_axis: 4)>
array([[[ 1.474071, 0.781836, 2.353925, -0.744471],
[-0.064034, -1.071357, 0.583787, 0.758527],
[-1.282782, 0.441153, 0.221471, 1.729689]],
The following error & warning messages are removed from pandas.core.common (GH13634, GH19769):
• PerformanceWarning
• UnsupportedFunctionCall
• UnsortedIndexError
• AbstractMethodError
These are available for import from pandas.errors (since 0.19.0).
DataFrame.apply() was inconsistent when applying an arbitrary user-defined function that returned a list-like with axis=1. Several bugs and inconsistencies are resolved. If the applied function returns a Series, then pandas will return a DataFrame; otherwise a Series will be returned, including the case where a list-like (e.g. a tuple or list) is returned (GH16353, GH17437, GH17970, GH17348, GH17892, GH18573, GH17602, GH18775, GH18901, GH18919).
In [81]: df
Out[81]:
A B C
0 1 2 3
1 1 2 3
2 1 2 3
3 1 2 3
4 1 2 3
5 1 2 3
Previous Behavior: if the returned shape happened to match the length of original columns, this would return a
DataFrame. If the return shape did not match, a Series with lists was returned.
New Behavior: When the applied function returns a list-like, this will now always return a Series.
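The call producing the output below is not preserved here; presumably df.apply(lambda x: [1, 2], axis=1), a list-like whose length does not match the three original columns.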
0 [1, 2]
1 [1, 2]
2 [1, 2]
3 [1, 2]
4 [1, 2]
5 [1, 2]
dtype: object
To broadcast the result across the original columns (the old behaviour for list-likes of the correct length), you can use
result_type='broadcast'. The shape must match the original columns.
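For example, a sketch with the df above (three columns, so the returned list must have length 3):

df.apply(lambda x: [1, 2, 3], axis=1, result_type='broadcast')   # result keeps columns A, B, C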
Returning a Series allows one to control the exact return structure and column names:
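The example itself is not preserved here; a sketch:

df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)   # result columns are 'foo' and 'bar'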
In a future version of pandas, pandas.concat() will no longer sort the non-concatenation axis when it is not already aligned. The current behavior is the same as the previous (sorting), but a warning is now issued when sort is not specified and the non-concatenation axis is not aligned (GH4588).
In [87]: df1 = pd.DataFrame({"a": [1, 2], "b": [1, 2]}, columns=['b', 'a'])
To keep the previous behavior (sorting) and silence the warning, pass sort=True.
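A sketch of the warning and the sort keyword (df2 here is hypothetical; any frame whose columns are not aligned with df1 will do):

df2 = pd.DataFrame({"a": [4, 5], "b": [6, 7]}, columns=['a', 'b'])
pd.concat([df1, df2])               # warns that the non-concatenation axis is not aligned
pd.concat([df1, df2], sort=True)    # previous behavior (sort columns), no warning
pd.concat([df1, df2], sort=False)   # opt in to the future behavior (no sorting)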
• Building pandas for development now requires cython >= 0.24 (GH18613)
• Building from source now explicitly requires setuptools in setup.py (GH18113)
• Updated conda recipe to be in compliance with conda-build 3.0+ (GH18002)
Division operations on Index and subclasses will now fill division of positive numbers by zero with np.inf, division
of negative numbers by zero with -np.inf and 0 / 0 with np.nan. This matches existing Series behavior.
(GH19322, GH19347)
Previous Behavior:
In [7]: index / 0
Out[7]: Int64Index([0, 0, 0], dtype='int64')
# Previous behavior yielded different results depending on the type of zero in the divisor
In [11]: pd.RangeIndex(1, 5) / 0
ZeroDivisionError: integer division or modulo by zero
Current Behavior:
In [91]: index = pd.Int64Index([-1, 0, 1])
# division by zero gives -infinity where negative, +infinity where positive, and NaN for 0 / 0
In [92]: index / 0
Out[92]: Float64Index([-inf, nan, inf], dtype='float64')
# The result of division by zero should not depend on whether the zero is int or float
In [93]: index / 0.0
Out[93]: Float64Index([-inf, nan, inf], dtype='float64')
In [96]: pd.RangeIndex(1, 5) / 0
Out[96]: Float64Index([inf, inf, inf, inf], dtype='float64')
By default, extracting matching patterns from strings with str.extract() used to return a Series if a single group was being extracted (a DataFrame if more than one group was extracted). As of pandas 0.23.0, str.extract() always returns a DataFrame, unless expand is set to False. Finally, None was an accepted value for the expand parameter (which was equivalent to False), but now raises a ValueError. (GH11386)
Previous Behavior:
In [1]: s = pd.Series(['number 10', '12 eggs'])
In [3]: extracted
Out [3]:
0 10
1 12
dtype: object
In [4]: type(extracted)
Out [4]:
pandas.core.series.Series
New Behavior:
In [99]: extracted
Out[99]:
0
0 10
1 12
In [100]: type(extracted)
Out[100]: pandas.core.frame.DataFrame
In [103]: extracted
Out[103]:
0 10
1 12
dtype: object
In [104]: type(extracted)
Out[104]: pandas.core.series.Series
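A self-contained sketch of the two behaviors (the exact regex used in the original example is not preserved here):

s = pd.Series(['number 10', '12 eggs'])
s.str.extract(r'(\d+)', expand=True)    # always a DataFrame now
s.str.extract(r'(\d+)', expand=False)   # a Series, since only a single group is extracted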
The default value of the ordered parameter for CategoricalDtype has changed from False to None to allow
updating of categories without impacting ordered. Behavior should remain consistent for downstream objects,
such as Categorical (GH18790)
In previous versions, the default value for the ordered parameter was False. This could potentially lead
to the ordered parameter unintentionally being changed from True to False when users attempt to update
categories if ordered is not explicitly specified, as it would silently default to False. The new behavior
for ordered=None is to retain the existing value of ordered.
New Behavior:
In [107]: cat
Out[107]:
[a, b, c, a, b, a]
Categories (3, object): [c < b < a]
In [109]: cat.astype(cdt)
Out[109]:
[a, b, c, a, b, a]
Categories (4, object): [c < b < a < d]
Notice in the example above that the converted Categorical has retained ordered=True. Had the default
value for ordered remained as False, the converted Categorical would have become unordered, despite
ordered=False never being explicitly specified. To change the value of ordered, explicitly pass it to the new
dtype, e.g. CategoricalDtype(categories=list('cbad'), ordered=False).
Note that the unintentional conversion of ordered discussed above did not arise in previous versions due to separate
bugs that prevented astype from doing any type of category to category conversion (GH10696, GH18593). These
bugs have been fixed in this release, and motivated changing the default value of ordered.
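The definitions of cat and cdt in the example above are not preserved in this extract; a self-contained sketch consistent with the output shown:

from pandas.api.types import CategoricalDtype
cat = pd.Categorical(list('abcaba'), ordered=True, categories=list('cba'))
cdt = CategoricalDtype(categories=list('cbad'))   # ordered is not specified, so it stays None
cat.astype(cdt)                                   # retains ordered=True; categories become [c < b < a < d]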
Previously, the default value for the maximum number of columns was pd.options.display.
max_columns=20. This meant that relatively wide data frames would not fit within the terminal width, and pandas
would introduce line breaks to display these 20 columns. This resulted in an output that was relatively difficult to read.
If Python runs in a terminal, the maximum number of columns is now determined automatically so that the printed data
frame fits within the current terminal width (pd.options.display.max_columns=0) (GH17023). If Python
runs as a Jupyter kernel (such as the Jupyter QtConsole or a Jupyter notebook, as well as in many IDEs), this value
cannot be inferred automatically and is thus set to 20 as in previous versions. In a terminal, this results in a much nicer
output:
Note that if you don’t like the new default, you can always set this option yourself. To revert to the old setting, you
can run this line:
pd.options.display.max_columns = 20
• The default Timedelta constructor now accepts an ISO 8601 Duration string as an argument
(GH19040)
• Subtracting NaT from a Series with dtype='datetime64[ns]' returns a Series with
dtype='timedelta64[ns]' instead of dtype='datetime64[ns]' (GH18808)
• Addition or subtraction of NaT from TimedeltaIndex will return TimedeltaIndex instead of
DatetimeIndex (GH19124)
• DatetimeIndex.shift() and TimedeltaIndex.shift() will now raise NullFrequencyError
(which subclasses ValueError, which was raised in older versions) when the index object frequency is None
(GH19147)
• Addition and subtraction of NaN from a Series with dtype='timedelta64[ns]' will raise a
TypeError instead of treating the NaN as NaT (GH19274)
• NaT division with datetime.timedelta will now return NaN instead of raising (GH17876)
• Operations between a Series with dtype='datetime64[ns]' and a PeriodIndex will correctly raise TypeError (GH18850)
• Subtraction of Series with timezone-aware dtype='datetime64[ns]' with mis-matched timezones
will raise TypeError instead of ValueError (GH18817)
• Timestamp will no longer silently ignore unused or invalid tz or tzinfo keyword arguments (GH17690)
• Timestamp will no longer silently ignore invalid freq arguments (GH5168)
• CacheableOffset and WeekDay are no longer available in the pandas.tseries.offsets module
(GH17830)
• pandas.tseries.frequencies.get_freq_group() and pandas.tseries.frequencies.
DAYS are removed from the public API (GH18034)
• Series.truncate() and DataFrame.truncate() will raise a ValueError if the index is not sorted
instead of an unhelpful KeyError (GH17935)
• Series.first and DataFrame.first will now raise a TypeError rather than
NotImplementedError when index is not a DatetimeIndex (GH20725).
• Series.last and DataFrame.last will now raise a TypeError rather than
NotImplementedError when index is not a DatetimeIndex (GH20725).
• Restricted DateOffset keyword arguments. Previously, DateOffset subclasses allowed arbitrary keyword
arguments which could lead to unexpected behavior. Now, only valid arguments will be accepted. (GH17176,
GH18226).
• pandas.merge() provides a more informative error message when trying to merge on timezone-aware and
timezone-naive columns (GH15800)
• For DatetimeIndex and TimedeltaIndex with freq=None, addition or subtraction of integer-dtyped
array or Index will raise NullFrequencyError instead of TypeError (GH19895)
• Timestamp constructor now accepts a nanosecond keyword or positional argument (GH18898)
• DatetimeIndex will now raise an AttributeError when the tz attribute is set after instantiation
(GH3746)
• DatetimeIndex with a pytz timezone will now return a consistent pytz timezone (GH18595)
• Series.astype() and Index.astype() with an incompatible dtype will now raise a TypeError
rather than a ValueError (GH18231)
• Series construction with an object dtyped tz-aware datetime and dtype=object specified, will now
return an object dtyped Series, previously this would infer the datetime dtype (GH18231)
• A Series of dtype=category constructed from an empty dict will now have categories of
dtype=object rather than dtype=float64, consistently with the case in which an empty list is passed
(GH18515)
• All-NaN levels in a MultiIndex are now assigned float rather than object dtype, promoting consistency
with Index (GH17929).
• Level names of a MultiIndex (when not None) are now required to be unique: trying to create a
MultiIndex with repeated names will raise a ValueError (GH18872)
• Both construction and renaming of Index/MultiIndex with non-hashable name/names will now raise
TypeError (GH20527)
• Index.map() can now accept Series and dictionary input objects (GH12756, GH18482, GH18509).
• DataFrame.unstack() will now default to filling with np.nan for object columns. (GH12815)
• IntervalIndex constructor will raise if the closed parameter conflicts with how the input data is inferred
to be closed (GH18421)
• Inserting missing values into indexes will work for all types of indexes and automatically insert the correct type
of missing value (NaN, NaT, etc.) regardless of the type passed in (GH18295)
• When created with duplicate labels, MultiIndex now raises a ValueError. (GH17464)
• Series.fillna() now raises a TypeError instead of a ValueError when passed a list, tuple or
DataFrame as a value (GH18293)
• pandas.DataFrame.merge() no longer casts a float column to object when merging on int and
float columns (GH16572)
• pandas.merge() now raises a ValueError when trying to merge on incompatible data types (GH9780)
• The default NA value for UInt64Index has changed from 0 to NaN, which impacts methods that mask with
NA, such as UInt64Index.where() (GH18398)
• Refactored setup.py to use find_packages instead of explicitly listing out all subpackages (GH18535)
• Rearranged the order of keyword arguments in read_excel() to align with read_csv() (GH16672)
• wide_to_long() previously kept numeric-like suffixes as object dtype. Now they are cast to numeric if
possible (GH17627)
• In read_excel(), the comment argument is now exposed as a named parameter (GH18735)
• The options html.border and mode.use_inf_as_null were deprecated in prior versions; these will now show a FutureWarning rather than a DeprecationWarning (GH19003)
• IntervalIndex and IntervalDtype no longer support categorical, object, and string subtypes
(GH19016)
• IntervalDtype now returns True when compared against 'interval' regardless of subtype, and
IntervalDtype.name now returns 'interval' regardless of subtype (GH18980)
• KeyError now raises instead of ValueError in DataFrame.drop(), Panel.drop(), Series.drop(), and Index.drop() when dropping a non-existent element in an axis with duplicates (GH19186)
• Series.to_csv() now accepts a compression argument that works in the same way as the
compression argument in DataFrame.to_csv() (GH18958)
• Set operations (union, difference, ...) on IntervalIndex with incompatible index types will now raise a
TypeError rather than a ValueError (GH19329)
• DateOffset objects render more simply, e.g. <DateOffset: days=1> instead of <DateOffset:
kwds={'days': 1}> (GH19403)
• Categorical.fillna now validates its value and method keyword arguments. It now raises when both
or none are specified, matching the behavior of Series.fillna() (GH19682)
• pd.to_datetime('today') now returns a datetime, consistent with pd.Timestamp('today'); pre-
viously pd.to_datetime('today') returned a .normalized() datetime (GH19935)
• Series.str.replace() now takes an optional regex keyword which, when set to False, uses literal
string replacement rather than regex replacement (GH16808)
• DatetimeIndex.strftime() and PeriodIndex.strftime() now return an Index instead of a
numpy array to be consistent with similar accessors (GH20127)
• Constructing a Series from a list of length 1 no longer broadcasts this list when a longer index is specified
(GH19714, GH20391).
• DataFrame.to_dict() with orient='index' no longer casts int columns to float for a DataFrame
with only int and float columns (GH18580)
• A user-defined function that is passed to Series.rolling().aggregate(), DataFrame.
rolling().aggregate(), or its expanding cousins, will now always be passed a Series, rather
than a np.array; .apply() only has the raw keyword, see here. This is consistent with the signatures of
.aggregate() across pandas (GH20584)
• Rolling and Expanding types raise NotImplementedError upon iteration (GH11704).
1.5.3 Deprecations
• Warnings against the obsolete usage Categorical(codes, categories), which were emitted for in-
stance when the first two arguments to Categorical() had different dtypes, and recommended the use of
Categorical.from_codes, have now been removed (GH8074)
• The levels and labels attributes of a MultiIndex can no longer be set directly (GH4039).
• pd.tseries.util.pivot_annual has been removed (deprecated since v0.19). Use pivot_table
instead (GH18370)
• pd.tseries.util.isleapyear has been removed (deprecated since v0.19). Use .is_leap_year
property in Datetime-likes instead (GH18370)
• pd.ordered_merge has been removed (deprecated since v0.19). Use pd.merge_ordered instead
(GH18459)
• The SparseList class has been removed (GH14007)
• The pandas.io.wb and pandas.io.data stub modules have been removed (GH13735)
• Categorical.from_array has been removed (GH13854)
• The freq and how parameters have been removed from the rolling/expanding/ewm methods of
DataFrame and Series (deprecated since v0.18). Instead, resample before calling the methods. (GH18601
& GH18668)
• DatetimeIndex.to_datetime, Timestamp.to_datetime, PeriodIndex.to_datetime, and
Index.to_datetime have been removed (GH8254, GH14096, GH14113)
• read_csv() has dropped the skip_footer parameter (GH13386)
• read_csv() has dropped the as_recarray parameter (GH13373)
• read_csv() has dropped the buffer_lines parameter (GH13360)
Thanks to all of the contributors who participated in the Pandas Documentation Sprint, which took place on March
10th. We had about 500 participants from over 30 locations across the world. You should notice that many of the API
docstrings have greatly improved.
There were too many simultaneous contributions to include a release note for each improvement, but this GitHub
search should give you an idea of how many docstrings were improved.
Special thanks to Marc Garcia for organizing the sprint. For more information, read the NumFOCUS blogpost recap-
ping the sprint.
• Changed spelling of “numpy” to “NumPy”, and “python” to “Python”. (GH19017)
• Consistency when introducing code samples, using either colon or period. Rewrote some sentences for greater
clarity, added more dynamic references to functions, methods and classes. (GH18941, GH18948, GH18973,
GH19017)
• Added a reference to DataFrame.assign() in the concatenate section of the merging documentation
(GH18665)
1.5.7.1 Categorical
Warning: A class of bugs was introduced in pandas 0.21 with CategoricalDtype that affects the correctness
of operations like merge, concat, and indexing when comparing multiple unordered Categorical arrays
that have the same categories, but in a different order. We highly recommend upgrading or manually aligning your
categories before doing these operations.
• Bug in Categorical.equals returning the wrong result when comparing two unordered Categorical
arrays with the same categories, but in a different order (GH16603)
• Bug in pandas.api.types.union_categoricals() returning the wrong result for unordered
categoricals with the categories in a different order. This affected pandas.concat() with Categorical data
(GH19096).
• Bug in pandas.merge() returning the wrong result when joining on an unordered Categorical that had
the same categories but in a different order (GH19551)
• Bug in CategoricalIndex.get_indexer() returning the wrong result when target was an un-
ordered Categorical that had the same categories as self but in a different order (GH19551)
• Bug in Index.astype() with a categorical dtype where the resultant index is not converted to a
CategoricalIndex for all types of index (GH18630)
• Bug in Series.astype() and Categorical.astype() where an existing categorical data does not get
updated (GH10696, GH18593)
• Bug in Series.str.split() with expand=True incorrectly raising an IndexError on empty strings
(GH20002).
• Bug in Index constructor with dtype=CategoricalDtype(...) where categories and ordered
are not maintained (GH19032)
• Bug in Series constructor with scalar and dtype=CategoricalDtype(...) where categories and
ordered are not maintained (GH19565)
• Bug in Categorical.__iter__ not converting to Python types (GH19909)
• Bug in pandas.factorize() returning the unique codes for the uniques. This now returns a
Categorical with the same dtype as the input (GH19721)
• Bug in pandas.factorize() including an item for missing values in the uniques return value
(GH19721)
• Bug in Series.take() with categorical data interpreting -1 in indices as missing value markers, rather than
the last element of the Series (GH20664)
1.5.7.2 Datetimelike
1.5.7.3 Timedelta
• Bug in Timedelta.__mul__() where multiplying by NaT returned NaT instead of raising a TypeError
(GH19819)
• Bug in Series with dtype='timedelta64[ns]' where addition or subtraction of TimedeltaIndex
had results cast to dtype='int64' (GH17250)
• Bug in Series with dtype='timedelta64[ns]' where addition or subtraction of TimedeltaIndex
could return a Series with an incorrect name (GH19043)
• Bug in Timedelta.__floordiv__() and Timedelta.__rfloordiv__() dividing by many incom-
patible numpy objects was incorrectly allowed (GH18846)
• Bug where dividing a scalar timedelta-like object with TimedeltaIndex performed the reciprocal operation
(GH19125)
1.5.7.4 Timezones
• Bug in creating a Series from an array that contains both tz-naive and tz-aware values will result in a Series
whose dtype is tz-aware instead of object (GH16406)
• Bug in comparison of timezone-aware DatetimeIndex against NaT incorrectly raising TypeError
(GH19276)
• Bug in DatetimeIndex.astype() when converting between timezone aware dtypes, and converting from
timezone aware to naive (GH18951)
• Bug in comparing DatetimeIndex, which failed to raise TypeError when attempting to compare
timezone-aware and timezone-naive datetimelike objects (GH18162)
• Bug in localization of a naive, datetime string in a Series constructor with a datetime64[ns, tz] dtype
(GH174151)
• Timestamp.replace() will now handle Daylight Savings transitions gracefully (GH18319)
• Bug in tz-aware DatetimeIndex where addition/subtraction with a TimedeltaIndex or array with
dtype='timedelta64[ns]' was incorrect (GH17558)
• Bug in DatetimeIndex.insert() where inserting NaT into a timezone-aware index incorrectly raised
(GH16357)
• Bug in DataFrame constructor, where a tz-aware DatetimeIndex and a given column name will result in an
empty DataFrame (GH19157)
• Bug in Timestamp.tz_localize() where localizing a timestamp near the minimum or maximum valid
values could overflow and return a timestamp with an incorrect nanosecond value (GH12677)
• Bug when iterating over DatetimeIndex that was localized with fixed timezone offset that rounded nanosec-
ond precision to microseconds (GH19603)
• Bug in DataFrame.diff() that raised an IndexError with tz-aware values (GH18578)
• Bug in melt() that converted tz-aware dtypes to tz-naive (GH15785)
1.5.7.5 Offsets
• Bug in WeekOfMonth and Week where addition and subtraction did not roll correctly (GH18510, GH18672,
GH18864)
• Bug in WeekOfMonth and LastWeekOfMonth where default keyword arguments for constructor raised
ValueError (GH19142)
• Bug in FY5253Quarter, LastWeekOfMonth where rollback and rollforward behavior was inconsistent
with addition and subtraction behavior (GH18854)
• Bug in FY5253 where datetime addition and subtraction incremented incorrectly for dates on the year-end
but not normalized to midnight (GH18854)
• Bug in FY5253 where date offsets could incorrectly raise an AssertionError in arithmetic operations
(GH14774)
1.5.7.6 Numeric
• Bug in Series constructor with an int or float list where specifying dtype=str, dtype='str' or
dtype='U' failed to convert the data elements to strings (GH16605)
• Bug in Index multiplication and division methods where operating with a Series would return an Index
object instead of a Series object (GH19042)
• Bug in the DataFrame constructor in which data containing very large positive or very large negative numbers
was causing OverflowError (GH18584)
• Bug in Index constructor with dtype='uint64' where int-like floats were not coerced to UInt64Index
(GH18400)
• Bug in DataFrame flex arithmetic (e.g. df.add(other, fill_value=foo)) with a fill_value
other than None failed to raise NotImplementedError in corner cases where either the frame or other
has length zero (GH19522)
• Multiplication and division of numeric-dtyped Index objects with timedelta-like scalars returns
TimedeltaIndex instead of raising TypeError (GH19333)
• Bug where NaN was returned instead of 0 by Series.pct_change() and DataFrame.pct_change()
when fill_method is not None (GH19873)
1.5.7.7 Strings
• Bug in Series.str.get() with a dictionary in the values and the index not in the keys, raising KeyError
(GH20671)
1.5.7.8 Indexing
• Bug in indexing a datetimelike Index that raised ValueError instead of IndexError (GH18386).
• Index.to_series() now accepts index and name kwargs (GH18699)
• DatetimeIndex.to_series() now accepts index and name kwargs (GH18699)
• Bug where indexing a non-scalar value from a Series having a non-unique Index would return the value flattened (GH17610)
• Bug in indexing with iterator containing only missing keys, which raised no error (GH20748)
• Fixed inconsistency in .ix between list and scalar keys when the index has integer dtype and does not include
the desired keys (GH20753)
• Bug in __setitem__ when indexing a DataFrame with a 2-d boolean ndarray (GH18582)
• Bug in str.extractall where, when there were no matches, an empty Index was returned instead of the appropriate MultiIndex (GH19034)
• Bug in IntervalIndex where empty and purely NA data was constructed inconsistently depending on the
construction method (GH18421)
• Bug in IntervalIndex.symmetric_difference() where the symmetric difference with a non-
IntervalIndex did not raise (GH18475)
• Bug in IntervalIndex where set operations that returned an empty IntervalIndex had the wrong dtype
(GH19101)
• Bug in DataFrame.drop_duplicates() where no KeyError is raised when passing in columns that
don’t exist on the DataFrame (GH19726)
• Bug in Index subclasses constructors that ignore unexpected keyword arguments (GH19348)
• Bug in Index.difference() when taking difference of an Index with itself (GH20040)
• Bug in DataFrame.first_valid_index() and DataFrame.last_valid_index() in presence
of entire rows of NaNs in the middle of values (GH20499).
• Bug in IntervalIndex where some indexing operations were not supported for overlapping or non-
monotonic uint64 data (GH20636)
• Bug in Series.is_unique where extraneous output in stderr is shown if Series contains objects with
__ne__ defined (GH20661)
• Bug in .loc assignment with a single-element list-like incorrectly assigns as a list (GH19474)
• Bug in partial string indexing on a Series/DataFrame with a monotonic decreasing DatetimeIndex
(GH19362)
• Bug in performing in-place operations on a DataFrame with a duplicate Index (GH17105)
• Bug in IntervalIndex.get_loc() and IntervalIndex.get_indexer() when used with an
IntervalIndex containing a single interval (GH17284, GH20921)
• Bug in .loc with a uint64 indexer (GH20722)
1.5.7.9 MultiIndex
• Bug in MultiIndex.__contains__() where non-tuple keys would return True even if they had been
dropped (GH19027)
• Bug in MultiIndex.set_labels() which would cause casting (and potentially clipping) of the new labels
if the level argument is not 0 or a list like [0, 1, ...] (GH19057)
• Bug in MultiIndex.get_level_values() which would return an invalid index on level of ints with
missing values (GH17924)
• Bug in MultiIndex.unique() when called on empty MultiIndex (GH20568)
• Bug in MultiIndex.unique() which would not preserve level names (GH20570)
• Bug in MultiIndex.remove_unused_levels() which would fill nan values (GH18417)
• Bug in MultiIndex.from_tuples() which would fail to take zipped tuples in python3 (GH18434)
• Bug in MultiIndex.get_loc() which would fail to automatically cast values between float and int
(GH18818, GH15994)
• Bug in MultiIndex.get_loc() which would cast boolean to integer labels (GH19086)
• Bug in MultiIndex.get_loc() which would fail to locate keys containing NaN (GH18485)
• Bug in MultiIndex.get_loc() in large MultiIndex, would fail when levels had different dtypes
(GH18520)
• Bug in indexing where nested indexers having only numpy arrays are handled incorrectly (GH19686)
1.5.7.10 I/O
• read_html() now rewinds seekable IO objects after parse failure, before attempting to parse with a new
parser. If a parser errors and the object is non-seekable, an informative error is raised suggesting the use of a
different parser (GH17975)
• DataFrame.to_html() now has an option to add an id to the leading <table> tag (GH8496)
• Bug in read_msgpack() when a non-existent file is passed in Python 2 (GH15296)
• Bug in read_csv() where a MultiIndex with duplicate columns was not being mangled appropriately
(GH18062)
• Bug in read_csv() where missing values were not being handled properly when
keep_default_na=False with dictionary na_values (GH19227)
• Bug in read_csv() causing heap corruption on 32-bit, big-endian architectures (GH20785)
• Bug in read_sas() where a file with 0 variables gave an AttributeError incorrectly. Now it gives an
EmptyDataError (GH18184)
• Bug in DataFrame.to_latex() where pairs of braces meant to serve as invisible placeholders were es-
caped (GH18667)
• Bug in DataFrame.to_latex() where a NaN in a MultiIndex would cause an IndexError or incor-
rect output (GH14249)
• Bug in DataFrame.to_latex() where a non-string index-level name would result in an
AttributeError (GH19981)
• Bug in DataFrame.to_latex() where the combination of an index name and the index_names=False
option would result in incorrect output (GH18326)
• Bug in DataFrame.to_latex() where a MultiIndex with an empty string as its name would result in
incorrect output (GH18669)
• Bug in DataFrame.to_latex() where missing space characters caused wrong escaping and produced invalid LaTeX in some cases (GH20859)
• Bug in read_json() where large numeric values were causing an OverflowError (GH18842)
• Bug in DataFrame.to_parquet() where an exception was raised if the write destination is S3 (GH19134)
• Interval now supported in DataFrame.to_excel() for all Excel file types (GH19242)
• Timedelta now supported in DataFrame.to_excel() for all Excel file types (GH19242, GH9155,
GH19900)
• Bug in pandas.io.stata.StataReader.value_labels() raising an AttributeError when
called on very old files. Now returns an empty dict (GH19417)
• Bug in read_pickle() when unpickling objects with TimedeltaIndex or Float64Index created
with pandas prior to version 0.20 (GH19939)
• Bug in pandas.io.json.json_normalize() where subrecords are not properly normalized if any sub-
records values are NoneType (GH20030)
• Bug in usecols parameter in read_csv() where error is not raised correctly when passing a string.
(GH20529)
• Bug in HDFStore.keys() when reading a file with a softlink causes exception (GH20523)
• Bug in HDFStore.select_column() where a key which is not a valid store raised an AttributeError
instead of a KeyError (GH17912)
1.5.7.11 Plotting
• Better error message when attempting to plot but matplotlib is not installed (GH19810).
• DataFrame.plot() now raises a ValueError when the x or y argument is improperly formed (GH18671)
• Bug in DataFrame.plot() when x and y arguments given as positions caused incorrect referenced columns
for line, bar and area plots (GH20056)
• Bug in formatting tick labels with datetime.time() and fractional seconds (GH18478).
• Series.plot.kde() has exposed the args ind and bw_method in the docstring (GH18461). The argu-
ment ind may now also be an integer (number of sample points).
• DataFrame.plot() now supports multiple columns to the y argument (GH19699)
1.5.7.12 Groupby/Resample/Rolling
• Bug when grouping by a single column and aggregating with a class like list or tuple (GH18079)
• Fixed regression in DataFrame.groupby() which would not emit an error when called with a tuple key not
in the index (GH18798)
• Bug in DataFrame.resample() which silently ignored unsupported (or mistyped) options for label,
closed and convention (GH19303)
• Bug in DataFrame.groupby() where tuples were interpreted as lists of keys rather than as keys (GH17979,
GH18249)
• Bug in DataFrame.groupby() where aggregation by first/last/min/max was causing timestamps to
lose precision (GH19526)
• Bug in DataFrame.transform() where particular aggregation functions were being incorrectly cast to
match the dtype(s) of the grouped data (GH19200)
• Bug in DataFrame.groupby() passing the on= kwarg, and subsequently using .apply() (GH17813)
• Bug in DataFrame.resample().aggregate not raising a KeyError when aggregating a non-existent
column (GH16766, GH19566)
1.5.7.13 Sparse
• Bug in which creating a SparseDataFrame from a dense Series or an unsupported type raised an uncon-
trolled exception (GH19374)
• Bug in SparseDataFrame.to_csv causing exception (GH19384)
• Bug in SparseSeries.memory_usage which caused a segfault when accessing non-sparse elements (GH19368)
• Bug in constructing a SparseArray: if data is a scalar and index is defined, it would coerce to float64 regardless of the scalar's dtype (GH19163)
1.5.7.14 Reshaping
• Bug in DataFrame.merge() in which merging using Index objects as vectors raised an Exception
(GH19038)
• Bug in DataFrame.stack(), DataFrame.unstack(), Series.unstack() which were not return-
ing subclasses (GH15563)
• Bug in timezone comparisons, manifesting as a conversion of the index to UTC in .concat() (GH18523)
• Bug in concat() where concatenating sparse and dense series returned only a SparseDataFrame instead of a DataFrame (GH18914, GH18686, and GH16874)
• Improved error message for DataFrame.merge() when there is no common merge key (GH19427)
• Bug in DataFrame.join() which does an outer instead of a left join when being called with multiple
DataFrames and some have non-unique indices (GH19624)
• Series.rename() now accepts axis as a kwarg (GH18589)
• Bug in rename() where an Index of same-length tuples was converted to a MultiIndex (GH19497)
• Comparisons between Series and Index would return a Series with an incorrect name, ignoring the
Index’s name attribute (GH19582)
• Bug in qcut() where datetime and timedelta data with NaT present raised a ValueError (GH19768)
• Bug in DataFrame.iterrows(), which would infer strings not compliant with ISO 8601 as datetimes (GH19671)
• Bug in Series constructor with Categorical where a ValueError is not raised when an index of dif-
ferent length is given (GH19342)
• Bug in DataFrame.astype() where column metadata is lost when converting to categorical or a dictionary
of dtypes (GH19920)
• Bug in cut() and qcut() where timezone information was dropped (GH19872)
• Bug in Series constructor with a dtype=str, previously raised in some cases (GH19853)
• Bug in get_dummies(), and select_dtypes(), where duplicate column names caused incorrect behav-
ior (GH20848)
• Bug in isna(), which could not handle ambiguously typed lists (GH20675)
• Bug in concat() which raises an error when concatenating TZ-aware dataframes and all-NaT dataframes
(GH12396)
• Bug in concat() which raises an error when concatenating empty TZ-aware series (GH18447)
1.5.7.15 Other
• Improved error message when attempting to use a Python keyword as an identifier in a numexpr backed query
(GH18221)
• Bug in pandas.get_option(), which raised KeyError rather than OptionError when looking up a non-existent option key in some cases (GH19789)
• Bug in testing.assert_series_equal() and testing.assert_frame_equal() for Series or
DataFrames with differing unicode data (GH20503)
This is a major release from 0.21.1 and includes a single, API-breaking change. We recommend that all users upgrade
to this version after carefully reading the release note (singular!).
Pandas 0.22.0 changes the handling of empty and all-NA sums and products. The summary is that
• The sum of an empty or all-NA Series is now 0
• The product of an empty or all-NA Series is now 1
• We’ve added a min_count parameter to .sum() and .prod() controlling the minimum number of valid
values for the result to be valid. If fewer than min_count non-NA values are present, the result is NA. The
default is 0. To return NaN, the 0.21 behavior, use min_count=1.
Some background: In pandas 0.21, we fixed a long-standing inconsistency in the return value of all-NA series de-
pending on whether or not bottleneck was installed. See Sum/Prod of all-NaN or empty Series/DataFrames is now
consistently NaN. At the same time, we changed the sum and prod of an empty Series to also be NaN.
Based on feedback, we've partially reverted those changes.
pandas 0.21.x
In [1]: pd.Series([]).sum()
Out[1]: nan
In [2]: pd.Series([np.nan]).sum()
Out[2]: nan
pandas 0.22.0
In [1]: pd.Series([]).sum()
Out[1]: 0.0
In [2]: pd.Series([np.nan]).sum()
Out[2]: 0.0
The default behavior is the same as pandas 0.20.3 with bottleneck installed. It also matches the behavior of NumPy’s
np.nansum on empty and all-NA arrays.
To have the sum of an empty series return NaN (the default behavior of pandas 0.20.3 without bottleneck, or pandas
0.21.x), use the min_count keyword.
In [3]: pd.Series([]).sum(min_count=1)
Out[3]: nan
Thanks to the skipna parameter, the .sum on an all-NA series is conceptually the same as the .sum of an empty
one with skipna=True (the default).
The min_count parameter refers to the minimum number of non-null values required for a non-NA sum or product.
Series.prod() has been updated to behave the same as Series.sum(), returning 1 instead.
In [5]: pd.Series([]).prod()
Out[5]: 1.0
In [6]: pd.Series([np.nan]).prod()
Out[6]: 1.0
In [7]: pd.Series([]).prod(min_count=1)
Out[7]: nan
These changes affect DataFrame.sum() and DataFrame.prod() as well. Finally, a few less obvious places in
pandas are affected by this change.
Grouping by a Categorical and summing now returns 0 instead of NaN for categories with no observations. The
product now returns 1 instead of NaN.
To restore the 0.21 behavior of returning NaN for unobserved groups, use min_count>=1.
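As an illustration (a minimal sketch, not the original example from this section; the category names and values are made up), grouping by a Categorical with an unobserved category now behaves as follows:

import pandas as pd

grouper = pd.Categorical(['a', 'a'], categories=['a', 'b'])
s = pd.Series([1, 2])

s.groupby(grouper).sum()              # category 'b' is now 0 (was NaN in 0.21)
s.groupby(grouper).sum(min_count=1)   # category 'b' stays NaN, matching the 0.21 behavior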
1.6.1.3 Resample
The sum and product of all-NA bins has changed from NaN to 0 for sum and 1 for product.
pandas 0.21.x
In [12]: s.resample('2d').sum()
Out[12]:
2017-01-01 2.0
2017-01-03 NaN
Freq: 2D, dtype: float64
pandas 0.22.0
In [12]: s.resample('2d').sum()
Out[12]:
2017-01-01 2.0
2017-01-03 0.0
dtype: float64
In [13]: s.resample('2d').sum(min_count=1)
Out[13]:
2017-01-01 2.0
2017-01-03 NaN
dtype: float64
In particular, upsampling and taking the sum or product is affected, as upsampling introduces missing values even if
the original series was entirely valid.
Once again, the min_count keyword is available to restore the 0.21 behavior.
In [16]: pd.Series([1, 2], index=idx).resample("12H").sum(min_count=1)
Out[16]:
2017-01-01 00:00:00 1.0
2017-01-01 12:00:00 NaN
2017-01-02 00:00:00 2.0
Freq: 12H, dtype: float64
Rolling and expanding already have a min_periods keyword that behaves similarly to min_count. The only case that changes is when doing a rolling or expanding sum with min_periods=0. Previously this returned NaN when fewer than min_periods non-NA values were in the window. Now it returns 0.
The default behavior of min_periods=None, implying that min_periods equals the window size, is unchanged.
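A minimal sketch of the rolling case described above (the values are illustrative):

import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan])
s.rolling(2, min_periods=0).sum()   # 0.22: [0.0, 0.0]; 0.21 returned [NaN, NaN]
s.rolling(2, min_periods=1).sum()   # unchanged: [NaN, NaN] in both versions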
1.6.2 Compatibility
If you maintain a library that should work across pandas versions, it may be easiest to exclude pandas 0.21 from your
requirements. Otherwise, all your sum() calls would need to check if the Series is empty before summing.
With setuptools, in your setup.py use:
install_requires=['pandas!=0.21.*', ...]
Note that the inconsistency in the return value for all-NA series is still there for pandas 0.20.3 and earlier. Avoiding
pandas 0.21 will only help with the empty case.
This is a minor bug-fix release in the 0.21.x series and includes some small regression fixes, bug fixes and performance
improvements. We recommend that all users upgrade to this version.
Highlights include:
• Temporarily restore matplotlib datetime plotting functionality. This should resolve issues for users who implic-
itly relied on pandas to plot datetimes with matplotlib. See here.
• Improvements to the Parquet IO functions introduced in 0.21.0. See here.
Pandas implements some matplotlib converters for nicely formatting the axis labels on plots with datetime or
Period values. Prior to pandas 0.21.0, these were implicitly registered with matplotlib, as a side effect of import
pandas.
In pandas 0.21.0, we required users to explicitly register the converter. This caused problems for some users who
relied on those converters being present for regular matplotlib.pyplot plotting methods, so we’re temporarily
reverting that change; pandas 0.21.1 again registers the converters on import, just like before 0.21.0.
• DataFrame.to_parquet() will now write non-default indexes when the underlying engine supports it.
The indexes will be preserved when reading back in with read_parquet() (GH18581).
• read_parquet() now allows to specify the columns to read from a parquet file (GH18154)
• read_parquet() now allows to specify kwargs which are passed to the respective engine (GH18216)
1.7.3 Deprecations
1.7.5.1 Conversion
• Bug in TimedeltaIndex subtraction could incorrectly overflow when NaT is present (GH17791)
• Bug in DatetimeIndex subtracting datetimelike from DatetimeIndex could fail to overflow (GH18020)
• Bug in IntervalIndex.copy() when copying an IntervalIndex with non-default closed (GH18339)
• Bug in DataFrame.to_dict() where columns of datetime that are tz-aware were not converted to required
arrays when used with orient='records', raising TypeError (GH18372)
• Bug in DatetimeIndex and date_range() where mismatching tz-aware start and end timezones would not raise an error if end.tzinfo is None (GH18431)
• Bug in Series.fillna() which raised when passed a long integer on Python 2 (GH18159).
1.7.5.2 Indexing
1.7.5.3 I/O
• Bug in pandas.io.stata.StataReader not converting date/time columns with display formatting (GH17990). Previously, columns with display formatting were normally left as ordinal numbers and not converted to datetime objects.
• Bug in read_csv() when reading a compressed UTF-16 encoded file (GH18071)
• Bug in read_csv() for handling null values in index columns when specifying na_filter=False
(GH5239)
• Bug in read_csv() when reading numeric category fields with high cardinality (GH18186)
• Bug in DataFrame.to_csv() when the table had MultiIndex columns, and a list of strings was passed
in for header (GH5539)
• Bug in parsing integer datetime-like columns with specified format in read_sql (GH17855).
• Bug in DataFrame.to_msgpack() when serializing data of the numpy.bool_ datatype (GH18390)
• Bug in read_json() not decoding when reading line delimited JSON from S3 (GH17200)
• Bug in pandas.io.json.json_normalize() to avoid modification of meta (GH18610)
• Bug in to_latex() where repeated multi-index values were not printed even though a higher level index
differed from the previous row (GH14484)
• Bug when reading NaN-only categorical columns in HDFStore (GH18413)
• Bug in DataFrame.to_latex() with longtable=True where a latex multicolumn always spanned over
three columns (GH17959)
1.7.5.4 Plotting
1.7.5.5 Groupby/Resample/Rolling
1.7.5.6 Reshaping
• Error message in pd.merge_asof() for key datatype mismatch now includes datatype of left and right key
(GH18068)
• Bug in pd.concat when empty and non-empty DataFrames or Series are concatenated (GH18178, GH18187)
• Bug in DataFrame.filter(...) when unicode is passed as a condition in Python 2 (GH13101)
• Bug when merging empty DataFrames when np.seterr(divide='raise') is set (GH17776)
1.7.5.7 Numeric
• Bug in pd.Series.rolling.skew() and rolling.kurt() with all equal values having a floating point issue (GH18044)
1.7.5.8 Categorical
1.7.5.9 String
• Series.str.split() will now propagate NaN values across all expanded columns instead of None
(GH18450)
This is a major release from 0.20.3 and includes a number of API changes, deprecations, new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
• Integration with Apache Parquet, including a new top-level read_parquet() function and DataFrame.
to_parquet() method, see here.
• New user-facing pandas.api.types.CategoricalDtype for specifying categoricals independent of
the data, see here.
• The behavior of sum and prod on all-NaN Series/DataFrames is now consistent and no longer depends on
whether bottleneck is installed, and sum and prod on empty Series now return NaN instead of 0, see here.
• Compatibility fixes for pypy, see here.
• Additions to the drop, reindex and rename API to make them more consistent, see here.
• Addition of the new methods DataFrame.infer_objects (see here) and GroupBy.pipe (see here).
• Indexing with a list of labels, where one or more of the labels is missing, is deprecated and will raise a KeyError
in a future version, see here.
Check the API Changes and deprecations before updating.
• New features
– Integration with Apache Parquet file format
– infer_objects type conversion
– Improved warnings when attempting to create columns
– drop now also accepts index/columns keywords
– rename, reindex now also accept axis keyword
– CategoricalDtype for specifying categoricals
– GroupBy objects now have a pipe method
– Categorical.rename_categories accepts a dict-like
– Other Enhancements
• Backwards incompatible API changes
– Dependencies have increased minimum versions
– Sum/Prod of all-NaN or empty Series/DataFrames is now consistently NaN
– Indexing with a list with missing labels is Deprecated
– NA naming Changes
– Iteration of Series/Index will now return Python scalars
– Indexing with a Boolean Index
– PeriodIndex resampling
– Improved error handling during item assignment in pd.eval
– Dtype Conversions
– MultiIndex Constructor with a Single Level
– UTC Localization with Series
– Consistency of Range Functions
– No Automatic Matplotlib Converters
– Other API Changes
• Deprecations
– Series.select and DataFrame.select
– Series.argmax and Series.argmin
• Removal of prior version deprecations/changes
• Performance Improvements
• Documentation Changes
• Bug Fixes
– Conversion
– Indexing
– I/O
– Plotting
– Groupby/Resample/Rolling
– Sparse
– Reshaping
– Numeric
– Categorical
– PyPy
– Other
Integration with Apache Parquet, including a new top-level read_parquet() and DataFrame.to_parquet()
method, see here (GH15838, GH17438).
Apache Parquet provides a cross-language, binary file format for reading and writing data frames efficiently. Parquet is
designed to faithfully serialize and de-serialize DataFrame s, supporting all of the pandas dtypes, including extension
dtypes such as datetime with timezones.
This functionality depends on either the pyarrow or fastparquet library. For more details, see the IO docs on
Parquet.
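A minimal round-trip sketch (assuming pyarrow or fastparquet is installed; the file name is illustrative):

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': ['x', 'y', 'z']})
df.to_parquet('example.parquet')                  # writes with whichever engine is available
roundtripped = pd.read_parquet('example.parquet')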
In [2]: df.dtypes
Out[2]:
A int64
B object
C object
In [3]: df.infer_objects().dtypes
Out[3]:
A int64
B int64
C object
dtype: object
Note that column 'C' was not converted - only scalar numeric types will be converted to a new type.
Other types of conversion should be accomplished using the to_numeric() function (or to_datetime(),
to_timedelta()).
In [4]: df = df.infer_objects()
In [6]: df.dtypes
Out[6]:
A int64
B int64
C int64
dtype: object
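The jump from object to int64 for column 'C' between the two dtype listings above comes from an explicit conversion along these lines (a sketch):

df['C'] = pd.to_numeric(df['C'], errors='coerce')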
New users are often puzzled by the relationship between column operations and attribute access on DataFrame
instances (GH7175). One specific instance of this confusion is attempting to create a new column by setting an
attribute on the DataFrame. This does not raise any obvious exceptions, but also does not create a new column:
In[3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0
Setting a list-like data structure into a new attribute now raises a UserWarning about the potential for unexpected
behavior. See Attribute Access.
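A minimal sketch of the pattern that now warns (the column name is illustrative):

import pandas as pd

df = pd.DataFrame({'one': [1., 2., 3.]})
df.two = [4, 5, 6]   # issues a UserWarning; this sets an attribute, not a column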
The drop() method has gained index/columns keywords as an alternative to specifying the axis. This is similar
to the behavior of reindex (GH12392).
For example:
In [7]: df = pd.DataFrame(np.arange(8).reshape(2,4),
...: columns=['A', 'B', 'C', 'D'])
...:
In [8]: df
Out[8]:
A B C D
0 0 1 2 3
1 4 5 6 7
A D
0 0 3
1 4 7
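For instance, the two-column frame above can be produced with either spelling (a minimal sketch):

df.drop(['B', 'C'], axis=1)    # the pre-existing axis-based spelling
df.drop(columns=['B', 'C'])    # the new keyword-based spelling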
The DataFrame.rename() and DataFrame.reindex() methods have gained the axis keyword to specify
the axis to target with the operation (GH12392).
Here’s rename:
And reindex:
A B C
0 1.0 4.0 NaN
1 2.0 5.0 NaN
3 NaN NaN NaN
We highly encourage using named arguments to avoid confusion when using either style.
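A minimal sketch of the two spellings, using the small df defined above (the arguments are illustrative):

df.rename(str.lower, axis='columns')          # same as df.rename(columns=str.lower)
df.reindex(['A', 'B', 'C'], axis='columns')   # same as df.reindex(columns=['A', 'B', 'C'])
df.reindex([0, 1, 3], axis='index')           # same as df.reindex(index=[0, 1, 3])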
pandas.api.types.CategoricalDtype has been added to the public API and expanded to include the
categories and ordered attributes. A CategoricalDtype can be used to specify the set of categories and
orderedness of an array, independent of the data. This can be useful for example, when converting string data to a
Categorical (GH14711, GH15078, GH16015, GH17643):
In [21]: s.astype(dtype)
Out[21]:
0 a
1 b
2 c
3 a
dtype: category
Categories (4, object): [a < b < c < d]
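The output above comes from a construction along these lines (a sketch; the categories and values are illustrative):

import pandas as pd
from pandas.api.types import CategoricalDtype

dtype = CategoricalDtype(categories=['a', 'b', 'c', 'd'], ordered=True)
s = pd.Series(['a', 'b', 'c', 'a'])
s.astype(dtype)   # a Categorical with categories a < b < c < d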
One place that deserves special mention is in read_csv(). Previously, with dtype={'col': 'category'},
the returned values and categories would always be strings.
GroupBy objects now have a pipe method, similar to the one on DataFrame and Series, that allows functions that take a GroupBy to be composed in a clean, readable syntax. (GH17871)
For a concrete example of combining .groupby and .pipe, imagine having a DataFrame with columns for stores,
products, revenue and sold quantity. We’d like to do a groupwise calculation of prices (i.e. revenue/quantity) per store
and per product. We could do this in a multi-step operation, but expressing it in terms of piping can make the code
more readable.
First we set the data:
In [27]: n = 1000
In [29]: df.head(2)
Out[29]:
Store Product Revenue Quantity
0 Store_1 Product_3 54.28 3
1 Store_2 Product_2 30.91 1
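A piped version of the per-store, per-product price calculation could look like this sketch (the column names are those of the frame above; the unstack/round steps are only for presentation):

(df.groupby(['Store', 'Product'])
   .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
   .unstack().round(2))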
rename_categories() now accepts a dict-like argument for new_categories. The previous categories are
looked up in the dictionary’s keys and replaced if found. The behavior of missing and extra keys is the same as in
DataFrame.rename().
Warning: To assist with upgrading pandas, rename_categories treats Series as list-like. Typically, Series
are considered to be dict-like (e.g. in .rename, .map). In a future version of pandas rename_categories
will change to treat them as dict-like. Follow the warning message’s recommendations for writing future-proof
code.
In [33]: c.rename_categories(pd.Series([0, 1], index=['a', 'c']))
FutureWarning: Treating Series 'new_categories' as a list-like and using the values.
In a future version, 'rename_categories' will treat Series like a dictionary.
For dict-like, use 'new_categories.to_dict()'
For list-like, use 'new_categories.values'.
Out[33]:
[0, 0, 1]
Categories (2, int64): [0, 1]
New keywords
• Added a skipna parameter to infer_dtype() to support type inference in the presence of missing values
(GH17059).
• Series.to_dict() and DataFrame.to_dict() now support an into keyword which allows you to
specify the collections.Mapping subclass that you would like returned. The default is dict, which is
backwards compatible. (GH16122)
• Series.set_axis() and DataFrame.set_axis() now support the inplace parameter. (GH14636)
Various enhancements
• read_excel() raises ImportError with a better message if xlrd is not installed. (GH17613)
• DataFrame.assign() will preserve the original order of **kwargs for Python 3.6+ users instead of
sorting the column names. (GH14207)
• Series.reindex(), DataFrame.reindex(), Index.get_indexer() now support list-like argu-
ment for tolerance. (GH17367)
We have updated our minimum supported versions of dependencies (GH15206, GH15543, GH15214). If installed, we
now require:
Note: The changes described here have been partially reverted. See the v0.22.0 Whatsnew for more.
The behavior of sum and prod on all-NaN Series/DataFrames no longer depends on whether bottleneck is installed,
and return value of sum and prod on an empty Series has changed (GH9422, GH15507).
Calling sum or prod on an empty or all-NaN Series, or columns of a DataFrame, will result in NaN. See the
docs.
In [33]: s = Series([np.nan])
In [2]: s.sum()
Out[2]: np.nan
In [2]: s.sum()
Out[2]: 0.0
In [34]: s.sum()
Out[34]: 0.0
Note that this also changes the sum of an empty Series. Previously this always returned 0 regardless of a bottleneck installation:
In [1]: pd.Series([]).sum()
Out[1]: 0
but for consistency with the all-NaN case, this was changed to return NaN as well:
In [35]: pd.Series([]).sum()
Out[35]: 0.0
Previously, selecting with a list of labels, where one or more labels were missing would always succeed, returning NaN
for missing labels. This will now show a FutureWarning. In the future this will raise a KeyError (GH15747).
This warning will trigger on a DataFrame or a Series for using .loc[] or [[]] when passing a list-of-labels
with at least 1 missing label. See the deprecation docs.
In [37]: s
Out[37]:
0 1
1 2
2 3
dtype: int64
Previous Behavior
Current Behavior
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
The idiomatic way to achieve selecting potentially not-found elements is via .reindex()
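For the Series shown above, a sketch of the reindex-based spelling:

s.reindex([1, 2, 3])   # labels that are not present (here 3) simply come back as NaN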
In order to promote more consistency among the pandas API, we have added additional top-level functions isna()
and notna() that are aliases for isnull() and notnull(). The naming scheme is now more consistent with
methods like .dropna() and .fillna(). Furthermore, in all cases where .isnull() and .notnull() methods are defined, they also have methods named .isna() and .notna(); these are included for the classes Categorical, Index, Series, and DataFrame. (GH15001).
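A quick sketch of the new aliases:

import numpy as np
import pandas as pd

pd.isna(np.nan)                  # True, same as pd.isnull(np.nan)
pd.Series([1, np.nan]).notna()   # [True, False], same as .notnull()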
The configuration option pd.options.mode.use_inf_as_null is deprecated, and pd.options.mode.
use_inf_as_na is added as a replacement.
Previously, when using certain iteration methods for a Series with dtype int or float, you would receive a
numpy scalar, e.g. a np.int64, rather than a Python int. Issue (GH10904) corrected this for Series.tolist()
and list(Series). This change makes all iteration methods consistent, in particular, for __iter__() and .map(); note that this only affects int/float dtypes. (GH13236, GH13258, GH14216).
In [41]: s
Out[41]:
0 1
1 2
2 3
dtype: int64
Previously:
In [2]: type(list(s)[0])
Out[2]: numpy.int64
New Behaviour:
In [42]: type(list(s)[0])
Out[42]: int
Furthermore this will now correctly box the results of iteration for DataFrame.to_dict() as well.
In [44]: df = pd.DataFrame(d)
Previously:
In [8]: type(df.to_dict()['a'][0])
Out[8]: numpy.int64
New Behaviour:
In [45]: type(df.to_dict()['a'][0])
Out[45]: int
Previously, when passing a boolean Index to .loc, if the index of the Series/DataFrame had boolean labels you would get a label-based selection, potentially duplicating result labels, rather than a boolean indexing selection (where True selects elements). This was inconsistent with how a boolean numpy array would be indexed. The new behavior is to act like a boolean numpy array indexer. (GH17738)
Previous Behavior:
In [46]: s = pd.Series([1, 2, 3], index=[False, True, False])
In [47]: s
Out[47]:
False 1
True 2
False 3
dtype: int64
Current Behavior
In [48]: s.loc[pd.Index([True, False, True])]
Out[48]:
False 1
False 3
dtype: int64
Furthermore, previously if you had an index that was non-numeric (e.g. strings), then a boolean Index would raise a
KeyError. This will now be treated as a boolean indexer.
Previous Behavior:
In [49]: s = pd.Series([1,2,3], index=['a', 'b', 'c'])
In [50]: s
Out[50]:
a 1
b 2
c 3
dtype: int64
Current Behavior
In [51]: s.loc[pd.Index([True, False, True])]
Out[51]:
a 1
c 3
dtype: int64
In [4]: resampled
Out[4]:
2017-03-31 1.0
2017-09-30 5.5
2018-03-31 10.0
Freq: 2Q-DEC, dtype: float64
In [5]: resampled.index
Out[5]: DatetimeIndex(['2017-03-31', '2017-09-30', '2018-03-31'], dtype='datetime64[ns]', freq='2Q-DEC')
New Behavior:
In [52]: pi = pd.period_range('2017-01', periods=12, freq='M')
In [55]: resampled
Out[55]:
2017Q1 2.5
2017Q3 8.5
Freq: 2Q-DEC, dtype: float64
In [56]: resampled.index
Out[56]: PeriodIndex(['2017Q1', '2017Q3'], dtype='period[2Q-DEC]', freq='2Q-DEC')
Upsampling and calling .ohlc() previously returned a Series, basically identical to calling .asfreq(). OHLC
upsampling now returns a DataFrame with columns open, high, low and close (GH13083). This is consistent
with downsampling and DatetimeIndex behavior.
Previous Behavior:
In [3]: s.resample('H').ohlc()
Out[3]:
2000-01-01 00:00 0.0
...
2000-01-10 23:00 NaN
Freq: H, Length: 240, dtype: float64
In [4]: s.resample('M').ohlc()
Out[4]:
open high low close
2000-01 0 9 0 9
New Behavior:
In [59]: s.resample('H').ohlc()
Out[59]:
open high low close
2000-01-01 00:00 0.0 0.0 0.0 0.0
2000-01-01 01:00 NaN NaN NaN NaN
2000-01-01 02:00 NaN NaN NaN NaN
2000-01-01 03:00 NaN NaN NaN NaN
2000-01-01 04:00 NaN NaN NaN NaN
2000-01-01 05:00 NaN NaN NaN NaN
2000-01-01 06:00 NaN NaN NaN NaN
... ... ... ... ...
2000-01-10 17:00 NaN NaN NaN NaN
2000-01-10 18:00 NaN NaN NaN NaN
2000-01-10 19:00 NaN NaN NaN NaN
2000-01-10 20:00 NaN NaN NaN NaN
2000-01-10 21:00 NaN NaN NaN NaN
2000-01-10 22:00 NaN NaN NaN NaN
2000-01-10 23:00 NaN NaN NaN NaN
In [60]: s.resample('M').ohlc()
eval() will now raise a ValueError when item assignment malfunctions, or inplace operations are specified, but
there is no item assignment in the expression (GH16732)
Previously, if you attempted the following expression, you would get a not very helpful error message:
This is a very long way of saying numpy arrays don’t support string-item indexing. With this change, the error message
is now this:
It also used to be possible to evaluate expressions inplace, even if there was no item assignment:
However, this input does not make much sense because the output is not being assigned to the target. Now, a
ValueError will be raised when such an input is passed in:
Previously, assignments, .where() and .fillna() with a bool assignment would coerce to the same type (e.g. int / float), or raise for datetimelikes. These will now preserve the bools with object dtype. (GH16821).
In [6]: s
Out[6]:
0 1
1 1
2 3
dtype: int64
New Behavior
In [64]: s
Out[64]:
0 1
1 True
2 3
dtype: object
Previously, an assignment to a datetimelike with a non-datetimelike would coerce the non-datetimelike item being assigned (GH14145).
In [1]: s[1] = 1
In [2]: s
Out[2]:
0 2011-01-01 00:00:00.000000000
1 1970-01-01 00:00:00.000000001
dtype: datetime64[ns]
In [66]: s[1] = 1
In [67]: s
Out[67]:
0 2011-01-01 00:00:00
1 1
dtype: object
• Inconsistent behavior in .where() with datetimelikes which would raise rather than coerce to object
(GH16402)
• Bug in assignment against int64 data with np.ndarray with float64 dtype may keep int64 dtype
(GH14001)
The MultiIndex constructors no longer squeeze a MultiIndex with all length-one levels down to a regular Index. This affects all the MultiIndex constructors. (GH17178)
Length 1 levels are no longer special-cased. They behave exactly as if you had length 2+ levels, so a MultiIndex
is always returned from all of the MultiIndex constructors:
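A minimal sketch of the new behavior:

import pandas as pd

pd.MultiIndex.from_tuples([('a',), ('b',)])   # now a MultiIndex, no longer collapsed to a regular Index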
Previously, to_datetime() did not localize datetime Series data when utc=True was passed. Now,
to_datetime() will correctly localize Series with a datetime64[ns, UTC] dtype to be consistent with
how list-like and Index data are handled. (GH6415).
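A minimal sketch of the new behavior (the dates are illustrative):

import pandas as pd

pd.to_datetime(pd.Series(['2013-01-01', '2013-01-02']), utc=True)
# the result now has dtype datetime64[ns, UTC], matching the list-like and Index cases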
Additionally, DataFrames with datetime columns that were parsed by read_sql_table() and
read_sql_query() will also be localized to UTC only if the original SQL columns were timezone aware
datetime columns.
In previous versions, there were some inconsistencies between the various range functions: date_range(),
bdate_range(), period_range(), timedelta_range(), and interval_range(). (GH17471).
One of the inconsistent behaviors occurred when the start, end and periods parameters were all specified, potentially leading to ambiguous ranges. When all three parameters were passed, interval_range ignored the periods parameter, period_range ignored the end parameter, and the other range functions raised. To promote consistency
among the range functions, and avoid potentially ambiguous ranges, interval_range and period_range will
now raise when all three parameters are passed.
Additionally, the endpoint parameter end was not included in the intervals produced by interval_range. How-
ever, all other range functions include end in their output. To promote consistency among the range functions,
interval_range will now include end as the right endpoint of the final interval, except if freq is specified in a
way which skips end.
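A sketch of the endpoint change (the numbers are illustrative):

import pandas as pd

pd.interval_range(start=0, end=4)
# IntervalIndex covering (0, 1], (1, 2], (2, 3], (3, 4] -- the end point 4 is now included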
Pandas no longer registers our date, time, datetime, datetime64, and Period converters with matplotlib when pandas is imported. Matplotlib plot methods (plt.plot, ax.plot, ...) will not nicely format the x-axis for DatetimeIndex or PeriodIndex values. You must explicitly register these converters:
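One way to do that, sketched under the assumption that the pandas.tseries.converter module shipped with the 0.21.x series is available:

from pandas.tseries import converter
converter.register()   # re-register the date/time converters with matplotlib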
Pandas built-in Series.plot and DataFrame.plot will register these converters on first-use (GH17710).
Note: This change has been temporarily reverted in pandas 0.21.1, for more details see here.
• The Categorical constructor no longer accepts a scalar for the categories keyword. (GH16022)
• Accessing a non-existent attribute on a closed HDFStore will now raise an AttributeError rather than a
ClosedFileError (GH16301)
• read_csv() now issues a UserWarning if the names parameter contains duplicates (GH17095)
• read_csv() now treats 'null' and 'n/a' strings as missing values by default (GH16471, GH16078)
• pandas.HDFStore’s string representation is now faster and less detailed. For the previous behavior, use
pandas.HDFStore.info(). (GH16503).
• Compression defaults in HDF stores now follow pytables standards. The default is no compression; if complib is missing and complevel > 0, zlib is used (GH15943)
• Index.get_indexer_non_unique() now returns a ndarray indexer rather than an Index; this is con-
sistent with Index.get_indexer() (GH16819)
• Removed the @slow decorator from pandas.util.testing, which caused issues for some downstream
packages’ test suites. Use @pytest.mark.slow instead, which achieves the same thing (GH16850)
• Moved definition of MergeError to the pandas.errors module.
1.8.3 Deprecations
• Passing a non-existent column in .to_excel(..., columns=) is deprecated and will raise a KeyError
in the future (GH17295)
• raise_on_error parameter to Series.where(), Series.mask(), DataFrame.where(),
DataFrame.mask() is deprecated, in favor of errors= (GH14968)
• Using DataFrame.rename_axis() and Series.rename_axis() to alter index or column labels is
now deprecated in favor of using .rename. rename_axis may still be used to alter the name of the index or
columns (GH17833).
• reindex_axis() has been deprecated in favor of reindex(). See here for more (GH17833).
The Series.select() and DataFrame.select() methods are deprecated in favor of using df.
loc[labels.map(crit)] (GH12401)
Out[3]:
A
bar 2
baz 3
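The output above comes from a selection along these lines (a sketch with an illustrative frame):

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3]}, index=['foo', 'bar', 'baz'])
# deprecated: df.select(lambda x: x in ['bar', 'baz'])
df.loc[df.index.map(lambda x: x in ['bar', 'baz'])]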
The behavior of Series.argmax() and Series.argmin() has been deprecated in favor of Series.idxmax() and Series.idxmin(), respectively (GH16830).
For compatibility with NumPy arrays, pd.Series implements argmax and argmin. Since pandas 0.13.0,
argmax has been an alias for pandas.Series.idxmax(), and argmin has been an alias for pandas.
Series.idxmin(). They return the label of the maximum or minimum, rather than the position.
We’ve deprecated the current behavior of Series.argmax and Series.argmin. Using either of these will emit
a FutureWarning. Use Series.idxmax() if you want the label of the maximum. Use Series.values.
argmax() if you want the position of the maximum. Likewise for the minimum. In a future release Series.
argmax and Series.argmin will return the position of the maximum or minimum.
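A sketch of the recommended replacements:

import pandas as pd

s = pd.Series([1, 3, 2], index=['a', 'b', 'c'])
s.idxmax()          # 'b' -- the label of the maximum
s.values.argmax()   # 1  -- the position of the maximum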
1.8.7.1 Conversion
• Bug in assignment against datetime-like data with int may incorrectly convert to datetime-like (GH14145)
• Bug in assignment against int64 data with np.ndarray with float64 dtype may keep int64 dtype
(GH14001)
• Fixed the return type of IntervalIndex.is_non_overlapping_monotonic to be a Python bool
for consistency with similar attributes/methods. Previously returned a numpy.bool_. (GH17237)
• Bug in IntervalIndex.is_non_overlapping_monotonic when intervals are closed on both sides
and overlap at a point (GH16560)
• Bug in Series.fillna() returns frame when inplace=True and value is dict (GH16156)
• Bug in Timestamp.weekday_name returning a UTC-based weekday name when localized to a timezone
(GH17354)
• Bug in Timestamp.replace when replacing tzinfo around DST changes (GH15683)
• Bug in Timedelta construction and arithmetic that would not propagate the Overflow exception (GH17367)
• Bug in astype() converting to object dtype when passed extension type classes (DatetimeTZDtype,
CategoricalDtype) rather than instances. Now a TypeError is raised when a class is passed (GH17780).
• Bug in to_numeric() in which elements were not always being coerced to numeric when
errors='coerce' (GH17007, GH17125)
• Bug in DataFrame and Series constructors where range objects are converted to int32 dtype on Win-
dows instead of int64 (GH16804)
1.8.7.2 Indexing
• When called with a null slice (e.g. df.iloc[:]), the .iloc and .loc indexers return a shallow copy of the
original object. Previously they returned the original object. (GH13873).
• When called on an unsorted MultiIndex, the loc indexer now will raise UnsortedIndexError only if
proper slicing is used on non-sorted levels (GH16734).
• Fixes regression in 0.20.3 when indexing with a string on a TimedeltaIndex (GH16896).
• Fixed TimedeltaIndex.get_loc() handling of np.timedelta64 inputs (GH16909).
• Fix MultiIndex.sort_index() ordering when ascending argument is a list, but not all levels are
specified, or are in a different order (GH16934).
• Fixes bug where indexing with np.inf caused an OverflowError to be raised (GH16957)
• Bug in reindexing on an empty CategoricalIndex (GH16770)
• Fixes DataFrame.loc for setting with alignment and tz-aware DatetimeIndex (GH16889)
• Avoids IndexError when passing an Index or Series to .iloc with older numpy (GH17193)
• Allow unicode empty strings as placeholders in multilevel columns in Python 2 (GH17099)
• Bug in .iloc when used with inplace addition or assignment and an int indexer on a MultiIndex causing
the wrong indexes to be read from and written to (GH17148)
• Bug in .isin() in which checking membership in empty Series objects raised an error (GH16991)
• Bug in CategoricalIndex reindexing in which specified indices containing duplicates were not being re-
spected (GH17323)
• Bug in intersection of RangeIndex with negative step (GH17296)
• Bug in IntervalIndex where performing a scalar lookup fails for included right endpoints of non-
overlapping monotonic decreasing indexes (GH16417, GH17271)
• Bug in DataFrame.first_valid_index() and DataFrame.last_valid_index() when no
valid entry (GH17400)
• Bug in Series.rename() when called with a callable, which incorrectly altered the name of the Series rather than the name of the Index (GH17407)
• Bug in Series.str.get() raising IndexError instead of inserting NaNs when using a negative index (GH17704)
1.8.7.3 I/O
• Bug in read_hdf() when reading a timezone aware index from fixed format HDFStore (GH17618)
• Bug in read_csv() in which columns were not being thoroughly de-duplicated (GH17060)
• Bug in read_csv() in which specified column names were not being thoroughly de-duplicated (GH17095)
• Bug in read_csv() in which non-integer values for the header argument generated an unhelpful / unrelated error message (GH16338)
• Bug in read_csv() in which memory management issues in exception handling, under certain conditions,
would cause the interpreter to segfault (GH14696, GH16798).
• Bug in read_csv() when called with low_memory=False in which a CSV with at least one column >
2GB in size would incorrectly raise a MemoryError (GH16798).
• Bug in read_csv() when called with a single-element list header would return a DataFrame of all NaN
values (GH7757)
• Bug in DataFrame.to_csv() defaulting to ‘ascii’ encoding in Python 3, instead of ‘utf-8’ (GH17097)
• Bug in read_stata() where value labels could not be read when using an iterator (GH16923)
• Bug in read_stata() where the index was not set (GH16342)
• Bug in read_html() where import check fails when run in multiple threads (GH16928)
• Bug in read_csv() where automatic delimiter detection caused a TypeError to be thrown when a bad line
was encountered rather than the correct error message (GH13374)
• Bug in DataFrame.to_html() with notebook=True where DataFrames with named indices or non-
MultiIndex indices had undesired horizontal or vertical alignment for column or row labels, respectively
(GH16792)
• Bug in DataFrame.to_html() in which there was no validation of the justify parameter (GH17527)
• Bug in HDFStore.select() when reading a contiguous mixed-data table featuring VLArray (GH17021)
• Bug in to_json() where several conditions (including objects with unprintable symbols, objects with deep
recursion, overlong labels) caused segfaults instead of raising the appropriate exception (GH14256)
1.8.7.4 Plotting
• Bug in plotting methods using secondary_y and fontsize not setting secondary axis font size (GH12565)
• Bug when plotting timedelta and datetime dtypes on y-axis (GH16953)
• Line plots no longer assume monotonic x data when calculating xlims, they show the entire lines now even for
unsorted x data. (GH11310, GH11471)
• With matplotlib 2.0.0 and above, calculation of x limits for line plots is left to matplotlib, so that its new default
settings are applied. (GH15495)
• Bug in Series.plot.bar or DataFrame.plot.bar with y not respecting user-passed color
(GH16822)
• Bug causing plotting.parallel_coordinates to reset the random seed when using random colors
(GH17525)
1.8.7.5 Groupby/Resample/Rolling
• Bug in groupby.transform() that would coerce boolean dtypes back to float (GH16875)
• Bug in Series.resample(...).apply() where an empty Series modified the source index and did
not return the name of a Series (GH14313)
• Bug in .rolling(...).apply(...) with a DataFrame with a DatetimeIndex, a window of a
timedelta-convertible and min_periods >= 1 (GH15305)
• Bug in DataFrame.groupby where index and column keys were not recognized correctly when the number
of keys equaled the number of elements on the groupby axis (GH16859)
• Bug in groupby.nunique() with TimeGrouper which cannot handle NaT correctly (GH17575)
• Bug in DataFrame.groupby where a single level selection from a MultiIndex unexpectedly sorts
(GH17537)
• Bug in DataFrame.groupby where spurious warning is raised when Grouper object is used to override
ambiguous column name (GH17383)
• Bug in TimeGrouper differing when passed as a list and as a scalar (GH17530)
1.8.7.6 Sparse
1.8.7.7 Reshaping
• Fixes regression when sorting by multiple columns on a datetime64 dtype Series with NaT values
(GH16836)
• Bug in pivot_table() where the result’s columns did not preserve the categorical dtype of columns when
dropna was False (GH17842)
• Bug in DataFrame.drop_duplicates where dropping with non-unique column names raised a
ValueError (GH17836)
• Bug in unstack() which, when called on a list of levels, would discard the fillna argument (GH13971)
• Bug in the alignment of range objects and other list-likes with DataFrame leading to operations being
performed row-wise instead of column-wise (GH17901)
1.8.7.8 Numeric
• Bug in .clip() when axis=1 and a list-like is passed for threshold; previously this raised ValueError (GH15390)
• Series.clip() and DataFrame.clip() now treat NA values for upper and lower arguments as None
instead of raising ValueError (GH17276).
1.8.7.9 Categorical
1.8.7.10 PyPy
1.8.7.11 Other
• Bug where some inplace operators were not being wrapped and produced a copy when invoked (GH12962)
• Bug in eval() where the inplace parameter was being incorrectly handled (GH16732)
This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes and bug fixes. We
recommend that all users upgrade to this version.
• Bug Fixes
– Conversion
– Indexing
– I/O
– Plotting
– Reshaping
– Categorical
1.9.1.1 Conversion
• Bug in pickle compat prior to the v0.20.x series, when UTC is a timezone in a Series/DataFrame/Index
(GH16608)
• Bug in Series construction when passing a Series with dtype='category' (GH16524).
• Bug in DataFrame.astype() when passing a Series as the dtype kwarg. (GH16717).
1.9.1.2 Indexing
• Bug in Float64Index causing an empty array instead of None to be returned from .get(np.nan) on a
Series whose index did not contain any NaN s (GH8569)
• Bug in MultiIndex.isin causing an error when passing an empty iterable (GH16777)
• Fixed a bug in slicing a DataFrame/Series that has a TimedeltaIndex (GH16637)
1.9.1.3 I/O
• Bug in read_csv() in which files weren’t opened as binary files by the C engine on Windows, causing EOF
characters mid-field, which would fail (GH16039, GH16559, GH16675)
• Bug in read_hdf() in which reading a Series saved to an HDF file in ‘fixed’ format fails when an explicit
mode='r' argument is supplied (GH16583)
1.9.1.4 Plotting
• Fixed regression that prevented RGB and RGBA tuples from being used as color arguments (GH16233)
• Fixed an issue with DataFrame.plot.scatter() that incorrectly raised a KeyError when categorical
data is used for plotting (GH16199)
1.9.1.5 Reshaping
1.9.1.6 Categorical
• Bug in DataFrame.sort_values not respecting the kind parameter with categorical data (GH16793)
This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes, bug fixes and performance
improvements. We recommend that all users upgrade to this version.
• Enhancements
• Performance Improvements
• Bug Fixes
– Conversion
– Indexing
– I/O
– Plotting
– Groupby/Resample/Rolling
– Sparse
– Reshaping
– Numeric
– Categorical
– Other
1.10.1 Enhancements
• Silenced a warning on some Windows environments about “tput: terminal attributes: No such device or address”
when detecting the terminal size. This fix only applies to python 3 (GH16496)
• Bug in using pathlib.Path or py.path.local objects with io functions (GH16291)
• Bug in Index.symmetric_difference() on two equal MultiIndex’s, results in a TypeError
(GH13490)
• Bug in DataFrame.update() with overwrite=False and NaN values (GH15593)
• Passing an invalid engine to read_csv() now raises an informative ValueError rather than
UnboundLocalError. (GH16511)
• Bug in unique() on an array of tuples (GH16519)
• Bug in cut() when labels are set, resulting in incorrect label ordering (GH16459)
• Fixed a compatibility issue with IPython 6.0’s tab completion showing deprecation warnings on
Categoricals (GH16409)
1.10.3.1 Conversion
• Bug in to_numeric() in which empty data inputs were causing a segfault of the interpreter (GH16302)
• Silence numpy warnings when broadcasting DataFrame to Series with comparison ops (GH16378,
GH16306)
1.10.3.2 Indexing
1.10.3.3 I/O
• Bug in read_csv() when comment is passed in a space delimited text file (GH16472)
• Bug in read_csv() not raising an exception with nonexistent columns in usecols when it had the correct
length (GH14671)
• Bug that would force importing of the clipboard routines unnecessarily, potentially causing an import error on
startup (GH16288)
• Bug that raised IndexError when HTML-rendering an empty DataFrame (GH15953)
• Bug in read_csv() in which tarfile object inputs were raising an error in Python 2.x for the C engine
(GH16530)
• Bug where DataFrame.to_html() ignored the index_names parameter (GH16493)
• Bug where pd.read_hdf() returns numpy strings for index names (GH13492)
• Bug in HDFStore.select_as_multiple() where start/stop arguments were not respected (GH16209)
1.10.3.4 Plotting
1.10.3.5 Groupby/Resample/Rolling
1.10.3.6 Sparse
1.10.3.7 Reshaping
1.10.3.8 Numeric
• Bug in .interpolate(), where limit_direction was not respected when limit=None (default)
was passed (GH16282)
1.10.3.9 Categorical
• Fixed comparison operations considering the order of the categories when both categoricals are unordered
(GH16014)
1.10.3.10 Other
This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
• New .agg() API for Series/DataFrame similar to the groupby-rolling-resample API’s, see here
• Integration with the feather-format, including a new top-level pd.read_feather() and
DataFrame.to_feather() method, see here.
• The .ix indexer has been deprecated, see here
• Panel has been deprecated, see here
• Addition of an IntervalIndex and Interval scalar type, see here
• Improved user API when grouping by index levels in .groupby(), see here
• Improved support for UInt64 dtypes, see here
• A new orient for JSON serialization, orient='table', that uses the Table Schema spec and that gives the
possibility for a more interactive repr in the Jupyter Notebook, see here
• Experimental support for exporting styled DataFrames (DataFrame.style) to Excel, see here
• Window binary corr/cov operations now return a MultiIndexed DataFrame rather than a Panel, as Panel is
now deprecated, see here
• Support for S3 handling now uses s3fs, see here
• Google BigQuery support now uses the pandas-gbq library, see here
Warning: Pandas has changed the internal structure and layout of the codebase. This can affect imports that are
not from the top-level pandas.* namespace, please see the changes here.
Note: This is a combined release for 0.20.0 and 0.20.1. Version 0.20.1 contains one additional change for
backwards-compatibility with downstream projects using pandas’ utils routines. (GH16250)
• New features
– agg API for DataFrame/Series
– dtype keyword for data IO
– .to_datetime() has gained an origin parameter
– Groupby Enhancements
– Better support for compressed URLs in read_csv
– Pickle file I/O now supports compression
– UInt64 Support Improved
– GroupBy on Categoricals
– Table Schema Output
– SciPy sparse matrix from/to SparseDataFrame
– Excel output for styled DataFrames
– IntervalIndex
– Other Enhancements
• Backwards incompatible API changes
– Possible incompatibility for HDF5 formats created with pandas < 0.13.0
– Map on Index types now return other Index types
– Accessing datetime fields of Index now return Index
– pd.unique will now be consistent with extension types
– S3 File Handling
– Partial String Indexing Changes
– Concat of different float dtypes will not automatically upcast
– Pandas Google BigQuery support has moved
– Memory Usage for Index is more Accurate
– DataFrame.sort_index changes
Series & DataFrame have been enhanced to support the aggregation API. This is a familiar API from groupby, window operations, and resampling. This allows aggregation operations in a concise way by using agg() and transform().
In [3]: df
Out[3]:
A B C
2000-01-01 1.682600 0.413582 1.689516
2000-01-02 -2.099110 -1.180182 1.595661
2000-01-03 -0.419048 0.522165 -1.208946
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.955435 -0.133009 2.011466
2000-01-09 0.578780 0.897126 -0.980013
2000-01-10 -0.045748 0.361601 -0.208039
One can operate using string function names, callables, lists, or dictionaries of these.
Using a single function is equivalent to .apply.
In [4]: df.agg('sum')
Out[4]:
A 0.652908
B 0.881282
C 2.899645
dtype: float64
Using a dict provides the ability to apply specific aggregations per column. You will get a matrix-like output of all of the aggregators. The output has one column per unique function. Cells for functions that are not applied to a particular column will be NaN:
When presented with mixed dtypes that cannot be aggregated, .agg() will only take the valid aggregations. This is
similar to how groupby .agg() works. (GH15015)
In [9]: df.dtypes
Out[9]:
A int64
B float64
C object
D datetime64[ns]
dtype: object
The 'python' engine for read_csv(), as well as the read_fwf() function for parsing fixed-width text files
and read_excel() for parsing Excel files, now accept the dtype keyword argument for specifying the types of
specific columns (GH14295). See the io docs for more information.
In [12]: pd.read_fwf(StringIO(data)).dtypes
Out[12]:
a int64
b int64
dtype: object
to_datetime() has gained a new parameter, origin, to define a reference date from where to compute the
resulting timestamps when parsing numerical values with a specific unit specified. (GH11276, GH11745)
For example, with 1960-01-01 as the starting date:
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00, which is commonly called
‘unix epoch’ or POSIX time. This was the previous default, so this is a backward compatible change.
Strings passed to DataFrame.groupby() as the by parameter may now reference either column names or index
level names. Previously, only column names could be referenced. This makes it easy to group by a column and an index level at the same time. (GH5677)
In [16]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
In [19]: df
Out[19]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
B
second A
one 1 2
2 4
3 6
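The grouped frame above is the result of a call along these lines (a sketch; 'second' is an index level name and 'A' is a column):

df.groupby(['second', 'A']).sum()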
The compression code was refactored (GH12688). As a result, reading dataframes from URLs in read_csv() or
read_table() now supports additional compression methods: xz, bz2, and zip (GH14570). Previously, only
gzip compression was supported. By default, compression of URLs and paths is now inferred from their file extensions. Additionally, support for bz2 compression in the Python 2 C-engine was improved (GH14874).
In [21]: url = 'https://github.com/{repo}/raw/{branch}/{path}'.format(
....: repo = 'pandas-dev/pandas',
....: branch = 'master',
....: path = 'pandas/tests/io/parser/data/salaries.csv.bz2',
....: )
....:
In [24]: df.head(2)
Out[24]:
S X E M
0 13876 1 1 1
1 11608 1 3 0
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can now read from and write to
compressed pickle files. Compression methods can be an explicit parameter or be inferred from the file extension. See
the docs here.
In [25]: df = pd.DataFrame({
....: 'A': np.random.randn(1000),
....: 'B': 'foo',
....: 'C': pd.date_range('20130101', periods=1000, freq='s')})
....:
In [28]: rt.head()
Out[28]:
A B C
0 1.578227 foo 2013-01-01 00:00:00
1 -0.230575 foo 2013-01-01 00:00:01
2 0.695530 foo 2013-01-01 00:00:02
3 -0.466001 foo 2013-01-01 00:00:03
4 -0.154972 foo 2013-01-01 00:00:04
The default is to infer the compression type from the extension (compression='infer'):
In [29]: df.to_pickle("data.pkl.gz")
In [30]: rt = pd.read_pickle("data.pkl.gz")
In [31]: rt.head()
Out[31]:
A B C
0 1.578227 foo 2013-01-01 00:00:00
1 -0.230575 foo 2013-01-01 00:00:01
2 0.695530 foo 2013-01-01 00:00:02
3 -0.466001 foo 2013-01-01 00:00:03
4 -0.154972 foo 2013-01-01 00:00:04
In [32]: df["A"].to_pickle("s1.pkl.bz2")
In [33]: rt = pd.read_pickle("s1.pkl.bz2")
In [34]: rt.head()
Out[34]:
0 1.578227
1 -0.230575
2 0.695530
3 -0.466001
4 -0.154972
Name: A, dtype: float64
Pandas has significantly improved support for operations involving unsigned, or purely non-negative, integers. Pre-
viously, handling these integers would result in improper rounding or data-type casting, leading to incorrect results.
Notably, a new numerical index, UInt64Index, has been created (GH14937)
In [37]: df.index
Out[37]: UInt64Index([1, 2, 3], dtype='uint64')
• Bug in converting object elements of array-like objects to unsigned 64-bit integers (GH4471, GH14982)
• Bug in Series.unique() in which unsigned 64-bit integers were causing overflow (GH14721)
• Bug in DataFrame construction in which unsigned 64-bit integer elements were being converted to objects
(GH14881)
• Bug in pd.read_csv() in which unsigned 64-bit integer elements were being improperly converted to the
wrong data types (GH14983)
• Bug in pd.unique() in which unsigned 64-bit integers were causing overflow (GH14915)
• Bug in pd.value_counts() in which unsigned 64-bit integers were being erroneously truncated in the
output (GH14934)
In previous versions, .groupby(..., sort=False) would fail with a ValueError when grouping on a cat-
egorical series with some categories not appearing in the data. (GH13179)
In [38]: chromosomes = np.r_[np.arange(1, 23).astype(str), ['X', 'Y']]
In [39]: df = pd.DataFrame({
....: 'A': np.random.randint(100),
....: 'B': np.random.randint(100),
....: 'C': np.random.randint(100),
....: 'chromosomes': pd.Categorical(np.random.choice(chromosomes, 100),
....: categories=chromosomes,
....: ordered=True)})
....:
In [40]: df
Out[40]:
A B C chromosomes
0 80 36 94 12
1 80 36 94 X
2 80 36 94 19
3 80 36 94 22
4 80 36 94 17
5 80 36 94 6
6 80 36 94 13
.. .. .. .. ...
93 80 36 94 21
94 80 36 94 20
95 80 36 94 11
96 80 36 94 16
97 80 36 94 21
98 80 36 94 18
99 80 36 94 8
Previous Behavior:
In [3]: df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
---------------------------------------------------------------------------
ValueError: items in new_categories are not the same as in old categories
New Behavior:
In [41]: df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
Out[41]:
A B C
chromosomes
2 320 144 376
3 400 180 470
4 240 108 282
5 240 108 282
6 400 180 470
7 400 180 470
8 480 216 564
... ... ... ...
19 400 180 470
The new orient 'table' for DataFrame.to_json() will generate a Table Schema compatible string represen-
tation of the data.
In [42]: df = pd.DataFrame(
....: {'A': [1, 2, 3],
....: 'B': ['a', 'b', 'c'],
....: 'C': pd.date_range('2016-01-01', freq='d', periods=3),
....: }, index=pd.Index(range(3), name='idx'))
....:
In [43]: df
Out[43]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [44]: df.to_json(orient='table')
Out[44]: '{"schema": {"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"], ...
...01T00:00:00.000Z"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000Z"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000Z"}]}'
Pandas now supports creating sparse dataframes directly from scipy.sparse.spmatrix instances. See the doc-
umentation for more information. (GH4343)
All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying data as
needed.
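A sketch of the construction (assuming scipy is installed; the shape and density are illustrative):

import pandas as pd
from scipy import sparse

sp_arr = sparse.random(1000, 5, density=0.1, format='csr', random_state=0)
sdf = pd.SparseDataFrame(sp_arr)   # non-COO input is converted, copying data as needed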
In [49]: sp_arr
Out[49]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 521 stored elements in Compressed Sparse Row format>
In [51]: sdf
Out[51]:
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN 0.955103 NaN
2 NaN NaN NaN 0.900469 NaN
3 NaN NaN NaN NaN NaN
4 NaN 0.924771 NaN NaN NaN
5 NaN NaN NaN NaN NaN
6 NaN NaN NaN NaN NaN
.. .. ... ... ... ...
993 NaN NaN NaN NaN NaN
994 NaN NaN NaN NaN 0.972191
995 NaN 0.979898 0.97901 NaN NaN
996 NaN NaN NaN NaN NaN
997 NaN NaN NaN NaN NaN
998 NaN NaN NaN NaN NaN
999 NaN NaN NaN NaN NaN
To convert a SparseDataFrame back to a sparse SciPy matrix in COO format, you can use:
In [52]: sdf.to_coo()
Out[52]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 521 stored elements in COOrdinate format>
Experimental support has been added to export DataFrame.style formats to Excel using the openpyxl engine.
(GH15530)
For example, after running the following, styled.xlsx renders as below:
In [53]: np.random.seed(24)
In [57]: df
Out[57]:
A B C D E
0 1.0 1.329212 NaN -0.316280 -0.990810
1 2.0 -1.070816 -1.438713 0.564417 0.295722
2 3.0 -1.626404 0.219565 0.678805 1.889273
3 4.0 0.961538 0.104011 -0.481165 0.850229
4 5.0 1.453425 1.057737 0.165562 0.515018
5 6.0 -1.336936 0.562861 1.392855 -0.063328
6 7.0 0.121668 1.207603 -0.002040 1.627796
7 8.0 0.354493 1.037528 -0.385684 0.519818
8 9.0 1.686583 -1.325963 1.428984 -2.089354
9 10.0 -0.129820 0.631523 -0.586538 0.290720
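The export itself is a one-liner on the Styler (a sketch, assuming openpyxl is installed; the styling functions are illustrative):

styled = (df.style
            .applymap(lambda v: 'color: red' if v < 0 else 'color: black')
            .highlight_max())
styled.to_excel('styled.xlsx', engine='openpyxl')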
1.11.1.12 IntervalIndex
pandas has gained an IntervalIndex with its own dtype, interval, as well as the Interval scalar type. These allow first-class support for interval notation, specifically as a return type for the categories in cut() and qcut().
The IntervalIndex allows some unique indexing, see the docs. (GH7640, GH8625)
Warning: These indexing behaviors of the IntervalIndex are provisional and may change in a future version of
pandas. Feedback on usage is welcome.
Previous behavior:
The returned categories were strings, representing Intervals
In [2]: c
Out[2]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3], (1.5, 3]]
Categories (2, object): [(-0.003, 1.5] < (1.5, 3]]
In [3]: c.categories
Out[3]: Index(['(-0.003, 1.5]', '(1.5, 3]'], dtype='object')
New behavior:
In [61]: c
Out[61]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]
In [62]: c.categories
Out[62]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], closed='right', dtype='interval[float64]')
Furthermore, this allows one to bin other data with these same bins, with NaN representing a missing value similar to
other dtypes.
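A sketch of binning other data with the same bins (the input values are assumptions chosen to be consistent with the frame shown below):
pd.cut([0, 3, 5, 1], bins=c.categories)   # 5 falls outside the bins and becomes NaN
df = pd.DataFrame({'A': range(4),
                   'B': pd.cut([0, 3, 1, 1], bins=c.categories)}).set_index('B')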
In [65]: df
Out[65]:
A
B
(-0.003, 1.5] 0
(1.5, 3.0] 1
(-0.003, 1.5] 2
(-0.003, 1.5] 3
• DataFrame.plot now prints a title above each subplot if subplots=True and title is a list of strings
(GH14753)
• DataFrame.plot can pass the matplotlib 2.0 default color cycle as a single string as color parameter, see
here. (GH15516)
• Series.interpolate() now supports timedelta as an index type with method='time' (GH6424)
• Addition of a level keyword to DataFrame/Series.rename to rename labels in the specified level of a
MultiIndex (GH4160).
• DataFrame.reset_index() will now interpret a tuple index.name as a key spanning across levels of
columns, if this is a MultiIndex (GH16164)
• Timedelta.isoformat method added for formatting Timedeltas as an ISO 8601 duration. See the
Timedelta docs (GH15136)
• .select_dtypes() now allows the string datetimetz to generically select datetimes with tz (GH14910)
• The .to_latex() method will now accept multicolumn and multirow arguments to use the accompa-
nying LaTeX enhancements
• pd.merge_asof() gained the option direction='backward'|'forward'|'nearest'
(GH14887)
• Series/DataFrame.asfreq() have gained a fill_value parameter, to fill missing values (GH3715).
• Series/DataFrame.resample.asfreq have gained a fill_value parameter, to fill missing values
during resampling (GH3715).
• pandas.util.hash_pandas_object() has gained the ability to hash a MultiIndex (GH15224)
• Series/DataFrame.squeeze() have gained the axis parameter. (GH15339)
• DataFrame.to_excel() has a new freeze_panes parameter to turn on Freeze Panes when exporting
to Excel (GH15160)
• pd.read_html() will parse multiple header rows, creating a MultiIndex header. (GH13434).
• HTML table output skips colspan or rowspan attribute if equal to 1. (GH15403)
• pandas.io.formats.style.Styler template now has blocks for easier extension, see the example
notebook (GH15649)
• Styler.render() now accepts **kwargs to allow user-defined variables in the template (GH15649)
• Compatibility with Jupyter notebook 5.0; MultiIndex column labels are left-aligned and MultiIndex row-labels
are top-aligned (GH15379)
• TimedeltaIndex now has a custom date-tick formatter specifically designed for nanosecond level precision
(GH8711)
• pd.api.types.union_categoricals gained the ignore_ordered argument to allow ignoring the
ordered attribute of unioned categoricals (GH13410). See the categorical union docs for more information.
• DataFrame.to_latex() and DataFrame.to_string() now allow optional header aliases.
(GH15536)
• Re-enable the parse_dates keyword of pd.read_excel() to parse string columns as dates (GH14326)
• Added .empty property to subclasses of Index. (GH15270)
• Enabled floor division for Timedelta and TimedeltaIndex (GH15828)
• pandas.io.json.json_normalize() gained the option errors='ignore'|'raise'; the default
is errors='raise' which is backward compatible. (GH14583)
1.11.2.1 Possible incompatibility for HDF5 formats created with pandas < 0.13.0
pd.TimeSeries was deprecated officially in 0.17.0, though has already been an alias since 0.13.0. It has been
dropped in favor of pd.Series. (GH15098).
This may cause HDF5 files that were created in prior versions to become unreadable if pd.TimeSeries was used.
This is most likely to be the case for pandas < 0.13.0. If you find yourself in this situation, you can use a recent prior
version of pandas to read in your HDF5 files, then write them out again after applying the procedure below.
In [3]: s
Out[3]:
2013-01-01 1
2013-01-02 2
2013-01-03 3
Freq: D, dtype: int64
In [4]: type(s)
Out[4]: pandas.core.series.TimeSeries
In [5]: s = pd.Series(s)
In [6]: s
Out[6]:
2013-01-01 1
2013-01-02 2
2013-01-03 3
Freq: D, dtype: int64
In [7]: type(s)
Out[7]: pandas.core.series.Series
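The next examples illustrate that map on Index types now returns other Index types rather than plain numpy arrays. The constructions of idx and mi are not shown above; plausible equivalents, assumptions consistent with the reprs below, are:
idx = pd.Index([1, 2])
mi = pd.MultiIndex.from_tuples([(1, 2), (2, 4)])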
In [69]: idx
Out[69]: Int64Index([1, 2], dtype='int64')
In [71]: mi
Out[71]:
MultiIndex(levels=[[1, 2], [2, 4]],
labels=[[0, 1], [0, 1]])
Previous Behavior:
In [5]: idx.map(lambda x: x * 2)
Out[5]: array([2, 4])
In [7]: mi.map(lambda x: x)
Out[7]: array([(1, 2), (2, 4)], dtype=object)
New Behavior:
In [72]: idx.map(lambda x: x * 2)
Out[72]: Int64Index([2, 4], dtype='int64')
In [74]: mi.map(lambda x: x)
Out[74]:
MultiIndex(levels=[[1, 2], [2, 4]],
labels=[[0, 1], [0, 1]])
map on a Series with datetime64 values may return int64 dtypes rather than int32
In [77]: s
Out[77]:
0 2011-01-02 00:00:00+09:00
1 2011-01-02 01:00:00+09:00
2 2011-01-02 02:00:00+09:00
dtype: datetime64[ns, Asia/Tokyo]
Previous Behavior:
In [9]: s.map(lambda x: x.hour)
Out[9]:
0 0
1 1
2 2
dtype: int32
New Behavior:
In [78]: s.map(lambda x: x.hour)
Out[78]:
0 0
1 1
2 2
dtype: int64
The datetime-related attributes (see here for an overview) of DatetimeIndex, PeriodIndex and
TimedeltaIndex previously returned numpy arrays. They will now return a new Index object, except in the
case of a boolean field, where the result will still be a boolean ndarray. (GH15022)
Previous behaviour:
In [1]: idx = pd.date_range("2015-01-01", periods=5, freq='10H')
In [2]: idx.hour
Out[2]: array([ 0, 10, 20, 6, 16], dtype=int32)
New Behavior:
In [79]: idx = pd.date_range("2015-01-01", periods=5, freq='10H')
In [80]: idx.hour
Out[80]: Int64Index([0, 10, 20, 6, 16], dtype='int64')
This has the advantage that specific Index methods are still available on the result. On the other hand, this might
have backward incompatibilities: e.g. compared to numpy arrays, Index objects are not mutable. To get the original
ndarray, you can always convert explicitly using np.asarray(idx.hour).
In prior versions, using Series.unique() and pandas.unique() on Categorical and tz-aware data-types
would yield different return types. These are now made consistent. (GH15903)
• Datetime tz-aware
Previous behaviour:
# Series
In [5]: pd.Series([pd.Timestamp('20160101', tz='US/Eastern'),
pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[5]: array([Timestamp('2016-01-01 00:00:00-0500', tz='US/Eastern')],
dtype=object)
# Index
In [7]: pd.Index([pd.Timestamp('20160101', tz='US/Eastern'),
pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[7]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/
Eastern]', freq=None)
New Behavior:
• Categoricals
Previous behaviour:
New Behavior:
# returns a Categorical
In [85]: pd.Series(list('baabc'), dtype='category').unique()
Out[85]:
[b, a, c]
Categories (3, object): [b, a, c]
pandas now uses s3fs for handling S3 connections. This shouldn’t break any code. However, since s3fs is not a
required dependency, you will need to install it separately, like boto in prior versions of pandas. (GH11915).
DatetimeIndex Partial String Indexing now works as an exact match, provided that string resolution coincides with
index resolution, including a case when both are seconds (GH14826). See Slice vs. Exact Match for details.
Previous Behavior:
New Behavior:
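The original before/after examples are not shown here; a hedged sketch of the behavior described above (the series values and dates are assumptions):
ser = pd.Series(range(3),
                pd.date_range('2011-12-31 23:59:59', periods=3, freq='s'))
ser['2011-12-31 23:59:59']   # string at the index's (second) resolution: now an exact match, returns a scalar
ser['2011-12-31 23:59']      # coarser string: still treated as a slice, returns a Series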
Previously, concat of multiple objects with different float dtypes would automatically upcast results to a dtype of
float64. Now the smallest acceptable dtype will be used (GH13247)
In [88]: df1 = pd.DataFrame(np.array([1.0], dtype=np.float32, ndmin=2))
In [89]: df1.dtypes
Out[89]:
0 float32
dtype: object
In [91]: df2.dtypes
Out[91]:
0 float32
dtype: object
Previous Behavior:
In [7]: pd.concat([df1, df2]).dtypes
Out[7]:
0 float64
dtype: object
New Behavior:
In [92]: pd.concat([df1, df2]).dtypes
Out[92]:
0 float32
dtype: object
pandas has split off Google BigQuery support into a separate package pandas-gbq. You can conda
install pandas-gbq -c conda-forge or pip install pandas-gbq to get it. The functional-
ity of read_gbq() and DataFrame.to_gbq() remain the same with the currently released version of
pandas-gbq=0.1.4. Documentation is now hosted here (GH15347)
In previous versions, calling .memory_usage() on a pandas structure that has an index would only include
the actual index values, not the structures that facilitate fast indexing. This will generally differ for Index
and MultiIndex, and less so for other index types. (GH15237)
Previous Behavior:
In [8]: index = Index(['foo', 'bar', 'baz'])
In [9]: index.memory_usage(deep=True)
Out[9]: 180
In [10]: index.get_loc('foo')
Out[10]: 0
In [11]: index.memory_usage(deep=True)
Out[11]: 180
New Behavior:
In [9]: index.memory_usage(deep=True)
Out[9]: 180
In [10]: index.get_loc('foo')
Out[10]: 0
In [11]: index.memory_usage(deep=True)
Out[11]: 260
In certain cases, calling .sort_index() on a MultiIndexed DataFrame would return the same DataFrame without
appearing to sort. This would happen with lexsorted, but non-monotonic, levels. (GH15622, GH15687, GH14015,
GH13431, GH15797)
This is unchanged from prior versions, but shown for illustration purposes:
In [94]: df
Out[94]:
value
B 0 0
1 1
2 2
A 0 3
1 4
2 5
In [95]: df.index.is_lexsorted()
Out[95]: False
In [96]: df.index.is_monotonic
Out[96]: False
In [97]: df.sort_index()
Out[97]:
value
A 0 3
1 4
2 5
B 0 0
1 1
2 2
In [98]: df.sort_index().index.is_lexsorted()
Out[98]: True
In [99]: df.sort_index().index.is_monotonic
Out[99]: True
However, this example, which has a non-monotonic 2nd level, doesn’t behave as desired.
In [100]: df = pd.DataFrame(
.....: {'value': [1, 2, 3, 4]},
.....: index=pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
.....: labels=[[0, 0, 1, 1], [0, 1, 0, 1]]))
.....:
In [101]: df
Out[101]:
value
a bb 1
aa 2
b bb 3
aa 4
Previous Behavior:
In [11]: df.sort_index()
Out[11]:
value
a bb 1
aa 2
b bb 3
aa 4
In [14]: df.sort_index().index.is_lexsorted()
Out[14]: True
In [15]: df.sort_index().index.is_monotonic
Out[15]: False
New Behavior:
In [102]: df.sort_index()
Out[102]:
value
a aa 2
bb 1
b aa 4
bb 3
In [103]: df.sort_index().index.is_lexsorted()
Out[103]: True
In [104]: df.sort_index().index.is_monotonic
Out[104]: True
The output formatting of groupby.describe() now labels the describe() metrics in the columns instead of
the index. This format is consistent with groupby.agg() when applying multiple functions at once. (GH4792)
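The grouped frame used in the examples below is not shown; a construction consistent with the outputs (an assumption) is:
df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4]})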
Previous Behavior:
In [2]: df.groupby('A').describe()
Out[2]:
B
A
1 count 2.000000
mean 1.500000
std 0.707107
min 1.000000
25% 1.250000
50% 1.500000
75% 1.750000
max 2.000000
2 count 2.000000
mean 3.500000
std 0.707107
min 3.000000
25% 3.250000
50% 3.500000
75% 3.750000
max 4.000000
New Behavior:
In [106]: df.groupby('A').describe()
Out[106]:
B
count mean std min 25% 50% 75% max
A
1 2.0 1.5 0.707107 1.0 1.25 1.5 1.75 2.0
2 2.0 3.5 0.707107 3.0 3.25 3.5 3.75 4.0
In [107]: df.groupby('A').agg([np.mean, np.std, np.min, np.max])
Out[107]:
B
mean std amin amax
A
1 1.5 0.707107 1 2
2 3.5 0.707107 3 4
A binary window operation, like .corr() or .cov(), when operating on a .rolling(..), .expanding(..),
or .ewm(..) object, will now return a 2-level MultiIndexed DataFrame rather than a Panel, as Panel
is now deprecated, see here. These are equivalent in function, but a MultiIndexed DataFrame enjoys more support
in pandas. See the section on Windowed Binary Operations for more information. (GH15677)
In [108]: np.random.seed(1234)
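The construction of df and res below is not shown; a sketch consistent with the outputs displayed (an assumption) is:
df = pd.DataFrame(np.random.rand(100, 2),
                  columns=pd.Index(['A', 'B'], name='bar'),
                  index=pd.date_range('20160101', periods=100, freq='D', name='foo'))
res = df.rolling(12).corr()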
In [110]: df.tail()
Out[110]:
bar A B
foo
2016-04-05 0.640880 0.126205
2016-04-06 0.171465 0.737086
2016-04-07 0.127029 0.369650
2016-04-08 0.604334 0.103104
2016-04-09 0.802374 0.945553
Previous Behavior:
In [2]: df.rolling(12).corr()
Out[2]:
<class 'pandas.core.panel.Panel'>
Dimensions: 100 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: 2016-01-01 00:00:00 to 2016-04-09 00:00:00
Major_axis axis: A to B
Minor_axis axis: A to B
New Behavior:
In [112]: res.tail()
Out[112]:
bar A B
foo bar
2016-04-07 B -0.132090 1.000000
2016-04-08 A 1.000000 -0.145775
B -0.145775 1.000000
2016-04-09 A 1.000000 0.119645
B 0.119645 1.000000
In [113]: df.rolling(12).corr().loc['2016-04-07']
Out[113]:
bar A B
foo bar
2016-04-07 A 1.00000 -0.13209
B -0.13209 1.00000
In previous versions, most types could be compared to a string column in an HDFStore, usually resulting in an invalid
comparison that returned an empty result frame. These comparisons will now raise a TypeError (GH15492)
In [116]: df.dtypes
Out[116]:
unparsed_date object
dtype: object
Previous Behavior:
New Behavior:
In [18]: ts = pd.Timestamp('2014-01-01')
1.11.2.14 Index.intersection and inner join now preserve the order of the left Index
Index.intersection() now preserves the order of the calling Index (left) instead of the other Index (right)
(GH15582). This affects inner joins, DataFrame.join() and merge(), and the .align method.
• Index.intersection
In [118]: left
Out[118]: Int64Index([2, 1, 0], dtype='int64')
In [120]: right
Out[120]: Int64Index([1, 2, 3], dtype='int64')
Previous Behavior:
In [4]: left.intersection(right)
Out[4]: Int64Index([1, 2], dtype='int64')
New Behavior:
In [121]: left.intersection(right)
Out[121]: Int64Index([2, 1], dtype='int64')
In [123]: left
Out[123]:
a
2 20
1 10
0 0
In [125]: right
Out[125]:
b
1 100
2 200
3 300
Previous Behavior:
New Behavior:
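The before/after frames for the join are not shown; based on the left and right frames above, the new behavior preserves the order of the left index (a derived illustration, not the original output):
left.join(right, how='inner')
#     a    b
# 2  20  200
# 1  10  100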
The documentation for pivot_table() states that a DataFrame is always returned. Here a bug is fixed that
allowed this to return a Series under certain circumstances. (GH4386)
In [128]: df
Out[128]:
col1 col2 col3
0 3 C 1
1 4 D 3
2 5 E 9
Previous Behavior:
New Behavior:
In [129]: df.pivot_table('col1', index=['col3', 'col2'], aggfunc=np.sum)
Out[129]:
col1
col3 col2
1 C 3
3 D 4
9 E 5
• numexpr version is now required to be >= 2.4.6 and it will not be used at all if this requisite is not fulfilled
(GH15213).
• CParserError has been renamed to ParserError in pd.read_csv() and will be removed in the future
(GH12665)
• SparseArray.cumsum() and SparseSeries.cumsum() will now always return SparseArray and
SparseSeries respectively (GH12855)
• DataFrame.applymap() with an empty DataFrame will return a copy of the empty DataFrame instead
of a Series (GH8222)
• Series.map() now respects default values of dictionary subclasses with a __missing__ method, such as
collections.Counter (GH15999)
• .loc has compat with .ix for accepting iterators, and NamedTuples (GH15120)
• interpolate() and fillna() will raise a ValueError if the limit keyword argument is not greater
than 0. (GH9217)
• pd.read_csv() will now issue a ParserWarning whenever there are conflicting values provided by the
dialect parameter and the user (GH14898)
• pd.read_csv() will now raise a ValueError for the C engine if the quote character is larger than
one byte (GH11592)
• inplace arguments now require a boolean value, else a ValueError is thrown (GH14189)
• pandas.api.types.is_datetime64_ns_dtype will now report True on a tz-aware dtype, similar
to pandas.api.types.is_datetime64_any_dtype
• DataFrame.asof() will return a null filled Series instead of the scalar NaN if a match is not found
(GH15118)
• Specific support for copy.copy() and copy.deepcopy() functions on NDFrame objects (GH15444)
• Series.sort_values() accepts a one element list of bool for consistency with the behavior of
DataFrame.sort_values() (GH15604)
• .merge() and .join() on category dtype columns will now preserve the category dtype when possible
(GH10409)
Some formerly public python/c/c++/cython extension modules have been moved and/or renamed. These are all re-
moved from the public API. Furthermore, the pandas.core, pandas.compat, and pandas.util top-level
modules are now considered to be PRIVATE. If indicated, a deprecation warning will be issued if you reference these
modules. (GH12588)
Some new subpackages are created with public functionality that is not directly exposed in the top-level namespace:
pandas.errors, pandas.plotting and pandas.testing (more details below). Together with pandas.
api.types and certain functions in the pandas.io and pandas.tseries submodules, these are now the public
subpackages.
Further changes:
• The function union_categoricals() is now importable from pandas.api.types, formerly from
pandas.types.concat (GH15998)
• The type import pandas.tslib.NaTType is deprecated and can be replaced by using type(pandas.
NaT) (GH16146)
• The public functions in pandas.tools.hashing are deprecated from that location, but are now importable
from pandas.util (GH16223)
• The modules in pandas.util: decorators, print_versions, doctools, validators,
depr_module are now private. Only the functions exposed in pandas.util itself are public (GH16223)
1.11.3.2 pandas.errors
We are adding a standard public module for all pandas exceptions & warnings pandas.errors. (GH14800). Pre-
viously these exceptions & warnings could be imported from pandas.core.common or pandas.io.common.
These exceptions and warnings will be removed from the *.common locations in a future release. (GH15541)
The following are now part of this API:
['DtypeWarning',
'EmptyDataError',
'OutOfBoundsDatetime',
'ParserError',
1.11.3.3 pandas.testing
We are adding a standard module that exposes the public testing functions in pandas.testing (GH9895). Those
functions can be used when writing tests for functionality using pandas objects.
The following testing functions are now part of this API:
• testing.assert_frame_equal()
• testing.assert_series_equal()
• testing.assert_index_equal()
1.11.3.4 pandas.plotting
A new public pandas.plotting module has been added that holds plotting functionality that was previously in
either pandas.tools.plotting or in the top-level namespace. See the deprecations sections for more details.
• Building pandas for development now requires cython >= 0.23 (GH14831)
• Require at least 0.23 version of cython to avoid problems with character encodings (GH14699)
• Switched the test framework to use pytest (GH13097)
• Reorganization of tests directory layout (GH14854, GH15707).
1.11.4 Deprecations
The .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers. .ix offers a lot of magic on the
inference of what the user wants to do. To wit, .ix can decide to index positionally OR via labels, depending on the
data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation
is here. (GH14218)
The recommended methods of indexing are:
• .loc if you want to label index
• .iloc if you want to positionally index.
Using .ix will now show a DeprecationWarning with a link to some examples of how to convert code here.
In [130]: df = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [4, 5, 6]},
.....: index=list('abc'))
.....:
Previous Behavior, where you wish to get the 0th and the 2nd elements from the index in the ‘A’ column.
In [3]: df.ix[[0, 2], 'A']
Out[3]:
a 1
c 3
Name: A, dtype: int64
Using .loc. Here we will select the appropriate indexes from the index, then use label indexing.
In [132]: df.loc[df.index[[0, 2]], 'A']
Out[132]:
a 1
c 3
Name: A, dtype: int64
Using .iloc. Here we will get the location of the ‘A’ column, then use positional indexing to select things.
In [133]: df.iloc[[0, 2], df.columns.get_loc('A')]
Out[133]:
a 1
c 3
Name: A, dtype: int64
Panel is deprecated and will be removed in a future version. The recommended way to represent 3-D data
is with a MultiIndex on a DataFrame via the to_frame() method, or with the xarray package. pandas provides
a to_xarray() method to automate this conversion. For more details see Deprecate Panel documentation.
(GH13563).
In [134]: p = tm.makePanel()
In [135]: p
Out[135]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
In [137]: p.to_xarray()
Out[137]:
<xarray.DataArray (items: 3, major_axis: 3, minor_axis: 4)>
array([[[ 0.628776, 0.988138, -0.938153, -0.223019],
[ 0.186494, -0.072608, -1.239072, 2.123692],
[ 0.952478, -0.550603, 0.139683, 0.122273]],
Here is a typical and useful syntax for computing different aggregations for different columns. We aggregate
using the dict-to-list form by taking the specified columns and applying the list of functions to each. This
returns a MultiIndex for the columns (this is not deprecated); see the sketch below.
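A sketch of that dict-to-list style (the column names and functions here are illustrative assumptions):
df.groupby('A').agg({'B': ['sum', 'max'], 'C': ['count', 'min']})   # MultiIndex columns, not deprecated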
Here’s an example of the first deprecation, passing a dict to a grouped Series. This is a combination aggregation &
renaming:
Out[6]:
foo
A
1 3
2 2
In [23]: (df.groupby('A')
.agg({'B': {'foo': 'sum'}, 'C': {'bar': 'min'}})
)
FutureWarning: using a dict with renaming is deprecated and
will be removed in a future version
Out[23]:
B C
foo bar
A
1 3 0
2 7 3
In [142]: (df.groupby('A')
.....: .agg({'B': 'sum', 'C': 'min'})
.....: .rename(columns={'B': 'foo', 'C': 'bar'})
.....: )
.....:
Out[142]:
foo bar
A
1 3 0
2 7 3
The pandas.tools.plotting module has been deprecated, in favor of the top level pandas.plotting mod-
ule. All the public plotting functions are now available from pandas.plotting (GH12548).
Furthermore, the top-level pandas.scatter_matrix and pandas.plot_params are deprecated. Users can
import these from pandas.plotting as well.
Previous script:
pd.tools.plotting.scatter_matrix(df)
pd.scatter_matrix(df)
Should be changed to:
pd.plotting.scatter_matrix(df)
• SparseArray.to_dense() has deprecated the fill parameter, as that parameter was not being respected
(GH14647)
• SparseSeries.to_dense() has deprecated the sparse_only parameter (GH14647)
• Series.repeat() has deprecated the reps parameter in favor of repeats (GH12662)
• The Series constructor and .astype method have deprecated accepting timestamp dtypes without a fre-
quency (e.g. np.datetime64) for the dtype parameter (GH15524)
• Index.repeat() and MultiIndex.repeat() have deprecated the n parameter in favor of repeats
(GH12662)
• Categorical.searchsorted() and Series.searchsorted() have deprecated the v parameter in
favor of value (GH12662)
• TimedeltaIndex.searchsorted(), DatetimeIndex.searchsorted(), and PeriodIndex.
searchsorted() have deprecated the key parameter in favor of value (GH12662)
• DataFrame.astype() has deprecated the raise_on_error parameter in favor of errors (GH14878)
• Series.sortlevel and DataFrame.sortlevel have been deprecated in favor of Series.
sort_index and DataFrame.sort_index (GH15099)
• importing concat from pandas.tools.merge has been deprecated in favor of imports from the pandas
namespace. This should only affect explicit imports (GH15358)
• The pandas.rpy module is removed. Similar functionality can be accessed through the rpy2 project. See the
R interfacing docs for more details.
• The pandas.io.ga module with a google-analytics interface is removed (GH11308). Similar func-
tionality can be found in the Google2Pandas package.
• pd.to_datetime and pd.to_timedelta have dropped the coerce parameter in favor of errors
(GH13602)
• pandas.stats.fama_macbeth, pandas.stats.ols, pandas.stats.plm and pandas.
stats.var, as well as the top-level pandas.fama_macbeth and pandas.ols routines are removed.
Similar functionality can be found in the statsmodels package. (GH11898)
• The TimeSeries and SparseTimeSeries classes, aliases of Series and SparseSeries, are removed
(GH10890, GH15098).
• Series.is_time_series is dropped in favor of Series.index.is_all_dates (GH15098)
• The deprecated irow, icol, iget and iget_value methods are removed in favor of iloc and iat as
explained here (GH10711).
• The deprecated DataFrame.iterkv() has been removed in favor of DataFrame.iteritems()
(GH10711)
• The Categorical constructor has dropped the name parameter (GH10632)
• Categorical has dropped support for NaN categories (GH10748)
• The take_last parameter has been dropped from duplicated(), drop_duplicates(),
nlargest(), and nsmallest() methods (GH10236, GH10792, GH10920)
• Series, Index, and DataFrame have dropped the sort and order methods (GH10726)
• Where clauses in pytables are only accepted as strings and expression types, not other data types
(GH12027)
• DataFrame has dropped the combineAdd and combineMult methods in favor of add and mul respec-
tively (GH10735)
1.11.7.1 Conversion
• Bug in Timestamp.replace now raises TypeError when incorrect argument names are given; previously
this raised ValueError (GH15240)
• Bug in Timestamp.replace with compat for passing long integers (GH15030)
• Bug in Timestamp returning UTC based time/date attributes when a timezone was provided (GH13303,
GH6538)
• Bug in Timestamp incorrectly localizing timezones during construction (GH11481, GH15777)
• Bug in TimedeltaIndex addition where overflow was being allowed without error (GH14816)
• Bug in TimedeltaIndex raising a ValueError when boolean indexing with loc (GH14946)
• Bug in catching an overflow in Timestamp + Timedelta/Offset operations (GH15126)
• Bug in DatetimeIndex.round() and Timestamp.round() floating point accuracy when rounding by
milliseconds or less (GH14440, GH15578)
• Bug in astype() where inf values were incorrectly converted to integers. This now raises an error with
astype() for Series and DataFrames (GH14265)
• Bug in DataFrame(..).apply(to_numeric) when values are of type decimal.Decimal. (GH14827)
• Bug in describe() when passing a numpy array which does not contain the median to the percentiles
keyword argument (GH14908)
• Cleaned up PeriodIndex constructor, including raising on floats more consistently (GH13277)
• Bug in using __deepcopy__ on empty NDFrame objects (GH15370)
• Bug in .replace() may result in incorrect dtypes. (GH12747, GH15765)
• Bug in Series.replace and DataFrame.replace which failed on empty replacement dicts (GH15289)
• Bug in Series.replace which replaced a numeric by string (GH15743)
• Bug in Index construction with NaN elements and integer dtype specified (GH15187)
• Bug in Series construction with a datetimetz (GH14928)
• Bug in Series.dt.round() inconsistent behaviour on NaT ‘s with different arguments (GH14940)
• Bug in Series constructor when both copy=True and dtype arguments are provided (GH15125)
• An incorrectly dtyped Series was returned by comparison methods (e.g., lt, gt, ...) against a constant for an
empty DataFrame (GH15077)
• Bug in Series.ffill() with mixed dtypes containing tz-aware datetimes. (GH14956)
• Bug in DataFrame.fillna() where the argument downcast was ignored when fillna value was of type
dict (GH15277)
• Bug in .asfreq(), where frequency was not set for empty Series (GH14320)
• Bug in DataFrame construction with nulls and datetimes in a list-like (GH15869)
• Bug in DataFrame.fillna() with tz-aware datetimes (GH15855)
• Bug in is_string_dtype, is_timedelta64_ns_dtype, and is_string_like_dtype in which
an error was raised when None was passed in (GH15941)
• Bug in the return type of pd.unique on a Categorical, which was returning an ndarray and not a
Categorical (GH15903)
• Bug in Index.to_series() where the index was not copied (and so mutating later would change the
original), (GH15949)
• Bug in partial string indexing with a length-1 DataFrame (GH16071)
• Bug in Series construction where passing invalid dtype didn’t raise an error. (GH15520)
1.11.7.2 Indexing
• Bug in Series.where() where TZ-aware data was converted to float representation (GH15701)
• Bug in .loc that would not return the correct dtype for scalar access for a DataFrame (GH11617)
• Bug in output formatting of a MultiIndex when names are integers (GH12223, GH15262)
• Bug in Categorical.searchsorted() where alphabetical instead of the provided categorical order was
used (GH14522)
• Bug in Series.iloc where a Categorical object for list-like indexes input was returned, where a
Series was expected. (GH14580)
• Bug in DataFrame.isin comparing datetimelike to empty frame (GH15473)
• Bug in .reset_index() when an all NaN level of a MultiIndex would fail (GH6322)
• Bug in .reset_index() when raising error for index name already present in MultiIndex columns
(GH16120)
• Bug in creating a MultiIndex with tuples and not passing a list of names; this will now raise ValueError
(GH15110)
• Bug in the HTML display with a MultiIndex and truncation (GH14882)
• Bug in the display of .info() where a qualifier (+) would always be displayed with a MultiIndex that
contains only non-strings (GH15245)
• Bug in pd.concat() where the names of MultiIndex of resulting DataFrame are not handled correctly
when None is presented in the names of MultiIndex of input DataFrame (GH15787)
• Bug in DataFrame.sort_index() and Series.sort_index() where na_position doesn’t work
with a MultiIndex (GH14784, GH16604)
• Bug in pd.concat() when combining objects with a CategoricalIndex (GH16111)
• Bug in indexing with a scalar and a CategoricalIndex (GH16123)
1.11.7.3 I/O
• Bug in pd.to_numeric() in which float and unsigned integer elements were being improperly casted
(GH14941, GH15005)
• Bug in pd.read_fwf() where the skiprows parameter was not being respected during column width infer-
ence (GH11256)
• Bug in pd.read_csv() in which the dialect parameter was not being verified before processing
(GH14898)
• Bug in pd.read_csv() in which missing data was being improperly handled with usecols (GH6710)
• Bug in pd.read_csv() in which a file containing a row with many columns followed by rows with fewer
columns would cause a crash (GH14125)
• Bug in pd.read_csv() for the C engine where usecols were being indexed incorrectly with
parse_dates (GH14792)
• Bug in pd.read_csv() with parse_dates when multiline headers are specified (GH15376)
• Bug in pd.read_csv() with float_precision='round_trip' which caused a segfault when a text
entry is parsed (GH15140)
• Bug in pd.read_csv() when an index was specified and no values were specified as null values (GH15835)
• Bug in pd.read_csv() in which certain invalid file objects caused the Python interpreter to crash (GH15337)
• Bug in pd.read_csv() in which invalid values for nrows and chunksize were allowed (GH15767)
• Bug in pd.read_csv() for the Python engine in which unhelpful error messages were being raised when
parsing errors occurred (GH15910)
• Bug in pd.read_csv() in which the skipfooter parameter was not being properly validated (GH15925)
• Bug in pd.to_csv() in which there was numeric overflow when a timestamp index was being written
(GH15982)
• Bug in pd.util.hashing.hash_pandas_object() in which hashing of categoricals depended on the
ordering of categories, instead of just their values. (GH15143)
• Bug in .to_json() where lines=True and contents (keys or values) contain escaped characters
(GH15096)
• Bug in .to_json() causing single byte ascii characters to be expanded to four byte unicode (GH15344)
• Bug in .to_json() for the C engine where rollover was not correctly handled for case where frac is odd and
diff is exactly 0.5 (GH15716, GH15864)
• Bug in pd.read_json() for Python 2 where lines=True and contents contain non-ascii unicode charac-
ters (GH15132)
• Bug in pd.read_msgpack() in which Series categoricals were being improperly processed (GH14901)
• Bug in pd.read_msgpack() which did not allow loading of a dataframe with an index of type
CategoricalIndex (GH15487)
• Bug in pd.read_msgpack() when deserializing a CategoricalIndex (GH15487)
• Bug in DataFrame.to_records() with converting a DatetimeIndex with a timezone (GH13937)
• Bug in DataFrame.to_records() which failed with unicode characters in column names (GH11879)
• Bug in .to_sql() when writing a DataFrame with numeric index names (GH15404).
• Bug in DataFrame.to_html() with index=False and max_rows raising an IndexError
(GH14998)
• Bug in pd.read_hdf() passing a Timestamp to the where parameter with a non date column (GH15492)
• Bug in DataFrame.to_stata() and StataWriter which produced incorrectly formatted files for some
locales (GH13856)
• Bug in StataReader and StataWriter which allows invalid encodings (GH15723)
• Bug in the Series repr not showing the length when the output was truncated (GH15962).
1.11.7.4 Plotting
1.11.7.5 Groupby/Resample/Rolling
1.11.7.6 Sparse
1.11.7.7 Reshaping
• Bug in pd.merge_asof() where left_index or right_index caused a failure when multiple by was
specified (GH15676)
• Bug in pd.merge_asof() where left_index/right_index together caused a failure when
tolerance was specified (GH15135)
• Bug in DataFrame.pivot_table() where dropna=True would not drop all-NaN columns when the
columns was a category dtype (GH15193)
• Bug in pd.melt() where passing a tuple value for value_vars caused a TypeError (GH15348)
• Bug in pd.pivot_table() where no error was raised when values argument was not in the columns
(GH14938)
• Bug in pd.concat() in which concatenating with an empty dataframe with join='inner' was being
improperly handled (GH15328)
• Bug with sort=True in DataFrame.join and pd.merge when joining on indexes (GH15582)
• Bug in DataFrame.nsmallest and DataFrame.nlargest where identical values resulted in dupli-
cated rows (GH15297)
• Bug in pandas.pivot_table() incorrectly raising UnicodeError when passing unicode input for
margins keyword (GH13292)
1.11.7.8 Numeric
1.11.7.9 Other
This is a minor bug-fix release in the 0.19.x series and includes some small regression fixes, bug fixes and performance
improvements. We recommend that all users upgrade to this version.
Highlights include:
• Compatibility with Python 3.6
• Added a Pandas Cheat Sheet. (GH13202).
• Enhancements
• Performance Improvements
• Bug Fixes
1.12.1 Enhancements
This is a minor bug-fix release from 0.19.0 and includes some small regression fixes, bug fixes and performance
improvements. We recommend that all users upgrade to this version.
• Performance Improvements
• Bug Fixes
• Source installs from PyPI will now again work without cython installed, as in previous versions (GH14204)
• Compat with Cython 0.25 for building (GH14496)
• Fixed regression where user-provided file handles were closed in read_csv (c engine) (GH14418).
• Fixed regression in DataFrame.quantile when missing values were present in some columns
(GH14357).
• Fixed regression in Index.difference where the freq of a DatetimeIndex was incorrectly set
(GH14323)
• Added back pandas.core.common.array_equivalent with a deprecation warning (GH14555).
• Bug in pd.read_csv for the C engine in which quotation marks were improperly parsed in skipped rows
(GH14459)
• Bug in pd.read_csv for Python 2.x in which Unicode quote characters were no longer being respected
(GH14477)
• Fixed regression in Index.append when categorical indices were appended (GH14545).
• Fixed regression in pd.DataFrame where constructor fails when given dict with None value (GH14381)
• Fixed regression in DatetimeIndex._maybe_cast_slice_bound when index is empty (GH14354).
• Bug in localizing an ambiguous timezone when a boolean is passed (GH14402)
• Bug in TimedeltaIndex addition with a Datetime-like object where addition overflow in the negative direc-
tion was not being caught (GH14068, GH14453)
• Bug in string indexing against data with object Index may raise AttributeError (GH14424)
• Correctly raise ValueError on empty input to pd.eval() and df.query() (GH13139)
• Bug in RangeIndex.intersection when the result is an empty set (GH14364).
• Bug in groupby-transform broadcasting that could cause incorrect dtype coercion (GH14457)
• Bug in Series.__setitem__ which allowed mutating read-only arrays (GH14359).
• Bug in DataFrame.insert where multiple calls with duplicate columns can fail (GH14291)
• pd.merge() will raise ValueError when non-boolean values are passed for boolean-type arguments
(GH14434)
• Bug in Timestamp where dates very near the minimum (1677-09) could underflow on creation (GH14415)
• Bug in pd.concat where names of the keys were not propagated to the resulting MultiIndex (GH14252)
• Bug in pd.concat where axis cannot take string parameters 'rows' or 'columns' (GH14369)
• Bug in pd.concat with dataframes heterogeneous in length and tuple keys (GH14438)
• Bug in MultiIndex.set_levels where illegal level values were still set after raising an error (GH13754)
• Bug in DataFrame.to_json where lines=True and a value contained a } character (GH14391)
• Bug in df.groupby causing an AttributeError when grouping a single index frame by a column and
the index level (GH14327)
• Bug in df.groupby where TypeError raised when pd.Grouper(key=...) is passed in a list
(GH14334)
• Bug in pd.pivot_table may raise TypeError or ValueError when index or columns is not scalar
and values is not specified (GH14380)
This is a major release from 0.18.1 and includes a number of API changes, several new features, enhancements, and
performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
• merge_asof() for asof-style time-series joining, see here
• .rolling() is now time-series aware, see here
• read_csv() now supports parsing Categorical data, see here
• A function union_categorical() has been added for combining categoricals, see here
• PeriodIndex now has its own period dtype, and changed to be more consistent with other Index classes.
See here
• Sparse data structures gained enhanced support of int and bool dtypes, see here
• Comparison operations with Series no longer ignores the index, see here for an overview of the API changes.
• Introduction of a pandas development API for utility functions, see here.
• Deprecation of Panel4D and PanelND. We recommend representing these types of n-dimensional data with
the xarray package.
• Removal of the previously deprecated modules pandas.io.data, pandas.io.wb, pandas.tools.
rplot.
Warning: pandas >= 0.19.0 will no longer silence numpy ufunc warnings upon import, see here.
• New features
– merge_asof for asof-style time-series joining
– .rolling() is now time-series aware
– read_csv has improved support for duplicate column names
– read_csv supports parsing Categorical directly
– Categorical Concatenation
– Semi-Month Offsets
– New Index methods
– Google BigQuery Enhancements
– Fine-grained numpy errstate
– get_dummies now returns integer dtypes
– Downcast values to smallest possible dtype in to_numeric
– pandas development API
– Other enhancements
• API changes
– Series.tolist() will now return Python types
– Series operators for different indexes
* Arithmetic operators
* Comparison operators
* Logical operators
* Flexible comparison methods
– Series type promotion on assignment
– .to_datetime() changes
– Merging changes
– .describe() changes
– Period changes
• Deprecations
• Removal of prior version deprecations/changes
• Performance Improvements
• Bug Fixes
A long-time requested feature has been added through the merge_asof() function, to support asof style joining of
time-series (GH1870, GH13695, GH13709, GH13902). Full documentation is here.
The merge_asof() performs an asof merge, which is similar to a left-join except that we match on nearest key
rather than equal keys.
In [3]: left
Out[3]:
a left_val
0 1 a
1 5 b
2 10 c
In [4]: right
Out[4]:
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
We typically want to match exactly when possible, and use the most recent value otherwise.
We can also match rows ONLY with prior data, and not an exact match.
In a typical time-series example, we have trades and quotes and we want to asof-join them. This also
illustrates using the by parameter to group data before merging.
In [9]: trades
Out[9]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [10]: quotes
An asof merge joins on the on field, typically an ordered datetimelike column, and in this case we are using a grouper
in the by field. This is like a left-outer join, except that forward filling happens automatically, taking the most recent
non-NaN value.
This returns a merged DataFrame with the entries in the same order as the original left passed DataFrame (trades
in this case), with the fields of the quotes merged.
.rolling() objects are now time-series aware and can accept a time-series offset (or convertible) for the window
argument (GH13327, GH12995). See the full documentation here.
In [13]: dft
Out[13]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 2.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 4.0
This is a regular frequency index. Using an integer window parameter works to roll along the window frequency.
In [14]: dft.rolling(2).sum()
Out[14]:
B
2013-01-01 09:00:00 NaN
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 NaN
In [15]: dft.rolling(2, min_periods=1).sum()
Out[15]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
In [16]: dft.rolling('2s').sum()
Out[16]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
Using a non-regular, but still monotonic index, rolling with an integer window does not impart any special calculation.
In [18]: dft
Out[18]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
In [19]: dft.rolling(2).sum()
Out[19]:
B
foo
2013-01-01 09:00:00 NaN
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 NaN
Using the time-specification generates variable windows for this sparse data.
In [20]: dft.rolling('2s').sum()
Out[20]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Furthermore, we now allow an optional on parameter to specify a column (rather than the default of the index) in a
DataFrame.
In [22]: dft
Out[22]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 2.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 3.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
Duplicate column names are now supported in read_csv() whether they are in the file or passed in as the names
parameter (GH7160, GH9424)
Previous behavior:
The first a column contained the same data as the second a column, when it should have contained the values [0,
3].
New behavior:
The read_csv() function now supports parsing a Categorical column when specified as a dtype (GH10153).
Depending on the structure of the data, this can result in a faster parse time and lower memory usage compared to
converting to Categorical after parsing. See the io docs here.
In [28]: pd.read_csv(StringIO(data))
Out[28]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [29]: pd.read_csv(StringIO(data)).dtypes
Out[29]:
col1 object
col2 object
col3 int64
dtype: object
In [30]: pd.read_csv(StringIO(data), dtype='category').dtypes
Out[30]:
col1 category
col2 category
col3 category
dtype: object
Note: The resulting categories will always be parsed as strings (object dtype). If the categories are numeric they can
be converted using the to_numeric() function, or as appropriate, another converter such as to_datetime().
In [33]: df.dtypes
Out[33]:
col1 category
col2 category
col3 category
dtype: object
In [34]: df['col3']
Out[34]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): [1, 2, 3]
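The conversion step between the two outputs below is not shown; following the note above, a plausible call (an assumption) that converts the string categories to integers while keeping the categorical dtype is:
df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories)   # categories become int64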
In [36]: df['col3']
Out[36]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
• A function union_categoricals() has been added for combining categoricals, see Unioning Categoricals
(GH13361, GH13763, GH13846, GH14173)
In [37]: from pandas.api.types import union_categoricals
• concat and append now can concat category dtypes with different categories as object dtype
(GH13524)
In [41]: s1 = pd.Series(['a', 'b'], dtype='category')
Previous behavior:
In [1]: pd.concat([s1, s2])
ValueError: incompatible categories in categorical concat
New behavior:
In [43]: pd.concat([s1, s2])
Out[43]:
Pandas has gained new frequency offsets, SemiMonthEnd (‘SM’) and SemiMonthBegin (‘SMS’). These provide
date offsets anchored (by default) to the 15th and end of month, and 15th and 1st of month respectively. (GH1543)
SemiMonthEnd:
SemiMonthBegin:
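Illustrative examples (the dates chosen here are assumptions, not the original ones):
from pandas.tseries.offsets import SemiMonthEnd, SemiMonthBegin

pd.Timestamp('2016-01-01') + SemiMonthEnd()          # Timestamp('2016-01-15 00:00:00')
pd.date_range('2015-01-01', freq='SM', periods=4)    # 15th and last day of each month
pd.Timestamp('2016-01-02') + SemiMonthBegin()        # Timestamp('2016-01-15 00:00:00')
pd.date_range('2015-01-01', freq='SMS', periods=4)   # 1st and 15th of each month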
Using the anchoring suffix, you can also specify the day of month to use instead of the 15th; for example,
freq='SM-14' anchors to the 14th and the end of month.
The following methods and options are added to Index, to be more consistent with the Series and DataFrame
API.
Index now supports the .where() function for same shape indexing (GH13170)
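A brief sketch of .where (the values are assumptions), plus a plausible definition of the idx used by .dropna() below, consistent with its output and assuming numpy is imported as np:
pd.Index(['a', 'b', 'c']).where([True, False, True])   # Index(['a', nan, 'c'], dtype='object')

idx = pd.Index([1, 2, np.nan, 4])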
In [54]: idx.dropna()
Out[54]: Float64Index([1.0, 2.0, 4.0], dtype='float64')
For MultiIndex, values are dropped if any level is missing by default. Specifying how='all' only drops values
where all levels are missing.
In [56]: midx
Out[56]:
MultiIndex(levels=[[1, 2, 4], [1, 2]],
labels=[[0, 1, -1, 2], [0, 1, -1, -1]])
In [57]: midx.dropna()
Out[57]:
MultiIndex(levels=[[1, 2, 4], [1, 2]],
labels=[[0, 1], [0, 1]])
In [58]: midx.dropna(how='all')
Out[58]:
MultiIndex(levels=[[1, 2, 4], [1, 2]],
labels=[[0, 1, 2], [0, 1, -1]])
Index now supports .str.extractall() which returns a DataFrame, see the docs here (GH10008, GH13156)
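The idx used below is not shown; a definition consistent with the matches in the output (an assumption) is:
idx = pd.Index(['a1a2', 'b1', 'c1'])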
In [60]: idx.str.extractall("[ab](?P<digit>\d)")
Out[60]:
digit
match
0 0 1
1 2
1 0 1
Index.astype() now accepts an optional boolean argument copy, which allows optional copying if the require-
ments on dtype are satisfied (GH13209)
• The read_gbq() method has gained the dialect argument to allow users to specify whether to use Big-
Query’s legacy SQL or BigQuery’s standard SQL. See the docs for more details (GH13615).
• The to_gbq() method now allows the DataFrame column order to differ from the destination table schema
(GH11359).
Previous versions of pandas would permanently silence numpy’s ufunc error handling when pandas was imported.
Pandas did this in order to silence the warnings that would arise from using numpy ufuncs on missing data, which are
usually represented as NaNs. Unfortunately, this silenced legitimate warnings arising in non-pandas code in the
application. Starting with 0.19.0, pandas will use the numpy.errstate context manager to silence these warnings in
a more fine-grained manner, only around where these operations are actually used in the pandas codebase. (GH13109,
GH13145)
After upgrading pandas, you may see new RuntimeWarnings being issued from your code. These are likely legiti-
mate, and the underlying cause likely existed in the code when using previous versions of pandas that simply silenced
the warning. Use numpy.errstate around the source of the RuntimeWarning to control how these conditions are
handled.
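For example, to silence such a warning locally (a generic illustration, not taken from the release notes):
import numpy as np

with np.errstate(invalid='ignore'):
    np.sqrt(np.array([-1.0, 4.0]))   # no RuntimeWarning for the invalid value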
The pd.get_dummies function now returns dummy-encoded columns as small integers, rather than floats
(GH8725). This should provide an improved memory footprint.
Previous behavior:
Out[1]:
a float64
b float64
c float64
dtype: object
New behavior:
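A sketch of the new dtypes (the input list is an assumption; the small integer type used is uint8):
pd.get_dummies(['a', 'b', 'a', 'c']).dtypes
# a    uint8
# b    uint8
# c    uint8
# dtype: object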
pd.to_numeric() now accepts a downcast parameter, which will downcast the data if possible to smallest
specified numerical dtype (GH13352)
In [62]: s = ['1', 2, 3]
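Continuing with s above, a sketch of the downcast option (the commented outputs are what this call produces):
pd.to_numeric(s)                        # array([1, 2, 3]), default int64
pd.to_numeric(s, downcast='unsigned')   # array([1, 2, 3], dtype=uint8)
pd.to_numeric(s, downcast='integer')    # array([1, 2, 3], dtype=int8)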
As part of making pandas API more uniform and accessible in the future, we have created a standard sub-package of
pandas, pandas.api to hold public API’s. We are starting by exposing type introspection functions in pandas.
api.types. More sub-packages and officially sanctioned API’s will be published in future versions of pandas
(GH13147, GH13634)
The following are now part of this API:
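The construction of funcs below is not shown; a plausible equivalent (an assumption) is:
import pprint

funcs = [f for f in dir(pd.api.types) if not f.startswith('_')]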
In [68]: pprint.pprint(funcs)
['CategoricalDtype',
'DatetimeTZDtype',
'IntervalDtype',
'PeriodDtype',
'infer_dtype',
'is_any_int_dtype',
'is_array_like',
'is_bool',
'is_bool_dtype',
'is_categorical',
'is_categorical_dtype',
'is_complex',
'is_complex_dtype',
'is_datetime64_any_dtype',
'is_datetime64_dtype',
'is_datetime64_ns_dtype',
'is_datetime64tz_dtype',
'is_datetimetz',
'is_dict_like',
'is_dtype_equal',
'is_extension_type',
'is_file_like',
'is_float',
'is_float_dtype',
'is_floating_dtype',
'is_hashable',
'is_int64_dtype',
'is_integer',
'is_integer_dtype',
'is_interval',
'is_interval_dtype',
'is_iterator',
'is_list_like',
'is_named_tuple',
'is_number',
'is_numeric_dtype',
'is_object_dtype',
'is_period',
'is_period_dtype',
'is_re',
'is_re_compilable',
'is_scalar',
Note: Calling these functions from the internal module pandas.core.common will now show a
DeprecationWarning (GH13990)
• Timestamp can now accept positional and keyword parameters similar to datetime.datetime()
(GH10758, GH11630)
In [69]: pd.Timestamp(2012, 1, 1)
Out[69]: Timestamp('2012-01-01 00:00:00')
• The .resample() function now accepts a on= or level= parameter for resampling on a datetimelike col-
umn or MultiIndex level (GH13500)
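The construction of the frame used below is not fully shown; a plausible equivalent consistent with the frame displayed (an assumption) is:
dates = pd.date_range('2015-01-04', freq='W', periods=5)
df = pd.DataFrame({'date': dates, 'a': np.arange(5)},
                  columns=['date', 'a'],
                  index=pd.MultiIndex.from_arrays([np.arange(1, 6), dates],
                                                  names=['v', 'd']))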
In [72]: df
Out[72]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [73]: df.resample('M', on='date').sum()
Out[73]:
a
date
2015-01-31 6
2015-02-28 4
In [74]: df.resample('M', level='d').sum()
Out[74]:
a
d
2015-01-31 6
2015-02-28 4
• The .get_credentials() method of GbqConnector can now first try to fetch the application default
credentials. See the docs for more details (GH13577).
• The .tz_localize() method of DatetimeIndex and Timestamp has gained the errors keyword,
so you can potentially coerce nonexistent timestamps to NaT. The default behavior remains raising a
NonExistentTimeError (GH13057)
• .to_hdf/read_hdf() now accept path objects (e.g. pathlib.Path, py.path.local) for the file
path (GH11773)
• The pd.read_csv() with engine='python' has gained support for the decimal (GH12933),
na_filter (GH13321) and the memory_map option (GH13381).
• Consistent with the Python API, pd.read_csv() will now interpret +inf as positive infinity (GH13274)
• The pd.read_html() has gained support for the na_values, converters, keep_default_na op-
tions (GH13461)
• Categorical.astype() now accepts an optional boolean argument copy, effective when dtype is cate-
gorical (GH13209)
• DataFrame has gained the .asof() method to return the last non-NaN values according to the selected
subset (GH13358)
• The DataFrame constructor will now respect key ordering if a list of OrderedDict objects are passed in
(GH13304)
• pd.read_html() has gained support for the decimal option (GH12907)
• Series has gained the properties .is_monotonic, .is_monotonic_increasing, .
is_monotonic_decreasing, similar to Index (GH13336)
• DataFrame.to_sql() now allows a single value as the SQL type for all columns (GH11886).
• Series.append now supports the ignore_index option (GH13677)
• .to_stata() and StataWriter can now write variable labels to Stata dta files using a dictionary to map
column names to labels (GH13535, GH13536)
• .to_stata() and StataWriter will automatically convert datetime64[ns] columns to Stata format
%tc, rather than raising a ValueError (GH12259)
• read_stata() and StataReader raise with a more explicit error message when reading Stata files with
repeated value labels when convert_categoricals=True (GH13923)
• DataFrame.style will now render sparsified MultiIndexes (GH11655)
• DataFrame.style will now show column level names (e.g. DataFrame.columns.names) (GH13775)
• DataFrame has gained support to re-order the columns based on the values in a row using df.
sort_values(by='...', axis=1) (GH10806)
In [75]: df = pd.DataFrame({'A': [2, 7], 'B': [3, 5], 'C': [4, 8]},
....: index=['row1', 'row2'])
....:
In [76]: df
Out[76]:
A B C
row1 2 3 4
row2 7 5 8
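Sorting the columns by the values in the 'row2' row (a derived illustration consistent with the frame above):
df.sort_values(by='row2', axis=1)
#       B  A  C
# row1  3  2  4
# row2  5  7  8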
• Added documentation to I/O regarding the perils of reading in columns with mixed dtypes and how to handle it
(GH13746)
• to_html() now has a border argument to control the value in the opening <table> tag. The default is the
value of the html.border option, which defaults to 1. This also affects the notebook HTML repr, but since
Jupyter’s CSS includes a border-width attribute, the visual effect is the same. (GH11563).
• Raise ImportError in the sql functions when sqlalchemy is not installed and a connection string is used
(GH11920).
• Compatibility with matplotlib 2.0. Older versions of pandas should also work with matplotlib 2.0 (GH13333)
• Timestamp, Period, DatetimeIndex, PeriodIndex and .dt accessor have gained a .
is_leap_year property to check whether the date belongs to a leap year. (GH13727)
• astype() will now accept a dict of column name to data types mapping as the dtype argument. (GH12086)
• The pd.read_json and DataFrame.to_json has gained support for reading and writing json lines with
lines option see Line delimited json (GH9180)
• read_excel() now supports the true_values and false_values keyword arguments (GH13347)
• groupby() will now accept a scalar and a single-element list for specifying level on a non-MultiIndex
grouper. (GH13907)
• Non-convertible dates in an excel date column will be returned without conversion and the column will be
object dtype, rather than raising an exception (GH10001).
• pd.Timedelta(None) is now accepted and will return NaT, mirroring pd.Timestamp (GH13687)
• pd.read_stata() can now handle some format 111 files, which are produced by SAS when generating
Stata dta files (GH11526)
• Series and Index now support divmod which will return a tuple of series or indices. This behaves like a
standard binary operator with regards to broadcasting rules (GH14208).
Series.tolist() will now return Python types in the output, mimicking NumPy .tolist() behavior
(GH10904)
In [78]: s = pd.Series([1,2,3])
Previous behavior:
In [7]: type(s.tolist()[0])
Out[7]:
<class 'numpy.int64'>
New behavior:
In [79]: type(s.tolist()[0])
Out[79]: int
The following Series operators have been changed to make all operators consistent, including with DataFrame (GH1134,
GH4581, GH13538):
• Series comparison operators now raise ValueError when the indexes are different.
• Series logical operators align the indexes of both the left and right hand side.
Warning: Until 0.18.1, comparing Series with the same length would succeed even if the .index are
different (the result ignores .index). As of 0.19.0, this will raise ValueError to be more strict. This section
also describes how to keep previous behavior or align different indexes, using the flexible comparison methods like
.eq.
Arithmetic operators
In [82]: s1 + s2
Out[82]:
A 3.0
B 4.0
C NaN
D NaN
dtype: float64
Comparison operators
In [1]: s1 == s2
Out[1]:
A False
B True
C False
dtype: bool
In [2]: s1 == s2
Out[2]:
ValueError: Can only compare identically-labeled Series objects
Note: To achieve the same result as previous versions (compare values based on locations ignoring .index),
compare both .values.
If you want to compare Series aligned on their .index, see the flexible comparison methods section below:
In [87]: s1.eq(s2)
Out[87]:
A False
B True
C False
D False
dtype: bool
Logical operators
Logical operators align both .index of left and right hand side.
Previous behavior (Series), only left hand side index was kept:
In [90]: s1 & s2
Out[90]:
A True
B False
C False
D False
dtype: bool
Note: To achieve the same result as previous versions (compare values based on only left hand side index), you can
use reindex_like:
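For example (a sketch, assuming s1 and s2 as in the examples above):
s1 & s2.reindex_like(s1)   # right hand side reindexed to the left hand side's index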
Series flexible comparison methods like eq, ne, le, lt, ge and gt now align both indexes. Use these methods
if you want to compare two Series which have different indexes.
In [97]: s1.eq(s2)
Out[97]:
a False
b True
c False
d False
dtype: bool
In [98]: s1.ge(s2)
Out[98]:
a False
b True
c True
d False
dtype: bool
A Series will now correctly promote its dtype for assignment with incompatible values to the current dtype (GH13234)
In [99]: s = pd.Series()
Previous behavior:
New behavior:
In [102]: s
Out[102]:
a 2016-01-01 00:00:00
b 3
dtype: object
In [103]: s.dtype
Out[103]: dtype('O')
Previously, if .to_datetime() encountered mixed integers/floats and strings, but no datetimes, with
errors='coerce' it would convert all to NaT.
Previous behavior:
In [2]: pd.to_datetime([1, 'foo'], errors='coerce')
Out[2]: DatetimeIndex(['NaT', 'NaT'], dtype='datetime64[ns]', freq=None)
Current behavior:
This will now convert integers/floats with the default unit of ns.
In [104]: pd.to_datetime([1, 'foo'], errors='coerce')
Out[104]: DatetimeIndex(['1970-01-01 00:00:00.000000001', 'NaT'], dtype=
'datetime64[ns]', freq=None)
Merging will now preserve the dtype of the join keys (GH8596)
In [105]: df1 = pd.DataFrame({'key': [1], 'v1': [10]})
In [106]: df1
Out[106]:
key v1
0 1 10
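df2 is not defined above; a construction consistent with the output shown below (assumed) is:
df2 = pd.DataFrame({'key': [1, 2], 'v1': [20, 30]})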
In [108]: df2
Out[108]:
key v1
0 1 20
1 2 30
Previous behavior:
In [5]: pd.merge(df1, df2, how='outer')
Out[5]:
   key    v1
0  1.0  10.0
1  1.0  20.0
2  2.0  30.0
New behavior:
We are able to preserve the join keys
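A sketch of the preserved join-key dtype (the calls shown in the original are not reproduced above):
pd.merge(df1, df2, how='outer')
#    key  v1
# 0    1  10
# 1    1  20
# 2    2  30
pd.merge(df1, df2, how='outer').dtypes
# key    int64
# v1     int64
# dtype: object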
Of course if you have missing values that are introduced, then the resulting dtype will be upcast, which is unchanged
from previous.
Percentile identifiers in the index of a .describe() output will now be rounded to the least precision that keeps
them distinct (GH13104)
Previous behavior:
The percentiles were rounded to at most one decimal place, which could raise ValueError for a data frame if the
percentiles were duplicated.
New behavior:
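The call that produced the table below is not shown above; a likely reconstruction (the input frame is assumed) is:
df = pd.DataFrame([0, 1, 2, 3, 4])
df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])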
0
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.01% 0.000400
0.05% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
99.95% 3.998000
99.99% 3.999600
max 4.000000
Furthermore, passing duplicated percentiles to .describe() now raises a ValueError.
PeriodIndex now has its own period dtype. The period dtype is a pandas extension dtype like category or
the timezone aware dtype (datetime64[ns, tz]) (GH13941). As a consequence of this change, PeriodIndex
no longer has an integer dtype:
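The construction of pi is not shown above; it is presumably a daily PeriodIndex such as:
pi = pd.PeriodIndex(['2016-08-01'], freq='D')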
Previous behavior:
In [2]: pi
Out[2]: PeriodIndex(['2016-08-01'], dtype='int64', freq='D')
In [3]: pd.api.types.is_integer_dtype(pi)
Out[3]: True
In [4]: pi.dtype
Out[4]: dtype('int64')
New behavior:
In [118]: pi
Out[118]: PeriodIndex(['2016-08-01'], dtype='period[D]', freq='D')
In [119]: pd.api.types.is_integer_dtype(pi)
Out[119]: False
In [120]: pd.api.types.is_period_dtype(pi)
Out[120]: True
In [121]: pi.dtype
Out[121]: period[D]
In [122]: type(pi.dtype)
Out[122]: pandas.core.dtypes.dtypes.PeriodDtype
Previously, Period has its own Period('NaT') representation different from pd.NaT. Now Period('NaT')
has been changed to return pd.NaT. (GH12759, GH13582)
New behavior:
These now return pd.NaT without requiring a freq option.
In [123]: pd.Period('NaT')
Out[123]: NaT
In [124]: pd.Period(None)
Out[124]: NaT
To be compatible with Period addition and subtraction, pd.NaT now supports addition and subtraction with int.
Previously it raised ValueError.
Previous behavior:
In [5]: pd.NaT + 1
...
ValueError: Cannot add integral value to Timestamp without freq.
New behavior:
In [125]: pd.NaT + 1
Out[125]: NaT
In [126]: pd.NaT - 1
Out[126]: NaT
PeriodIndex.values is changed to return an array of Period objects, rather than an array of integers (GH13988).
New behavior:
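The pi here is a different, monthly PeriodIndex whose construction is not shown above; presumably:
pi = pd.PeriodIndex(['2011-01', '2011-02'], freq='M')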
In [128]: pi.values
Out[128]: array([Period('2011-01', 'M'), Period('2011-02', 'M')], dtype=object)
Addition and subtraction of the base Index type and of DatetimeIndex (not the numeric index types) previously per-
formed set operations (set union and difference). This behavior was already deprecated since 0.15.0 (in favor using
the specific .union() and .difference() methods), and is now disabled. When possible, + and - are now used
for element-wise operations, for example for concatenating strings or subtracting datetimes (GH8227, GH14127).
Previously, the same operation performed a (deprecated) set union. New behavior: it now performs element-wise addition:
In [129]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
Out[129]: Index(['aa', 'bc'], dtype='object')
Note that numeric Index objects already performed element-wise operations. For example, the behavior of adding two
integer Indexes is unchanged. The base Index is now made consistent with this behavior.
In [130]: pd.Index([1, 2, 3]) + pd.Index([2, 3, 4])
Out[130]: Int64Index([3, 5, 7], dtype='int64')
Further, because of this change, it is now possible to subtract two DatetimeIndex objects resulting in a TimedeltaIndex:
Previous behavior:
In [1]: pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
New behavior:
In [131]: pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
Out[131]: TimedeltaIndex(['-1 days', '-1 days'], dtype='timedelta64[ns]', freq=None)
Index.difference and Index.symmetric_difference will now, more consistently, treat NaN values as
any other values. (GH13514)
In [132]: idx1 = pd.Index([1, 2, 3, np.nan])
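The second index is not defined above; a definition consistent with the results below (assumed) is:
idx2 = pd.Index([0, 1, np.nan])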
Previous behavior:
In [3]: idx1.difference(idx2)
Out[3]: Float64Index([nan, 2.0, 3.0], dtype='float64')
In [4]: idx1.symmetric_difference(idx2)
Out[4]: Float64Index([0.0, nan, 2.0, 3.0], dtype='float64')
New behavior:
In [134]: idx1.difference(idx2)
Out[134]: Float64Index([2.0, 3.0], dtype='float64')
In [135]: idx1.symmetric_difference(idx2)
Out[135]: Float64Index([0.0, 2.0, 3.0], dtype='float64')
Index.unique() now returns unique values as an Index of the appropriate dtype. (GH13395). Previously, most
Index classes returned np.ndarray, and DatetimeIndex, TimedeltaIndex and PeriodIndex returned
Index to keep metadata like timezone.
Previous behavior:
In [1]: pd.Index([1, 2, 3]).unique()
Out[1]: array([1, 2, 3])
Out[2]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq=None)
New behavior:
In [136]: pd.Index([1, 2, 3]).unique()
Out[136]: Int64Index([1, 2, 3], dtype='int64')
Out[137]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq=None)
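The construction of the midx used below is not shown above; a sketch that yields the MultiIndex displayed (categories and values assumed from the output) is:
cat = pd.Categorical(['a', 'b'], categories=['b', 'a', 'c'])
midx = pd.MultiIndex.from_arrays([cat, ['foo', 'bar']])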
In [141]: midx
Out[141]:
MultiIndex(levels=[['b', 'a', 'c'], ['bar', 'foo']],
labels=[[1, 0], [1, 0]])
Previous behavior:
In [4]: midx.levels[0]
Out[4]: Index(['b', 'a', 'c'], dtype='object')
In [5]: midx.get_level_values(0)
Out[5]: Index(['a', 'b'], dtype='object')
New behavior:
In [143]: midx.get_level_values(0)
Out[143]: CategoricalIndex(['a', 'b'], categories=['b', 'a', 'c'], ordered=False, dtype='category')
Previous behavior:
In [11]: df_grouped.index.levels[1]
Out[11]: Index(['b', 'a', 'c'], dtype='object', name='C')
In [12]: df_grouped.reset_index().dtypes
Out[12]:
A int64
C object
B float64
dtype: object
In [13]: df_set_idx.index.levels[1]
Out[13]: Index(['b', 'a', 'c'], dtype='object', name='C')
In [14]: df_set_idx.reset_index().dtypes
Out[14]:
A int64
C object
B int64
dtype: object
New behavior:
In [147]: df_grouped.index.levels[1]
Out[147]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, name='C', dtype='category')
In [148]: df_grouped.reset_index().dtypes
Out[148]:
A       int64
C    category
B     float64
dtype: object
In [149]: df_set_idx.index.levels[1]
Out[149]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, name='C', dtype='category')
In [150]: df_set_idx.reset_index().dtypes
Out[150]:
A       int64
C    category
B       int64
dtype: object
When read_csv() is called with chunksize=n and without specifying an index, each chunk used to have an
independently generated index from 0 to n-1. They are now given instead a progressive index, starting from 0 for
the first chunk, from n for the second, and so on, so that, when concatenated, they are identical to the result of calling
read_csv() without the chunksize= argument (GH12185).
These changes allow pandas to handle sparse data with more dtypes, and work toward a smoother experience with
data handling.
Sparse data structures now gained enhanced support of int64 and bool dtype (GH667, GH13849).
Previously, sparse data were float64 dtype by default, even if all inputs were of int or bool dtype. You had to
specify dtype explicitly to create sparse data with int64 dtype. Also, fill_value had to be specified explicitly
because the default was np.nan which doesn’t appear in int64 or bool data.
# specifying int64 dtype, but all values are stored in sp_values because
# fill_value default is np.nan
In [2]: pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
Out[2]:
[1, 2, 0, 0]
Fill: nan
IntIndex
Indices: array([0, 1, 2, 3], dtype=int32)
As of v0.19.0, sparse data keeps the input dtype, and uses more appropriate fill_value defaults (0 for int64
dtype, False for bool dtype).
In [153]: pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
Out[153]:
[1, 2, 0, 0]
Fill: 0
IntIndex
Indices: array([0, 1], dtype=int32)
• Sparse data structure now can preserve dtype after arithmetic ops (GH13848)
In [155]: s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
In [156]: s.dtype
Out[156]: dtype('int64')
In [157]: s + 1
Out[157]:
0    1
1    3
2    1
3    2
dtype: int64
• Sparse data structures now support astype to convert the internal dtype (GH13900)
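The Series displayed below was constructed earlier in the original; a definition consistent with the shown values (assumed) is:
s = pd.SparseSeries([1., 0., 2., 0.], fill_value=0)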
In [159]: s
Out[159]:
0 1.0
1 0.0
2 2.0
3 0.0
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
In [160]: s.astype(np.int64)
Out[160]:
0 1
1 0
2 2
3 0
dtype: int64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
astype fails if the data contains values which cannot be converted to the specified dtype. Note that this limitation
also applies to fill_value, whose default is np.nan.
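The failing call is not shown above; it was presumably along these lines:
pd.SparseSeries([1., np.nan, 2., np.nan], fill_value=np.nan).astype(np.int64)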
Out[7]:
ValueError: unable to coerce current fill_value nan to int64 dtype
• Subclassed SparseDataFrame and SparseSeries now preserve class types when slicing or transposing.
(GH13787)
• SparseArray with bool dtype now supports logical (bool) operators (GH14000)
• Bug in SparseSeries with MultiIndex [] indexing may raise IndexError (GH13144)
• Bug in SparseSeries with MultiIndex [] indexing result may have normal Index (GH13144)
• Bug in SparseDataFrame in which axis=None did not default to axis=0 (GH13048)
• Bug in SparseSeries and SparseDataFrame creation with object dtype may raise TypeError
(GH11633)
Note: This change only affects 64 bit python running on Windows, and only affects relatively advanced indexing
operations
Methods such as Index.get_indexer that return an indexer array, coerce that array to a “platform int”, so that
it can be directly used in 3rd party library operations like numpy.take. Previously, a platform int was defined as
np.int_ which corresponds to a C integer, but the correct type, and what is being used now, is np.intp, which
corresponds to the C integer size that can hold a pointer (GH3033, GH13972).
These types are the same on many platform, but for 64 bit python on Windows, np.int_ is 32 bits, and np.intp
is 64 bits. Changing this behavior improves performance for many operations on that platform.
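A small illustration of the kind of indexer array involved (not from the original document; the example data is assumed):
import numpy as np
import pandas as pd

idx = pd.Index(['a', 'b', 'c'])
indexer = idx.get_indexer(['b', 'c', 'x'])   # array([ 1,  2, -1]); -1 marks a missing label
# the indexer is now of np.intp dtype, so it can be passed directly to numpy.take
np.take(np.array([10, 20, 30]), indexer[indexer >= 0])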
• Timestamp.to_pydatetime will issue a UserWarning when warn=True, and the instance has a non-
zero number of nanoseconds; previously this would print a message to stdout (GH14101).
• Series.unique() with datetime and timezone now returns an array of Timestamp with timezone
(GH13565).
• Panel.to_sparse() will raise a NotImplementedError exception when called (GH13778).
• Index.reshape() will raise a NotImplementedError exception when called (GH12882).
• .filter() enforces mutual exclusion of the keyword arguments (GH12399).
• eval’s upcasting rules for float32 types have been updated to be more consistent with NumPy’s rules. New
behavior will not upcast to float64 if you multiply a pandas float32 object by a scalar float64 (GH12388).
• An UnsupportedFunctionCall error is now raised if NumPy ufuncs like np.mean are called on groupby
or resample objects (GH12811).
• __setitem__ will no longer apply a callable rhs as a function instead of storing it. Call where directly to
get the previous behavior (GH13299).
• Calls to .sample() will respect the random seed set via numpy.random.seed(n) (GH13161)
• Styler.apply is now more strict about the outputs your function must return. For axis=0 or axis=1, the
output shape must be identical. For axis=None, the output must be a DataFrame with identical columns and
index labels (GH13222).
• Float64Index.astype(int) will now raise ValueError if Float64Index contains NaN values
(GH13149)
• TimedeltaIndex.astype(int) and DatetimeIndex.astype(int) will now return
Int64Index instead of np.array (GH13209)
• Passing Period with multiple frequencies to normal Index now returns Index with object dtype
(GH13664)
• PeriodIndex.fillna with Period has different freq now coerces to object dtype (GH13664)
• Faceted boxplots from DataFrame.boxplot(by=col) now return a Series when return_type is
not None. Previously these returned an OrderedDict. Note that when return_type=None, the default,
these still return a 2-D NumPy array (GH12216, GH7096).
• pd.read_hdf will now raise a ValueError instead of KeyError, if a mode other than r, r+ and a is
supplied. (GH13623)
• pd.read_csv(), pd.read_table(), and pd.read_hdf() raise the builtin FileNotFoundError
exception for Python 3.x when called on a nonexistent file; this is back-ported as IOError in Python 2.x
(GH14086)
• More informative exceptions are passed through the csv parser. The exception type would now be the original
exception type instead of CParserError (GH13652).
• pd.read_csv() in the C engine will now issue a ParserWarning or raise a ValueError when sep
encoded is more than one character long (GH14065)
• DataFrame.values will now return float64 with a DataFrame of mixed int64 and uint64 dtypes,
conforming to np.find_common_type (GH10364, GH13917)
• .groupby.groups will now return a dictionary of Index objects, rather than a dictionary of np.ndarray
or lists (GH14293)
1.14.3 Deprecations
• Series.reshape and Categorical.reshape have been deprecated and will be removed in a subse-
quent release (GH12882, GH12882)
• PeriodIndex.to_datetime has been deprecated in favor of PeriodIndex.to_timestamp
(GH8254)
• Timestamp.to_datetime has been deprecated in favor of Timestamp.to_pydatetime (GH8254)
• Index.to_datetime and DatetimeIndex.to_datetime have been deprecated in favor of pd.
to_datetime (GH8254)
• pandas.core.datetools module has been deprecated and will be removed in a subsequent release
(GH14094)
• SparseList has been deprecated and will be removed in a future version (GH13784)
• DataFrame.to_html() and DataFrame.to_latex() have dropped the colSpace parameter in fa-
vor of col_space (GH13857)
• DataFrame.to_sql() has deprecated the flavor parameter, as it is superfluous when SQLAlchemy is
not installed (GH13611)
• Deprecated read_csv keywords:
– compact_ints and use_unsigned have been deprecated and will be removed in a future version
(GH13320)
– buffer_lines has been deprecated and will be removed in a future version (GH13360)
– as_recarray has been deprecated and will be removed in a future version (GH13373)
– skip_footer has been deprecated in favor of skipfooter and will be removed in a future version
(GH13349)
• top-level pd.ordered_merge() has been renamed to pd.merge_ordered() and the original name will
be removed in a future version (GH13358)
• Timestamp.offset property (and named arg in the constructor), has been deprecated in favor of freq
(GH12160)
• pd.tseries.util.pivot_annual is deprecated. Use pivot_table as alternative, an example is here
(GH736)
• pd.tseries.util.isleapyear has been deprecated and will be removed in a subsequent release.
Datetime-likes now have a .is_leap_year property (GH13727)
• Panel4D and PanelND constructors are deprecated and will be removed in a future version. The recom-
mended way to represent these types of n-dimensional data are with the xarray package. Pandas provides a
to_xarray() method to automate this conversion (GH13564).
• pandas.tseries.frequencies.get_standard_freq is deprecated. Use pandas.tseries.
frequencies.to_offset(freq).rule_code instead (GH13874)
• pandas.tseries.frequencies.to_offset’s freqstr keyword is deprecated in favor of freq
(GH13874)
• Categorical.from_array has been deprecated and will be removed in a future version (GH13854)
• The pd.sandbox module has been removed in favor of the external library pandas-qt (GH13670)
• The pandas.io.data and pandas.io.wb modules are removed in favor of the pandas-datareader package
(GH13724).
• The pandas.tools.rplot module has been removed in favor of the seaborn package (GH13855)
• DataFrame.to_csv() has dropped the engine parameter, as was deprecated in 0.17.1 (GH11274,
GH13419)
• DataFrame.to_dict() has dropped the outtype parameter in favor of orient (GH13627, GH8486)
• pd.Categorical has dropped setting of the ordered attribute directly in favor of the set_ordered
method (GH13671)
• pd.Categorical has dropped the levels attribute in favor of categories (GH8376)
• DataFrame.to_sql() has dropped the mysql option for the flavor parameter (GH13611)
• Panel.shift() has dropped the lags parameter in favor of periods (GH14041)
• pd.Index has dropped the diff method in favor of difference (GH13669)
• pd.DataFrame has dropped the to_wide method in favor of to_panel (GH14039)
• Series.to_csv has dropped the nanRep parameter in favor of na_rep (GH13804)
• Series.xs, DataFrame.xs, Panel.xs, Panel.major_xs, and Panel.minor_xs have dropped
the copy parameter (GH13781)
• str.split has dropped the return_type parameter in favor of expand (GH13701)
• Removal of the legacy time rules (offset aliases), deprecated since 0.17.0 (these had been aliases since 0.8.0)
(GH13590, GH13868). Legacy time rules now raise a ValueError. For the list of currently supported off-
sets, see here.
• The default value for the return_type parameter for DataFrame.plot.box and DataFrame.
boxplot changed from None to "axes". These methods will now return a matplotlib axes by default instead
of a dictionary of artists. See here (GH6581).
• The tquery and uquery functions in the pandas.io.sql module are removed (GH5950).
• Bug in groupby().shift(), which could cause a segfault or corruption in rare circumstances when group-
ing by columns with missing values (GH13813)
• Bug in groupby().cumsum() calculating cumprod when axis=1. (GH13994)
• Bug in pd.to_timedelta() in which the errors parameter was not being respected (GH13613)
• Bug in io.json.json_normalize(), where non-ascii keys raised an exception (GH13213)
• Bug when passing a not-default-indexed Series as xerr or yerr in .plot() (GH11858)
• Bug in area plot draws legend incorrectly if subplot is enabled or legend is moved after plot (matplotlib 1.5.0 is
required to draw area plot legend properly) (GH9161, GH13544)
• Bug in DataFrame assignment with an object-dtyped Index where the resultant column is mutable to the
original object. (GH13522)
• Bug in matplotlib AutoDataFormatter; this restores the second scaled formatting and re-adds micro-second
scaled formatting (GH13131)
• Bug in selection from a HDFStore with a fixed format and start and/or stop specified will now return the
selected range (GH8287)
• Bug in Categorical.from_codes() where an unhelpful error was raised when an invalid ordered
parameter was passed in (GH14058)
• Bug in Series construction from a tuple of integers on windows not returning default dtype (int64) (GH13646)
• Bug in TimedeltaIndex addition with a Datetime-like object where addition overflow was not being caught
(GH14068)
• Bug in .groupby(..).resample(..) when the same object is called multiple times (GH13174)
• Bug in .to_records() when index name is a unicode string (GH13172)
• Bug in calling .memory_usage() on object which doesn’t implement (GH12924)
• Regression in Series.quantile with nans (also shows up in .median() and .describe() ); further-
more now names the Series with the quantile (GH13098, GH13146)
• Bug in SeriesGroupBy.transform with datetime values and missing groups (GH13191)
• Bug where empty Series were incorrectly coerced in datetime-like numeric operations (GH13844)
• Bug in Categorical constructor when passed a Categorical containing datetimes with timezones
(GH14190)
• Bug in Series.str.extractall() with str index raises ValueError (GH13156)
• Bug in Series.str.extractall() with single group and quantifier (GH13382)
• Bug in DatetimeIndex and Period subtraction raises ValueError or AttributeError rather than
TypeError (GH13078)
• Bug in Index and Series created with NaN and NaT mixed data may not have datetime64 dtype
(GH13324)
• Bug in cartesian_product and MultiIndex.from_product which may raise with empty input ar-
rays (GH12258)
• Bug in pd.read_csv() which may cause a segfault or corruption when iterating in large chunks over a
stream/file under rare circumstances (GH13703)
• Bug in pd.read_csv() which caused errors to be raised when a dictionary containing scalars is passed in
for na_values (GH12224)
• Bug in pd.read_csv() which caused BOM files to be incorrectly parsed by not ignoring the BOM (GH4793)
• Bug in pd.read_csv() with engine='python' which raised errors when a numpy array was passed in
for usecols (GH12546)
• Bug in pd.read_csv() where the index columns were being incorrectly parsed when parsed as dates with a
thousands parameter (GH14066)
• Bug in pd.read_csv() with engine='python' in which NaN values weren’t being detected after data
was converted to numeric values (GH13314)
• Bug in pd.read_csv() in which the nrows argument was not properly validated for both engines
(GH10476)
• Bug in pd.read_csv() with engine='python' in which infinities of mixed-case forms were not being
interpreted properly (GH13274)
• Bug in pd.read_csv() with engine='python' in which trailing NaN values were not being parsed
(GH13320)
• Bug in pd.read_csv() with engine='python' when reading from a tempfile.TemporaryFile
on Windows with Python 3 (GH13398)
• Bug in pd.read_csv() that prevents usecols kwarg from accepting single-byte unicode strings
(GH13219)
• Bug in pd.read_csv() that prevents usecols from being an empty set (GH13402)
• Bug in pd.read_csv() in the C engine where the NULL character was not being parsed as NULL
(GH14012)
• Bug in pd.read_csv() with engine='c' in which NULL quotechar was not accepted even though
quoting was specified as None (GH13411)
• Bug in pd.read_csv() with engine='c' in which fields were not properly cast to float when quoting was
specified as non-numeric (GH13411)
• Bug in pd.read_csv() in Python 2.x with non-UTF8 encoded, multi-character separated data (GH3404)
• Bug in pd.read_csv(), where aliases for utf-xx (e.g. UTF-xx, UTF_xx, utf_xx) raised UnicodeDecodeError
(GH13549)
• Bug in pd.read_csv, pd.read_table, pd.read_fwf, pd.read_stata and pd.read_sas where
files were opened by parsers but not closed if both chunksize and iterator were None. (GH13940)
• Bug in StataReader, StataWriter, XportReader and SAS7BDATReader where a file was not prop-
erly closed when an error was raised. (GH13940)
• Bug in pd.pivot_table() where margins_name is ignored when aggfunc is a list (GH13354)
• Bug in pd.Series.str.zfill, center, ljust, rjust, and pad when passing non-integers, did not
raise TypeError (GH13598)
• Bug in checking for any null objects in a TimedeltaIndex, which always returned True (GH13603)
• Bug in Series arithmetic raises TypeError if it contains datetime-like as object dtype (GH13043)
• Bug in DatetimeIndex with nanosecond frequency does not include timestamp specified with end
(GH13672)
• Bug in Series when setting a slice with a np.timedelta64 (GH14155)
• Bug in Index raises OutOfBoundsDatetime if datetime exceeds datetime64[ns] bounds, rather
than coercing to object dtype (GH13663)
• Bug in Index may ignore specified datetime64 or timedelta64 passed as dtype (GH13981)
• Bug where RangeIndex could be created with no arguments rather than raising TypeError (GH13793)
• Bug in .value_counts() raises OutOfBoundsDatetime if data exceeds datetime64[ns] bounds
(GH13663)
• Bug in DatetimeIndex may raise OutOfBoundsDatetime if input np.datetime64 has other unit
than ns (GH9114)
• Bug in Series creation with np.datetime64 which has other unit than ns as object dtype results in
incorrect values (GH13876)
• Bug in resample with timedelta data where data was casted to float (GH13119).
• Bug in pd.isnull() pd.notnull() raise TypeError if input datetime-like has other unit than ns
(GH13389)
• Bug in pd.merge() may raise TypeError if input datetime-like has other unit than ns (GH13389)
• Bug in HDFStore/read_hdf() discarded DatetimeIndex.name if tz was set (GH13884)
• Bug in Categorical.remove_unused_categories() changes .codes dtype to platform int
(GH13261)
• Bug in groupby with as_index=False returns all NaN’s when grouping on multiple columns including a
categorical one (GH13204)
• Bug in df.groupby(...)[...] where getitem with Int64Index raised an error (GH13731)
• Bug in the CSS classes assigned to DataFrame.style for index names. Previously they were assigned
"col_heading level<n> col<c>" where n was the number of levels + 1. Now they are assigned
"index_name level<n>", where n is the correct level for that MultiIndex.
• Bug where pd.read_gbq() could throw ImportError: No module named discovery as a re-
sult of a naming conflict with another python package called apiclient (GH13454)
• Bug in Index.union returns an incorrect result with a named empty index (GH13432)
• Bugs in Index.difference and DataFrame.join raise in Python3 when using mixed-integer indexes
(GH13432, GH12814)
• Bug in subtract tz-aware datetime.datetime from tz-aware datetime64 series (GH14088)
• Bug in .to_excel() when DataFrame contains a MultiIndex which contains a label with a NaN value
(GH13511)
• Bug in invalid frequency offset string like “D1”, “-2-3H” may not raise ValueError (GH13930)
• Bug in concat and groupby for hierarchical frames with RangeIndex levels (GH13542).
• Bug in Series.str.contains() for Series containing only NaN values of object dtype (GH14171)
• Bug in agg() function on groupby dataframe changes dtype of datetime64[ns] column to float64
(GH12821)
• Bug in using NumPy ufunc with PeriodIndex to add or subtract integer raise
IncompatibleFrequency. Note that using standard operator like + or - is recommended, because
standard operators use more efficient path (GH13980)
• Bug in operations on NaT returning float instead of datetime64[ns] (GH12941)
• Bug in Series flexible arithmetic methods (like .add()) raises ValueError when axis=None
(GH13894)
• Bug in DataFrame.to_csv() with MultiIndex columns in which a stray empty line was added
(GH6618)
• Bug in DatetimeIndex, TimedeltaIndex and PeriodIndex.equals() may return True when
input isn’t Index but contains the same values (GH13107)
• Bug in assignment against datetime with timezone may not work if it contains datetime near DST boundary
(GH14146)
• Bug in pd.eval() and HDFStore query truncating long float literals with python 2 (GH14241)
• Bug in Index raises KeyError displaying incorrect column when column is not in the df and columns con-
tains duplicate values (GH13822)
• Bug in Period and PeriodIndex creating wrong dates when frequency has combined offset aliases
(GH13874)
• Bug in .to_string() when called with an integer line_width and index=False raises an
UnboundLocalError exception because idx is referenced before assignment.
• Bug in eval() where the resolvers argument would not accept a list (GH14095)
• Bugs in stack, get_dummies, make_axis_dummies which don’t preserve categorical dtypes in
(multi)indexes (GH13854)
• PeriodIndex can now accept list and array which contains pd.NaT (GH13430)
• Bug in df.groupby where .median() returns arbitrary values if grouped dataframe contains empty bins
(GH13629)
• Bug in Index.copy() where name parameter was ignored (GH14302)
This is a minor bug-fix release from 0.18.0 and includes a large number of bug fixes along with several new features,
enhancements, and performance improvements. We recommend that all users upgrade to this version.
Highlights include:
• .groupby(...) has been enhanced to provide convenient syntax when working with .rolling(..),
.expanding(..) and .resample(..) per group, see here
• pd.to_datetime() has gained the ability to assemble dates from a DataFrame, see here
• Method chaining improvements, see here.
• Custom business hour offset, see here.
• Many bug fixes in the handling of sparse, see here
• Expanded the Tutorials section with a feature on modern pandas, courtesy of @TomAugspurger. (GH13045).
• New features
– Custom Business Hour
– .groupby(..) syntax with window and resample operations
– Method chaining improvements
In [6]: dt + bhour_us * 2
Out[6]: Timestamp('2014-01-20 09:00:00')
.groupby(...) has been enhanced to provide convenient syntax when working with .rolling(..), .
expanding(..) and .resample(..) per group, see (GH12486, GH12738).
You can now use .rolling(..) and .expanding(..) as methods on groupbys. These return another deferred
object (similar to what .rolling() and .expanding() do on ungrouped pandas objects). You can then operate
on these RollingGroupby objects in a similar manner.
Previously you would have to do this to get a rolling window mean per-group:
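The construction of df is not shown above; a definition matching the display below (assumed) is:
df = pd.DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
                   'B': np.arange(40)})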
In [8]: df
Out[8]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
5 1 5
6 1 6
.. .. ..
33 3 33
34 3 34
35 3 35
36 3 36
37 3 37
38 3 38
39 3 39
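The "previous" per-group idiom referred to above is not reproduced; it was presumably the explicit apply form, which gives the same result as the new syntax shown next:
df.groupby('A').apply(lambda x: x.rolling(4).B.mean())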
In [10]: df.groupby('A').rolling(4).B.mean()
Out[10]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
5 3.5
6 4.5
...
3 33 NaN
34 NaN
35 33.5
36 34.5
37 35.5
38 36.5
39 37.5
Name: B, Length: 40, dtype: float64
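The df used in the resample example below is a different frame whose construction is not shown above; a definition consistent with the display (assumed) is:
df = pd.DataFrame({'date': pd.date_range(start='2016-01-01', periods=4, freq='W'),
                   'group': [1, 1, 2, 2],
                   'val': [5, 6, 7, 8]}).set_index('date')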
In [12]: df
Out[12]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [14]: df.groupby('group').resample('1D').ffill()
Out[14]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
2016-01-08 1 5
2016-01-09 1 5
... ... ...
2 2016-01-18 2 7
2016-01-19 2 7
2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
The following methods / indexers now accept a callable. It is intended to make these more useful in method chains,
see the documentation. (GH11485, GH12533)
• .where() and .mask()
• .loc[], iloc[] and .ix[]
• [] indexing
.where() and .mask() can accept a callable for the condition and other arguments.
.loc[], .iloc[] and .ix[] can accept a callable, and a tuple of callables as a slicer. The callable can return a valid boolean indexer or
anything which is valid for these indexers' input.
[] indexing
Finally, you can use a callable in [] indexing of Series, DataFrame and Panel. The callable must return a valid input
for [] indexing depending on its class and index type.
Using these methods / indexers, you can chain data selection operations without using a temporary variable.
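A minimal sketch of the callable-indexing pattern described above (the frame and column names are illustrative, not from the original):
df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [10, 20, 30, 40]})
# keep rows where A > 2, then select column B, without a temporary variable
df[lambda d: d.A > 2].loc[:, lambda d: ['B']]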
(Output of the original method-chaining example, a table of 2007 batting statistics per team, omitted.)
Partial string indexing now matches on DatetimeIndex when part of a MultiIndex (GH10331)
dft2 = pd.DataFrame(np.random.randn(20, 1),
                    columns=['A'],
                    index=pd.MultiIndex.from_product([pd.date_range('20130101',
                                                                    periods=10,
                                                                    freq='12H'),
                                                      ['a', 'b']]))
In [23]: dft2
Out[23]:
A
2013-01-01 00:00:00 a 0.156998
b -0.571455
2013-01-01 12:00:00 a 1.057633
b -0.791489
2013-01-02 00:00:00 a -0.524627
b 0.071878
2013-01-02 12:00:00 a 1.910759
... ...
2013-01-04 00:00:00 b 1.015405
2013-01-04 12:00:00 a 0.749185
b -0.675521
2013-01-05 00:00:00 a 0.440266
b 0.688972
2013-01-05 12:00:00 a -0.276646
b 1.924533
In [24]: dft2.loc['2013-01-05']
Out[24]:
A
2013-01-05 00:00:00 a 0.440266
b 0.688972
2013-01-05 12:00:00 a -0.276646
b 1.924533
On other levels
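The reshaping step is not shown above; presumably the index levels were swapped first, along the lines of:
idx = pd.IndexSlice
dft2 = dft2.swaplevel(0, 1).sort_index()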
In [27]: dft2
Out[27]:
A
a 2013-01-01 00:00:00 0.156998
2013-01-01 12:00:00 1.057633
2013-01-02 00:00:00 -0.524627
2013-01-02 12:00:00 1.910759
2013-01-03 00:00:00 0.513082
2013-01-03 12:00:00 1.043945
2013-01-04 00:00:00 1.459927
... ...
b 2013-01-02 12:00:00 0.787965
2013-01-03 00:00:00 -0.546416
2013-01-03 12:00:00 2.107785
2013-01-04 00:00:00 1.015405
2013-01-04 12:00:00 -0.675521
2013-01-05 00:00:00 0.688972
2013-01-05 12:00:00 1.924533
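The selection that produced the frame below is not shown; it is presumably a partial-string selection on the datetime level, e.g.:
dft2.loc[idx[:, '2013-01-05'], :]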
A
a 2013-01-05 00:00:00 0.440266
2013-01-05 12:00:00 -0.276646
b 2013-01-05 00:00:00 0.688972
2013-01-05 12:00:00 1.924533
pd.to_datetime() has gained the ability to assemble datetimes from a passed in DataFrame or a dict.
(GH8158).
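The construction of df is not shown above; a definition consistent with the display below (assumed) is:
df = pd.DataFrame({'year': [2015, 2016],
                   'month': [2, 3],
                   'day': [4, 5],
                   'hour': [2, 3]})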
In [30]: df
Out[30]:
year month day hour
0 2015 2 4 2
1 2016 3 5 3
In [31]: pd.to_datetime(df)
Out[31]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
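For example (a sketch; the original example is not reproduced above):
pd.to_datetime(df[['year', 'month', 'day']])
# 0   2015-02-04
# 1   2016-03-05
# dtype: datetime64[ns]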
• Index now supports .str.get_dummies() which returns MultiIndex, see Creating Indicator Vari-
ables (GH10008, GH10103)
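The idx used below was defined earlier in the original; a definition consistent with the result (assumed) is:
idx = pd.Index(['a|b', 'a|c', 'b|c'])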
In [37]: idx.str.get_dummies('|')
Out[37]:
MultiIndex(levels=[[0, 1], [0, 1], [0, 1]],
labels=[[1, 1, 0], [1, 0, 1], [0, 1, 1]],
names=['a', 'b', 'c'])
• pd.crosstab() has gained a normalize argument for normalizing frequency tables (GH12569). Exam-
ples in the updated docs here.
• .resample(..).interpolate() is now supported (GH12925)
• .isin() now accepts passed sets (GH12988)
These changes conform sparse handling to return the correct types and work to make a smoother experience with
indexing.
SparseArray.take now returns a scalar for scalar input, SparseArray for others. Furthermore, it handles a
negative indexer with the same rule as Index (GH10560, GH12796)
In [39]: s.take(0)
Out[39]: nan
The index in .groupby(..).nth() output is now more consistent when the as_index argument is passed
(GH11039):
In [42]: df
Out[42]:
A B
0 a 1
1 b 2
2 a 3
Furthermore, previously, a .groupby would always sort, regardless if sort=False was passed with .nth().
In [45]: np.random.seed(1234)
Compatibility between pandas array-like methods (e.g. sum and take) and their numpy counterparts has been greatly
increased by augmenting the signatures of the pandas methods so as to accept arguments that can be passed in from
numpy, even if they are not necessarily used in the pandas implementation (GH12644, GH12638, GH12687)
• .searchsorted() for Index and TimedeltaIndex now accept a sorter argument to maintain com-
patibility with numpy’s searchsorted function (GH12238)
• Bug in numpy compatibility of np.round() on a Series (GH12600)
An example of this signature augmentation is illustrated below:
In [51]: sp
Out[51]:
0
0 1
1 2
2 3
Previous behaviour:
New behaviour:
Using apply on resampling groupby operations (using a pd.TimeGrouper) now has the same output types as
similar apply calls on other groupby operations. (GH11742).
In [54]: df
Out[54]:
date value
Previous behavior:
Out[1]:
...
TypeError: cannot concatenate a non-NDFrame object
# Output is a Series
In [2]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x[['value']].
˓→sum())
Out[2]:
date
2000-10-31 value 10
2000-11-30 value 13
dtype: int64
New Behavior:
# Output is a Series
In [55]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x.value.
˓→sum())
Out[55]:
date
2000-10-31 10
2000-11-30 13
Freq: M, dtype: int64
# Output is a DataFrame
In [56]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x[['value
˓→']].sum())
Out[56]:
value
date
2000-10-31 10
2000-11-30 13
In order to standardize the read_csv API for both the c and python engines, both will now raise an
EmptyDataError, a subclass of ValueError, in response to empty columns or header (GH12493, GH12506)
Previous behaviour:
New behaviour:
In addition to this error change, several others have been made as well:
• CParserError now sub-classes ValueError instead of just a Exception (GH12551)
• A CParserError is now raised instead of a generic Exception in read_csv when the c engine cannot
parse a column (GH12506)
• A ValueError is now raised instead of a generic Exception in read_csv when the c engine encounters
a NaN value in an integer column (GH12506)
• A ValueError is now raised instead of a generic Exception in read_csv when true_values is
specified, and the c engine encounters an element in a column containing unencodable bytes (GH12506)
• pandas.parser.OverflowError exception has been removed and has been replaced with Python’s built-
in OverflowError exception (GH12506)
• pd.read_csv() no longer allows a combination of strings and integers for the usecols parameter
(GH12678)
Bugs in pd.to_datetime() when passing a unit with convertible entries and errors='coerce' or non-
convertible with errors='ignore'. Furthermore, an OutOfBoundsDateime exception will be raised when an
out-of-range value is encountered for that unit when errors='raise'. (GH11758, GH13052, GH13059)
Previous behaviour:
New behaviour:
• .swaplevel() for Series, DataFrame, Panel, and MultiIndex now features defaults for its first
two parameters i and j that swap the two innermost levels of the index. (GH12934)
• .searchsorted() for Index and TimedeltaIndex now accept a sorter argument to maintain com-
patibility with numpy’s searchsorted function (GH12238)
• Period and PeriodIndex now raises IncompatibleFrequency error which inherits ValueError
rather than raw ValueError (GH12615)
• Series.apply for category dtype now applies the passed function to each of the .categories (and not
the .codes), and returns a category dtype if possible (GH12473)
• read_csv will now raise a TypeError if parse_dates is neither a boolean, list, or dictionary (matches
the doc-string) (GH5636)
• The default for .query()/.eval() is now engine=None, which will use numexpr if it’s installed;
otherwise it will fallback to the python engine. This mimics the pre-0.18.1 behavior if numexpr is installed
(and which, previously, if numexpr was not installed, .query()/.eval() would raise). (GH12749)
• pd.show_versions() now includes pandas_datareader version (GH12740)
• Provide a proper __name__ and __qualname__ attributes for generic functions (GH12021)
• pd.concat(ignore_index=True) now uses RangeIndex as default (GH12695)
• pd.merge() and DataFrame.join() will show a UserWarning when merging/joining a single- with
a multi-leveled dataframe (GH9455, GH12219)
• Compat with scipy > 0.17 for deprecated piecewise_polynomial interpolation method; support for the
replacement from_derivatives method (GH12887)
1.15.3.7 Deprecations
• usecols parameter in pd.read_csv is now respected even when the lines of a CSV file are not even
(GH12203)
• Bug in groupby.transform(..) when axis=1 is specified with a non-monotonic ordered index
(GH12713)
• Bug in Period and PeriodIndex creation raises KeyError if freq="Minute" is specified. Note that
“Minute” freq is deprecated in v0.17.0, and recommended to use freq="T" instead (GH11854)
• Bug in .resample(...).count() with a PeriodIndex always raising a TypeError (GH12774)
• Bug in .resample(...) with a PeriodIndex casting to a DatetimeIndex when empty (GH12868)
• Bug in .resample(...) with a PeriodIndex when resampling to an existing frequency (GH12770)
• Bug in printing data which contains Period with different freq raises ValueError (GH12615)
• Bug in Series construction with Categorical and dtype='category' is specified (GH12574)
• Bugs in concatenation with a coercible dtype that were too aggressive, resulting in different dtypes in output
formatting when an object was longer than display.max_rows (GH12411, GH12045, GH11594, GH10571,
GH12211)
• Bug in float_format option with option not being validated as a callable. (GH12706)
• Bug in GroupBy.filter when dropna=False and no groups fulfilled the criteria (GH12768)
• Bug in __name__ of .cum* functions (GH12021)
• Bug in .astype() of a Float64Inde/Int64Index to an Int64Index (GH12881)
• Bug in roundtripping an integer based index in .to_json()/.read_json() when orient='index'
(the default) (GH12866)
• Bug in plotting Categorical dtypes cause error when attempting stacked bar plot (GH13019)
• Compat with >= numpy 1.11 for NaT comparisons (GH12969)
• Bug in .drop() with a non-unique MultiIndex. (GH12701)
• Bug in .concat of datetime tz-aware and naive DataFrames (GH12467)
• Bug in correctly raising a ValueError in .resample(..).fillna(..) when passing a non-string
(GH12952)
• Bug fixes in various encoding and header processing issues in pd.read_sas() (GH12659, GH12654,
GH12647, GH12809)
• Bug in pd.crosstab() where would silently ignore aggfunc if values=None (GH12569).
• Potential segfault in DataFrame.to_json when serialising datetime.time (GH11473).
• Potential segfault in DataFrame.to_json when attempting to serialise 0d array (GH11299).
• Segfault in to_json when attempting to serialise a DataFrame or Series with non-ndarray values; now
supports serialization of category, sparse, and datetime64[ns, tz] dtypes (GH10778).
• Bug in DataFrame.to_json with unsupported dtype not passed to default handler (GH12554).
• Bug in .align not returning the sub-class (GH12983)
• Bug in aligning a Series with a DataFrame (GH13037)
• Bug in ABCPanel in which Panel4D was not being considered as a valid instance of this generic type
(GH12810)
This is a major release from 0.17.1 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Warning: pandas >= 0.18.0 no longer supports compatibility with Python version 2.6 and 3.3 (GH7718,
GH11273)
Warning: numexpr version 2.4.4 will now show a warning and not be used as a computation back-end for
pandas because of some buggy behavior. This does not affect other versions (>= 2.1 and >= 2.4.6). (GH12489)
Highlights include:
• Moving and expanding window functions are now methods on Series and DataFrame, similar to .groupby,
see here.
• Adding support for a RangeIndex as a specialized form of the Int64Index for memory savings, see here.
• API breaking change to the .resample method to make it more .groupby like, see here.
• Removal of support for positional indexing with floats, which was deprecated since 0.14.0. This will now raise
a TypeError, see here.
• The .to_xarray() function has been added for compatibility with the xarray package, see here.
• The read_sas function has been enhanced to read sas7bdat files, see here.
• Addition of the .str.extractall() method, and API changes to the .str.extract() method and .str.cat() method.
• pd.test() top-level nose test runner is available (GH4327).
Check the API Changes and deprecations before updating.
• New features
– Window functions are now methods
– Changes to rename
– Range Index
– Changes to str.extract
– Addition of str.extractall
– Changes to str.cat
– Datetimelike rounding
– Formatting of Integers in FloatIndex
– Changes to dtype assignment behaviors
– to_xarray
– Latex Representation
– pd.read_sas() changes
– Other enhancements
• Backwards incompatible API changes
– NaT and Timedelta operations
– Changes to msgpack
– Signature change for .rank
– Bug in QuarterBegin with n=0
– Resample API
* Downsampling
* Upsampling
* Previous API will work but with deprecations
– Changes to eval
– Other API Changes
– Deprecations
Window functions have been refactored to be methods on Series/DataFrame objects, rather than top-level func-
tions, which are now deprecated. This allows these window-type functions to have a similar API to that of .groupby.
See the full documentation here (GH11603, GH12373)
In [1]: np.random.seed(1234)
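The construction of df is not shown above; given the seed, the displayed values presumably correspond to:
df = pd.DataFrame({'A': range(10), 'B': np.random.randn(10)})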
In [3]: df
Out[3]:
A B
0 0 0.471435
1 1 -1.190976
2 2 1.432707
3 3 -0.312652
4 4 -0.720589
5 5 0.887163
6 6 0.859588
7 7 -0.636524
8 8 0.015696
9 9 -2.242685
Previous Behavior:
In [8]: pd.rolling_mean(df,window=3)
FutureWarning: pd.rolling_mean is deprecated for DataFrame and will be
˓→removed in a future version, replace with
DataFrame.rolling(window=3,center=False).mean()
Out[8]:
A B
0 NaN NaN
1 NaN NaN
2 1 0.237722
3 2 -0.023640
4 3 0.133155
5 4 -0.048693
6 5 0.342054
7 6 0.370076
8 7 0.079587
9 8 -0.954504
New Behavior:
In [4]: r = df.rolling(window=3)
Series.rename and NDFrame.rename_axis can now take a scalar or list-like argument for altering the Series
or axis name, in addition to their old behaviors of altering labels. (GH9494, GH11965)
In [9]: s = pd.Series(np.random.randn(5))
In [10]: s.rename('newname')
Out[10]:
0 1.150036
1 0.991946
2 0.953324
3 -2.021255
4 -0.334077
Name: newname, dtype: float64
In [12]: (df.rename_axis("indexname")
....: .rename_axis("columns_name", axis="columns"))
....:
Out[12]:
columns_name 0 1
indexname
0 0.002118 0.405453
1 0.289092 1.321158
2 -1.546906 -0.202646
3 -0.655969 0.193421
4 0.553439 1.318152
The new functionality works well in method chains. Previously these methods only accepted functions or dicts map-
ping a label to a new label. This continues to work as before for function or dict-like values.
A RangeIndex has been added to the Int64Index sub-classes to support a memory saving alternative for common
use cases. This has a similar implementation to the python range object (xrange in python 2), in that it only
stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to
Int64Index if needed.
This will now be the default constructed index for NDFrame objects, rather than an Int64Index as previously. (GH939,
GH12070, GH12071, GH12109, GH12888)
Previous Behavior:
In [3]: s = pd.Series(range(1000))
In [4]: s.index
In [6]: s.index.nbytes
Out[6]: 8000
New Behavior:
In [13]: s = pd.Series(range(1000))
In [14]: s.index
Out[14]: RangeIndex(start=0, stop=1000, step=1)
In [15]: s.index.nbytes
Out[15]: 80
The .str.extract method takes a regular expression with capture groups, finds the first match in each subject string, and
returns the contents of the capture groups (GH11386).
In v0.18.0, the expand argument was added to extract.
• expand=False: it returns a Series, Index, or DataFrame, depending on the subject and regular expres-
sion pattern (same behavior as pre-0.18.0).
• expand=True: it always returns a DataFrame, which is more consistent and less confusing from the per-
spective of a user.
Currently the default is expand=None which gives a FutureWarning and uses expand=False. To avoid this
warning, please explicitly specify expand.
Out[1]:
0 1
1 2
2 NaN
dtype: object
Calling on an Index with a regex with exactly one capture group returns an Index if expand=False.
In [18]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"])
In [19]: s.index
Out[19]: Index(['A11', 'B22', 'C33'], dtype='object')
Calling on an Index with a regex with more than one capture group raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
In summary, extract(expand=True) always returns a DataFrame with a row for every subject string, and a
column for every capture group.
The .str.extractall method was added (GH11386). Unlike extract, which returns only the first match, extractall returns all matches.
In [23]: s = pd.Series(["a1a2", "b1", "c1"], ["A", "B", "C"])
In [24]: s
Out[24]:
A a1a2
B b1
C c1
dtype: object
The method .str.cat() concatenates the members of a Series. Before, if NaN values were present in the Series,
calling .str.cat() on it would return NaN, unlike the rest of the Series.str.* API. This behavior has been
amended to ignore NaN values by default. (GH11435).
A new, friendlier ValueError is added to protect against the mistake of supplying the sep as an arg, rather than as
a kwarg. (GH11334).
In [27]: pd.Series(['a','b',np.nan,'c']).str.cat(sep=' ')
Out[27]: 'a b c'
DatetimeIndex, Timestamp, TimedeltaIndex, Timedelta have gained the .round(), .floor() and
.ceil() method for datetimelike rounding, flooring and ceiling. (GH4314, GH11963)
Naive datetimes
In [29]: dr = pd.date_range('20130101 09:12:56.1234', periods=3)
In [30]: dr
Out[30]:
DatetimeIndex(['2013-01-01 09:12:56.123400', '2013-01-02 09:12:56.123400',
'2013-01-03 09:12:56.123400'],
dtype='datetime64[ns]', freq='D')
In [31]: dr.round('s')
Out[31]:
DatetimeIndex(['2013-01-01 09:12:56', '2013-01-02 09:12:56',
               '2013-01-03 09:12:56'],
              dtype='datetime64[ns]', freq=None)
# Timestamp scalar
In [32]: dr[0]
Out[32]: Timestamp('2013-01-01 09:12:56.123400', freq='D')
In [33]: dr[0].round('10s')
Out[33]: Timestamp('2013-01-01 09:13:00')
In [34]: dr = dr.tz_localize('US/Eastern')
In [35]: dr
Out[35]:
DatetimeIndex(['2013-01-01 09:12:56.123400-05:00',
'2013-01-02 09:12:56.123400-05:00',
'2013-01-03 09:12:56.123400-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='D')
In [36]: dr.round('s')
Out[36]:
DatetimeIndex(['2013-01-01 09:12:56-05:00', '2013-01-02 09:12:56-05:00',
               '2013-01-03 09:12:56-05:00'],
              dtype='datetime64[ns, US/Eastern]', freq=None)
Timedeltas
In [38]: t
Out[38]:
TimedeltaIndex(['1 days 02:13:00.000045', '2 days 02:13:00.000045',
'3 days 02:13:00.000045'],
dtype='timedelta64[ns]', freq='D')
In [39]: t.round('10min')
Out[39]: TimedeltaIndex(['1 days 02:10:00', '2 days 02:10:00', '3 days 02:10:00'], dtype='timedelta64[ns]', freq=None)
# Timedelta scalar
In [40]: t[0]
Out[40]: Timedelta('1 days 02:13:00.000045')
In [41]: t[0].round('2h')
Out[41]: Timedelta('1 days 02:00:00')
In addition, .round(), .floor() and .ceil() will be available through the .dt accessor of Series.
In [42]: s = pd.Series(dr)
In [43]: s
Out[43]:
0 2013-01-01 09:12:56.123400-05:00
1 2013-01-02 09:12:56.123400-05:00
2 2013-01-03 09:12:56.123400-05:00
dtype: datetime64[ns, US/Eastern]
In [44]: s.dt.round('D')
Out[44]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Integers in FloatIndex, e.g. 1., are now formatted with a decimal point and a 0 digit, e.g. 1.0 (GH11713) This
change not only affects the display to the console, but also the output of IO methods like .to_csv or .to_html.
Previous Behavior:
In [3]: s
Out[3]:
0 1
1 2
2 3
dtype: int64
In [4]: s.index
Out[4]: Float64Index([0.0, 1.0, 2.0], dtype='float64')
In [5]: print(s.to_csv(path=None))
0,1
1,2
2,3
New Behavior:
In [46]: s
Out[46]:
0.0 1
1.0 2
2.0 3
dtype: int64
In [47]: s.index
Out[47]: Float64Index([0.0, 1.0, 2.0], dtype='float64')
In [48]: print(s.to_csv(path=None))
0.0,1
1.0,2
2.0,3
When a DataFrame’s slice is updated with a new slice of the same dtype, the dtype of the DataFrame will now remain
the same. (GH10503)
Previous Behavior:
In [7]: df.dtypes
Out[7]:
a int64
b uint32
dtype: object
In [8]: ix = df['a'] == 1
In [11]: df.dtypes
Out[11]:
a int64
b int64
dtype: object
New Behavior:
In [50]: df.dtypes
Out[50]:
a int64
b uint32
dtype: object
In [51]: ix = df['a'] == 1
In [53]: df.dtypes
Out[53]:
a int64
b uint32
dtype: object
When a DataFrame’s integer slice is partially updated with a new slice of floats that could potentially be downcasted
to integer without losing precision, the dtype of the slice will be set to float instead of integer.
Previous Behavior:
In [4]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3),
columns=list('abc'),
index=[[4,4,8], [8,10,12]])
In [5]: df
Out[5]:
a b c
4 8 1 2 3
10 4 5 6
8 12 7 8 9
In [8]: df
Out[8]:
a b c
4 8 1 2 0
10 4 5 1
8 12 7 8 9
New Behavior:
In [54]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3),
....: columns=list('abc'),
....: index=[[4,4,8], [8,10,12]])
....:
In [55]: df
Out[55]:
a b c
4 8 1 2 3
10 4 5 6
8 12 7 8 9
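The assignment that triggers the partial update is not shown above; presumably something like:
df.loc[4, 'c'] = np.array([0., 1.])   # float values assigned into an integer column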
In [57]: df
Out[57]:
a b c
4 8 1 2 0.0
10 4 5 1.0
8 12 7 8 9.0
1.16.1.10 to_xarray
In a future version of pandas, we will be deprecating Panel and other > 2 ndim objects. In order to provide for
continuity, all NDFrame objects have gained the .to_xarray() method in order to convert to xarray objects,
which has a pandas-like interface for > 2 ndim. (GH11972)
See the xarray full-documentation here.
In [1]: p = Panel(np.arange(2*3*4).reshape(2,3,4))
In [2]: p.to_xarray()
Out[2]:
DataFrame has gained a ._repr_latex_() method in order to allow for conversion to latex in an ipython/jupyter
notebook using nbconvert. (GH11778)
Note that this must be activated by setting the option pd.display.latex.repr=True (GH12182)
For example, if you have a jupyter notebook you plan to convert to latex using nbconvert, place the statement pd.
display.latex.repr=True in the first cell to have the contained DataFrame output also stored as latex.
The options display.latex.escape and display.latex.longtable have also been added to the config-
uration and are used automatically by the to_latex method. See the available options docs for more info.
read_sas has gained the ability to read SAS7BDAT files, including compressed files. The files can be read in
entirety, or incrementally. For full details see here. (GH4052)
• Added Google BigQuery service account authentication support, which enables authentication on remote
servers. (GH11881, GH12572). For further details see here
• HDFStore is now iterable: for k in store is equivalent to for k in store.keys() (GH12221).
• Add missing methods/fields to .dt for Period (GH8848)
• The entire codebase has been PEP-ified (GH12096)
• the leading whitespaces have been removed from the output of .to_string(index=False) method
(GH11833)
• the out parameter has been removed from the Series.round() method. (GH11763)
• DataFrame.round() leaves non-numeric columns unchanged in its return, rather than raises. (GH11885)
• DataFrame.head(0) and DataFrame.tail(0) return empty frames, rather than self. (GH11937)
• Series.head(0) and Series.tail(0) return empty series, rather than self. (GH11937)
• to_msgpack and read_msgpack encoding now defaults to 'utf-8'. (GH12170)
• the order of keyword arguments to text file parsing functions (.read_csv(), .read_table(), .
read_fwf()) changed to group related arguments. (GH11555)
• NaTType.isoformat now returns the string 'NaT' to allow the result to be passed to the constructor of
Timestamp. (GH12300)
NaT and Timedelta have expanded arithmetic operations, which are extended to Series arithmetic where applica-
ble. Operations defined for datetime64[ns] or timedelta64[ns] are now also defined for NaT (GH11564).
NaT now supports arithmetic operations with integers and floats.
In [58]: pd.NaT * 1
Out[58]: NaT
In [60]: pd.NaT / 2
Out[60]: NaT
NaT may represent either a datetime64[ns] null or a timedelta64[ns] null. Given the ambiguity, it is
treated as a timedelta64[ns], which allows more operations to succeed.
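For example (a sketch; the original example is not reproduced above):
pd.NaT + pd.NaT   # returns NaT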
# same as
In [65]: pd.Timedelta('1s') + pd.Timedelta('1s')
Out[65]: Timedelta('0 days 00:00:02')
as opposed to
In [3]: pd.Timestamp('19900315') + pd.Timestamp('19900315')
TypeError: unsupported operand type(s) for +: 'Timestamp' and 'Timestamp'
However, when wrapped in a Series whose dtype is datetime64[ns] or timedelta64[ns], the dtype
information is respected.
In [1]: pd.Series([pd.NaT], dtype='<M8[ns]') + pd.Series([pd.NaT], dtype='<M8[ns]')
TypeError: can only operate on a datetimes for subtraction,
but the operator [__add__] was passed
In [69]: ser
Out[69]:
0 1 days
1 2 days
2 3 days
dtype: timedelta64[ns]
NaT.isoformat() now returns 'NaT'. This change allows pd.Timestamp to rehydrate any timestamp-like
object from its isoformat (GH12300).
Forward incompatible changes in msgpack writing format were made over 0.17.0 and 0.18.0; older versions of
pandas cannot read files packed by newer versions (GH12129, GH10527)
Bugs in to_msgpack and read_msgpack introduced in 0.17.0 and fixed in 0.18.0, caused files packed in Python
2 unreadable by Python 3 (GH12142). The following table describes the backward and forward compat of msgpacks.
Warning:
0.18.0 is backward-compatible for reading files packed by older versions, except for files packed with 0.17 in
Python 2, in which case they can only be unpacked in Python 2.
In previous versions, the behavior of the QuarterBegin offset was inconsistent depending on the date when the n
parameter was 0. (GH11406)
The general semantics of anchored offsets for n=0 is to not move the date when it is an anchor point (e.g., a quarter
start date), and otherwise roll forward to the next anchor point.
In [73]: d = pd.Timestamp('2014-02-01')
In [74]: d
Out[74]: Timestamp('2014-02-01 00:00:00')
For the QuarterBegin offset in previous versions, the date would be rolled backwards if the date was in the same
month as the quarter start date.
In [3]: d = pd.Timestamp('2014-02-15')
This behavior has been corrected in version 0.18.0, which is consistent with other anchored offsets like MonthBegin
and YearBegin.
In [77]: d = pd.Timestamp('2014-02-15')
Like the change in the window functions API above, .resample(...) is changing to have a more groupby-like
API. (GH11732, GH12702, GH12202, GH12332, GH12334, GH12348, GH12448).
In [79]: np.random.seed(1234)
In [80]: df = pd.DataFrame(np.random.rand(10,4),
   ....:                   columns=list('ABCD'),
   ....:                   index=pd.date_range('2010-01-01 09:00:00', periods=10, freq='s'))
   ....:
In [81]: df
Out[81]:
A B C D
2010-01-01 09:00:00 0.191519 0.622109 0.437728 0.785359
2010-01-01 09:00:01 0.779976 0.272593 0.276464 0.801872
2010-01-01 09:00:02 0.958139 0.875933 0.357817 0.500995
2010-01-01 09:00:03 0.683463 0.712702 0.370251 0.561196
Previous API:
You would write a resampling operation that immediately evaluates. If a how parameter was not provided, it would
default to how='mean'.
In [6]: df.resample('2s')
Out[6]:
A B C D
2010-01-01 09:00:00 0.485748 0.447351 0.357096 0.793615
2010-01-01 09:00:02 0.820801 0.794317 0.364034 0.531096
2010-01-01 09:00:04 0.433985 0.314582 0.424104 0.625733
2010-01-01 09:00:06 0.624988 0.609738 0.633165 0.612452
2010-01-01 09:00:08 0.510470 0.534317 0.573201 0.806949
New API:
Now, you can write .resample(...) as a 2-stage operation like .groupby(...), which yields a Resampler.
In [82]: r = df.resample('2s')
In [83]: r
Out[83]: DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left, label=left, convention=start, base=0]
Downsampling
You can then use this object to perform operations. These are downsampling operations (going from a higher frequency
to a lower one).
In [84]: r.mean()
Out[84]:
A B C D
2010-01-01 09:00:00 0.485748 0.447351 0.357096 0.793615
2010-01-01 09:00:02 0.820801 0.794317 0.364034 0.531096
2010-01-01 09:00:04 0.433985 0.314582 0.424104 0.625733
2010-01-01 09:00:06 0.624988 0.609738 0.633165 0.612452
2010-01-01 09:00:08 0.510470 0.534317 0.573201 0.806949
In [85]: r.sum()
Out[85]:
A B C D
2010-01-01 09:00:00 0.971495 0.894701 0.714192 1.587231
2010-01-01 09:00:02 1.641602 1.588635 0.728068 1.062191
2010-01-01 09:00:04 0.867969 0.629165 0.848208 1.251465
2010-01-01 09:00:06 1.249976 1.219477 1.266330 1.224904
2010-01-01 09:00:08 1.020940 1.068634 1.146402 1.613897
Furthermore, resample now supports getitem operations to perform the resample on specific columns.
In [86]: r[['A','C']].mean()
Out[86]:
A C
2010-01-01 09:00:00 0.485748 0.357096
2010-01-01 09:00:02 0.820801 0.364034
2010-01-01 09:00:04 0.433985 0.424104
2010-01-01 09:00:06 0.624988 0.633165
2010-01-01 09:00:08 0.510470 0.573201
In [88]: r[['A','B']].agg(['mean','sum'])
Out[88]:
A B
mean sum mean sum
2010-01-01 09:00:00 0.485748 0.971495 0.447351 0.894701
2010-01-01 09:00:02 0.820801 1.641602 0.794317 1.588635
2010-01-01 09:00:04 0.433985 0.867969 0.314582 0.629165
2010-01-01 09:00:06 0.624988 1.249976 0.609738 1.219477
2010-01-01 09:00:08 0.510470 1.020940 0.534317 1.068634
Upsampling
Upsampling operations take you from a lower frequency to a higher frequency. These are now performed on
Resampler objects with the backfill(), ffill(), fillna() and asfreq() methods.
In [89]: s = pd.Series(np.arange(5,dtype='int64'),
....: index=date_range('2010-01-01', periods=5, freq='Q'))
....:
In [90]: s
Out[90]:
2010-03-31 0
2010-06-30 1
2010-09-30 2
2010-12-31 3
2011-03-31 4
Freq: Q-DEC, dtype: int64
Previously
New API
In [91]: s.resample('M').ffill()
Out[91]:
2010-03-31 0
2010-04-30 0
2010-05-31 0
2010-06-30 1
2010-07-31 1
2010-08-31 1
2010-09-30 2
2010-10-31 2
2010-11-30 2
2010-12-31 3
2011-01-31 3
2011-02-28 3
2011-03-31 4
Freq: M, dtype: int64
Note: In the new API, you can either downsample OR upsample. The prior implementation would allow you to pass
an aggregator function (like mean) even though you were upsampling, which could be confusing.
Warning: This new API for resample includes some internal changes for the prior-to-0.18.0 API, to work with a
deprecation warning in most cases, as the resample operation returns a deferred object. We can intercept operations
and just do what the (pre 0.18.0) API did (with a warning). Here is a typical use case:
In [4]: r = df.resample('2s')
In [6]: r*10
pandas/tseries/resample.py:80: FutureWarning: .resample() is now a deferred operation
Out[6]:
A B C D
2010-01-01 09:00:00 4.857476 4.473507 3.570960 7.936154
2010-01-01 09:00:02 8.208011 7.943173 3.640340 5.310957
2010-01-01 09:00:04 4.339846 3.145823 4.241039 6.257326
2010-01-01 09:00:06 6.249881 6.097384 6.331650 6.124518
2010-01-01 09:00:08 5.104699 5.343172 5.732009 8.069486
However, getting and assignment operations directly on a Resampler will raise a ValueError:
In [7]: r.iloc[0] = 5
ValueError: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)
There is one situation where the new API cannot perform all the operations of the original code. This code
intends to resample every 2s, take the mean AND then take the min of those results.
In [4]: df.resample('2s').min()
Out[4]:
A 0.433985
B 0.314582
C 0.357096
D 0.531096
dtype: float64
The good news is the return dimensions will differ between the new API and the old API, so this should loudly
raise an exception.
To replicate the original operation
In [93]: df.resample('2s').mean().min()
Out[93]:
A 0.433985
B 0.314582
C 0.357096
D 0.531096
dtype: float64
In prior versions, new columns assignments in an eval expression resulted in an inplace change to the DataFrame.
(GH9297, GH8664, GH10486)
In [95]: df
Out[95]:
a b
0 0.0 0
1 2.5 1
2 5.0 2
3 7.5 3
4 10.0 4
This will change in a future version of pandas, use inplace=True to avoid this warning.
In [13]: df
Out[13]:
a b c
0 0.0 0 0.0
1 2.5 1 3.5
2 5.0 2 7.0
3 7.5 3 10.5
4 10.0 4 14.0
In version 0.18.0, a new inplace keyword was added to choose whether the assignment should be done inplace or
return a copy.
In [96]: df
Out[96]:
a b c
0 0.0 0 0.0
1 2.5 1 3.5
2 5.0 2 7.0
3 7.5 3 10.5
4 10.0 4 14.0
a b c d
0 0.0 0 0.0 0.0
1 2.5 1 3.5 2.5
2 5.0 2 7.0 5.0
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
In [98]: df
Out[98]:
a b c
0 0.0 0 0.0
1 2.5 1 3.5
2 5.0 2 7.0
3 7.5 3 10.5
4 10.0 4 14.0
In [100]: df
Out[100]:
a b c d
0 0.0 0 0.0 0.0
1 2.5 1 3.5 2.5
2 5.0 2 7.0 5.0
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
Warning: For backwards compatibility, inplace defaults to True if not specified. This will change in a
future version of pandas. If your code depends on an inplace assignment you should update to explicitly set
inplace=True
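A minimal sketch of the difference, using a small hypothetical frame:

import pandas as pd

df2 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

df2.eval('c = a + b', inplace=True)            # mutates df2 and returns None
result = df2.eval('d = a * b', inplace=False)  # df2 is unchanged; a modified copy is returned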
The inplace keyword parameter was also added to the query method.
In [103]: df
Out[103]:
a b c d
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
Warning: Note that the default value for inplace in a query is False, which is consistent with prior versions.
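A minimal sketch of query with the new keyword (hypothetical frame):

import pandas as pd

df2 = pd.DataFrame({'a': [1, 5, 10]})

filtered = df2.query('a > 4')        # default inplace=False: returns a new, filtered frame
df2.query('a > 4', inplace=True)     # filters df2 in place and returns None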
eval has also been updated to allow multi-line expressions for multiple assignments. These expressions will be
evaluated one at a time in order. Only assignments are valid for multi-line expressions.
In [104]: df
Out[104]:
a b c d
3 7.5 3 10.5 7.5
4 10.0 4 14.0 10.0
In [105]: df.eval("""
.....: e = d + a
.....: f = e - 22
.....: g = f / 2.0""", inplace=True)
In [106]: df
Out[106]:
a b c d e f g
3 7.5 3 10.5 7.5 15.0 -7.0 -3.5
4 10.0 4 14.0 10.0 20.0 -2.0 -1.0
• DataFrame.between_time and Series.between_time now only parse a fixed set of time strings.
Parsing of date strings is no longer supported and raises a ValueError. (GH11818)
• .memory_usage() now includes values in the index, as does memory_usage in .info() (GH11597)
• DataFrame.to_latex() now supports non-ascii encodings (eg utf-8) in Python 2 with the parameter
encoding (GH7061)
• pandas.merge() and DataFrame.merge() will show a specific error message when trying to merge
with an object that is not of type DataFrame or a subclass (GH12081)
• DataFrame.unstack and Series.unstack now take fill_value keyword to allow direct replace-
ment of missing values when an unstack results in missing values in the resulting DataFrame. As an added
benefit, specifying fill_value will preserve the data type of the original stacked data. (GH9746)
• As part of the new API for window functions and resampling, aggregation functions have been clarified, raising
more informative error messages on invalid aggregations. (GH9052). A full set of examples are presented in
groupby.
• Statistical functions for NDFrame objects (like sum(), mean(), min()) will now raise if non-numpy-
compatible arguments are passed in for **kwargs (GH12301)
• .to_latex and .to_html gain a decimal parameter like .to_csv; the default is '.' (GH12031)
• More helpful error message when constructing a DataFrame with empty data but with indices (GH8020)
• .describe() will now properly handle bool dtype as a categorical (GH6625)
• More helpful error message with an invalid .transform with user defined input (GH10165)
• Exponentially weighted functions now allow specifying alpha directly (GH10789) and raise ValueError if
parameters violate 0 < alpha <= 1 (GH12492)
1.16.2.8 Deprecations
• The functions pd.rolling_*, pd.expanding_*, and pd.ewm* are deprecated and replaced by the cor-
responding method call. Note that the new suggested syntax includes all of the arguments (even if default)
(GH11603)
In [1]: s = pd.Series(range(3))
In [2]: pd.rolling_mean(s,window=2,min_periods=1)
FutureWarning: pd.rolling_mean is deprecated for Series and
will be removed in a future version, replace with
Series.rolling(min_periods=1,window=2,center=False).mean()
Out[2]:
0 0.0
1 0.5
2 1.5
dtype: float64
• The freq and how arguments to the .rolling, .expanding, and .ewm (new) functions are deprecated,
and will be removed in a future version. You can simply resample the input prior to creating a window function.
(GH11603).
For example, instead of s.rolling(window=5,freq='D').max() to get the max value on a rolling
5 Day window, one could use s.resample('D').mean().rolling(window=5).max(), which first
resamples the data to daily data, then provides a rolling 5 day window.
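A sketch of the suggested replacement (the Series below is a hypothetical example):

import numpy as np
import pandas as pd

s = pd.Series(np.arange(48),
              index=pd.date_range('2016-01-01', periods=48, freq='12H'))

# deprecated: s.rolling(window=5, freq='D').max()
# replacement: resample to daily frequency first, then apply the rolling window
result = s.resample('D').mean().rolling(window=5).max()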
• pd.tseries.frequencies.get_offset_name function is deprecated. Use the offset’s .freqstr property
as an alternative (GH11192)
• pandas.stats.fama_macbeth routines are deprecated and will be removed in a future version (GH6077)
• pandas.stats.ols, pandas.stats.plm and pandas.stats.var routines are deprecated and will
be removed in a future version (GH6077)
• show a FutureWarning rather than a DeprecationWarning on using long-time deprecated syntax in
HDFStore.select, where the where clause is not a string-like (GH12027)
• The pandas.options.display.mpl_style configuration has been deprecated and will be removed in
a future version of pandas. This functionality is better handled by matplotlib’s style sheets (GH11783).
In GH4892 indexing with floating point numbers on a non-Float64Index was deprecated (in version 0.14.0). In
0.18.0, this deprecation warning is removed and these will now raise a TypeError. (GH12165, GH12333)
In [109]: s = pd.Series([1, 2, 3], index=[4, 5, 6])
In [110]: s
Out[110]:
4 1
5 2
6 3
dtype: int64
In [112]: s2
Out[112]:
a 1
b 2
c 3
dtype: int64
Previous Behavior:
Out[2]: 2
Out[3]: 2
Out[4]: 2
In [6]: s2
Out[6]:
a 1
b 10
c 3
dtype: int64
New Behavior:
For iloc, getting & setting via a float scalar will always raise.
In [3]: s.iloc[2.0]
TypeError: cannot do label indexing on <class 'pandas.indexes.numeric.Int64Index'> with these indexers [2.0] of <type 'float'>
Other indexers will coerce to a like integer for both getting and setting. The FutureWarning has been dropped for
.loc, .ix and [].
In [113]: s[5.0]
Out[113]: 2
In [114]: s.loc[5.0]
Out[114]: 2
and setting
In [116]: s_copy[5.0] = 10
In [117]: s_copy
Out[117]:
4 1
5 10
6 3
dtype: int64
In [119]: s_copy.loc[5.0] = 10
In [120]: s_copy
Out[120]:
4 1
5 10
6 3
dtype: int64
Positional setting with .ix and a float indexer will ADD this value to the index, rather than previously setting the
value by position.
In [3]: s2.ix[1.0] = 10
In [4]: s2
Out[4]:
a 1
b 2
c 3
1.0 10
dtype: int64
In [121]: s.loc[5.0:6]
Out[121]:
5 2
6 3
dtype: int64
Note that for floats that are NOT coercible to ints, the label based bounds will be excluded
In [122]: s.loc[5.1:6]
Out[122]:
6 3
dtype: int64
In [124]: s[1.0]
Out[124]: 2
In [125]: s[1.0:2.5]
Out[125]:
1.0 2
2.0 3
dtype: int64
• Bug in value label reading for StataReader when reading incrementally (GH12014)
• Bug in vectorized DateOffset when n parameter is 0 (GH11370)
• Compat for numpy 1.11 w.r.t. NaT comparison changes (GH12049)
• Bug in read_csv when reading from a StringIO in threads (GH11790)
• Bug in not treating NaT as a missing value in datetimelikes when factorizing & with Categoricals
(GH12077)
• Bug in getitem when the values of a Series were tz-aware (GH12089)
• Bug in Series.str.get_dummies when one of the variables was ‘name’ (GH12180)
• Bug in pd.concat while concatenating tz-aware NaT series. (GH11693, GH11755, GH12217)
• Bug in pd.read_stata with version <= 108 files (GH12232)
• Bug in Series.resample using a frequency of Nano when the index is a DatetimeIndex and contains
non-zero nanosecond parts (GH12037)
• Bug in resampling with .nunique and a sparse index (GH12352)
• Removed some compiler warnings (GH12471)
• Work around compat issues with boto in python 3.5 (GH11915)
• Bug in NaT subtraction from Timestamp or DatetimeIndex with timezones (GH11718)
• Bug in subtraction of Series of a single tz-aware Timestamp (GH12290)
• Use compat iterators in PY2 to support .next() (GH12299)
• Bug in Timedelta.round with negative values (GH11690)
• Bug in .loc against CategoricalIndex may result in normal Index (GH11586)
• Bug in DataFrame.info when duplicated column names exist (GH11761)
• Bug in .copy of datetime tz-aware objects (GH11794)
• Bug in Series.apply and Series.map where timedelta64 was not boxed (GH11349)
• Bug in DataFrame.set_index() with tz-aware Series (GH12358)
• Bug in subclasses of DataFrame where AttributeError did not propagate (GH11808)
• Bug groupby on tz-aware data where selection not returning Timestamp (GH11616)
• Bug in pd.read_clipboard and pd.to_clipboard functions not supporting Unicode; upgrade in-
cluded pyperclip to v1.5.15 (GH9263)
• Bug in DataFrame.query containing an assignment (GH8664)
• Bug in from_msgpack where __contains__() fails for columns of the unpacked DataFrame, if the
DataFrame has object columns. (GH11880)
• Bug in .resample on categorical data with TimedeltaIndex (GH12169)
• Bug in timezone info lost when broadcasting scalar datetime to DataFrame (GH11682)
• Bug in Index creation from Timestamp with mixed tz coerces to UTC (GH11488)
• Bug in to_numeric where it does not raise if input is more than one dimension (GH11776)
• Bug in parsing timezone offset strings with non-zero minutes (GH11708)
• Bug in df.plot using incorrect colors for bar plots under matplotlib 1.5+ (GH11614)
• Bug in the groupby plot method when using keyword arguments (GH11805).
• Bug in DataFrame.duplicated and drop_duplicates causing spurious matches when setting
keep=False (GH11864)
• Bug in .loc result with duplicated key may have Index with incorrect dtype (GH11497)
• Bug in pd.rolling_median where memory allocation failed even with sufficient memory (GH11696)
• Bug in DataFrame.style with spurious zeros (GH12134)
• Bug in DataFrame.style with integer columns not starting at 0 (GH12125)
• Bug in .style.bar that may not be rendered properly in specific browsers (GH11678)
• Bug in rich comparison of Timedelta with a numpy.array of Timedelta that caused an infinite recur-
sion (GH11835)
• Bug in DataFrame.round dropping column index name (GH11986)
• Bug in df.replace while replacing value in mixed dtype Dataframe (GH11698)
• Bug in Index prevents copying name of passed Index, when a new name is not provided (GH11193)
• Bug in read_excel failing to read any non-empty sheets when empty sheets exist and sheetname=None
(GH11711)
• Bug in read_excel failing to raise NotImplemented error when keywords parse_dates and
date_parser are provided (GH11544)
• Bug in read_sql with pymysql connections failing to return chunked data (GH11522)
• Bug in .to_csv ignoring formatting parameters decimal, na_rep, float_format for float indexes
(GH11553)
• Bug in Int64Index and Float64Index preventing the use of the modulo operator (GH9244)
• Bug in MultiIndex.drop for not lexsorted multi-indexes (GH12078)
• Bug in DataFrame when masking an empty DataFrame (GH11859)
• Bug in .plot potentially modifying the colors input when the number of columns didn’t match the number
of series provided (GH12039).
• Bug in Series.plot failing when index has a CustomBusinessDay frequency (GH7222).
• Bug in .to_sql for datetime.time values with sqlite fallback (GH8341)
• Bug in read_excel failing to read data with one column when squeeze=True (GH12157)
• Bug in read_excel failing to read one empty column (GH12292, GH9002)
• Bug in .groupby where a KeyError was not raised for a wrong column if there was only one row in the
dataframe (GH11741)
• Bug in .read_csv with dtype specified on empty data producing an error (GH12048)
• Bug in .read_csv where strings like '2E' are treated as valid floats (GH12237)
• Bug in building pandas with debugging symbols (GH12123)
• Removed millisecond property of DatetimeIndex. This would always raise a ValueError
(GH12019).
• Bug in Series constructor with read-only data (GH11502)
• Removed pandas.util.testing.choice(). Should use np.random.choice(), instead.
(GH12386)
• Bug in .loc setitem indexer preventing the use of a TZ-aware DatetimeIndex (GH12050)
• Bug in .style indexes and multi-indexes not appearing (GH11655)
• Bug in to_msgpack and from_msgpack which did not correctly serialize or deserialize NaT (GH12307).
• Bug in .skew and .kurt due to roundoff error for highly similar values (GH11974)
• Bug in Timestamp constructor where microsecond resolution was lost if HHMMSS were not separated with
‘:’ (GH10041)
• Bug in buffer_rd_bytes src->buffer could be freed more than once if reading failed, causing a segfault
(GH12098)
• Bug in crosstab where arguments with non-overlapping indexes would return a KeyError (GH10291)
• Bug in DataFrame.apply in which reduction was not being prevented for cases in which dtype was not a
numpy dtype (GH12244)
• Bug when initializing categorical series with a scalar value. (GH12336)
• Bug when specifying a UTC DatetimeIndex by setting utc=True in .to_datetime (GH11934)
• Bug when increasing the buffer size of CSV reader in read_csv (GH12494)
• Bug when setting columns of a DataFrame with duplicate column names (GH12344)
Note: We are proud to announce that pandas has become a sponsored project of the NumFOCUS organization.
This will help ensure the success of the development of pandas as a world-class open-source project.
This is a minor bug-fix release from 0.17.0 and includes a large number of bug fixes along with several new features,
enhancements, and performance improvements. We recommend that all users upgrade to this version.
Highlights include:
• Support for Conditional HTML Formatting, see here
• Releasing the GIL on the csv reader & other ops, see here
• Fixed regression in DataFrame.drop_duplicates from 0.16.2, causing incorrect results on integer values
(GH11376)
• New features
– Conditional HTML Formatting
• Enhancements
• API changes
– Deprecations
• Performance Improvements
• Bug Fixes
Warning: This is a new feature and is under active development. We’ll be adding features and possibly making
breaking changes in future releases. Feedback is welcome.
We’ve added experimental support for conditional HTML formatting: the visual styling of a DataFrame based on the
data. The styling is accomplished with HTML and CSS. Access the Styler class via the pandas.DataFrame.
style attribute, an instance of Styler with your data attached.
Here’s a quick example:
In [1]: np.random.seed(123)
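A minimal sketch of the idea (the coloring function and column names are hypothetical):

import numpy as np
import pandas as pd

np.random.seed(123)
df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])

def color_negative_red(val):
    # return a CSS declaration for each cell
    return 'color: red' if val < 0 else 'color: black'

styled = df.style.applymap(color_negative_red)  # a Styler; renders as styled HTML in a notebook
html = styled.render()                          # or produce the HTML string directly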
1.17.2 Enhancements
• Series of type category now make .str.<...> and .dt.<...> accessor methods / properties available,
if the categories are of that type. (GH10661)
In [9]: s = pd.Series(list('aabb')).astype('category')
In [10]: s
Out[10]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): [a, b]
In [11]: s.str.contains("a")
Out[11]:
0 True
1 True
2 False
3 False
dtype: bool
In [13]: date
Out[13]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04,
˓→2015-01-05]
In [14]: date.dt.day
Out[14]:
0 1
1 2
2 3
3 4
4 5
dtype: int64
• pivot_table now has a margins_name argument so you can use something other than the default of ‘All’
(GH3335)
• Implement export of datetime64[ns, tz] dtypes with a fixed HDF5 store (GH11411)
• Pretty printing sets (e.g. in DataFrame cells) now uses set literal syntax ({x, y}) instead of Legacy Python
syntax (set([x, y])) (GH11215)
• Improve the error message in pandas.io.gbq.to_gbq() when a streaming insert fails (GH11285) and
when the DataFrame does not match the schema of the destination table (GH11359)
1.17.3.1 Deprecations
• The pandas.io.ga module which implements google-analytics support is deprecated and will be
removed in a future version (GH11308)
• Deprecate the engine keyword in .to_csv(), which will be removed in a future version (GH11274)
• Bug in inference of numpy scalars and preserving dtype when setting columns (GH11638)
• Bug in to_sql using unicode column names giving UnicodeEncodeError (GH11431).
• Fix regression in setting of xticks in plot (GH11529).
• Bug in holiday.dates where observance rules could not be applied to holiday and doc enhancement
(GH11477, GH11533)
• Fix plotting issues when having plain Axes instances instead of SubplotAxes (GH11520, GH11556).
• Bug in DataFrame.to_latex() produces an extra rule when header=False (GH7124)
• Bug in df.groupby(...).apply(func) when a func returns a Series containing a new datetimelike
column (GH11324)
• Bug in pandas.json when file to load is big (GH11344)
• Bugs in to_excel with duplicate columns (GH11007, GH10982, GH10970)
• Fixed a bug that prevented the construction of an empty series of dtype datetime64[ns, tz] (GH11245).
• Bug in read_excel with multi-index containing integers (GH11317)
• Bug in to_excel with openpyxl 2.2+ and merging (GH11408)
• Bug in DataFrame.to_dict() produces a np.datetime64 object instead of Timestamp when only
datetime is present in data (GH11327)
• Bug in DataFrame.corr() raises exception when computes Kendall correlation for DataFrames with
boolean and not boolean columns (GH11560)
• Bug in the link-time error caused by C inline functions on FreeBSD 10+ (with clang) (GH10510)
• Bug in DataFrame.to_csv in passing through arguments for formatting MultiIndexes, including
date_format (GH7791)
• Bug in DataFrame.join() with how='right' producing a TypeError (GH11519)
• Bug in Series.quantile with empty list results has Index with object dtype (GH11588)
• Bug in pd.merge results in empty Int64Index rather than Index(dtype=object) when the merge
result is empty (GH11588)
• Bug in Categorical.remove_unused_categories when having NaN values (GH11599)
• Bug in DataFrame.to_sparse() loses column names for MultiIndexes (GH11600)
• Bug in DataFrame.round() with non-unique column index producing a Fatal Python error (GH11611)
• Bug in DataFrame.round() with decimals being a non-unique indexed Series producing extra columns
(GH11618)
This is a major release from 0.16.2 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Warning: pandas >= 0.17.0 will no longer support compatibility with Python version 3.2 (GH9118)
Warning: The pandas.io.data package is deprecated and will be replaced by the pandas-datareader pack-
age. This will allow the data modules to be independently updated to your pandas installation. The API for
pandas-datareader v0.1.1 is exactly the same as in pandas v0.17.0 (GH8961, GH10861).
After installing pandas-datareader, you can easily change your imports:
from pandas.io import data, wb
becomes
from pandas_datareader import data, wb
Highlights include:
• Release the Global Interpreter Lock (GIL) on some cython operations, see here
• Plotting methods are now available as attributes of the .plot accessor, see here
• The sorting API has been revamped to remove some long-time inconsistencies, see here
• Support for a datetime64[ns] with timezones as a first-class dtype, see here
• The default for to_datetime will now be to raise when presented with unparseable formats, previously
this would return the original input. Also, date parse functions now return consistent results. See here
• The default for dropna in HDFStore has changed to False, to store by default all rows even if they are all
NaN, see here
• Datetime accessor (dt) now supports Series.dt.strftime to generate formatted strings for datetime-
likes, and Series.dt.total_seconds to generate each duration of the timedelta in seconds. See here
• Period and PeriodIndex can handle multiplied freq like 3D, which corresponds to a span of 3 days. See here
• Development installed versions of pandas will now have PEP440 compliant version strings (GH9518)
• Development support for benchmarking with the Air Speed Velocity library (GH8361)
• Support for reading SAS xport files, see here
• Documentation comparing SAS to pandas, see here
• Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see here
• Display format with plain text can optionally align with Unicode East Asian Width, see here
• Compatibility with Python 3.5 (GH11097)
• Compatibility with matplotlib 1.5.0 (GH11111)
Check the API Changes and deprecations before updating.
• New features
– Datetime with TZ
– Releasing the GIL
– Plot submethods
– Additional methods for dt accessor
* strftime
* total_seconds
– Period Frequency Enhancement
– Support for SAS XPORT files
– Support for Math Functions in .eval()
– Changes to Excel with MultiIndex
– Google BigQuery Enhancements
– Display Alignment with Unicode East Asian Width
– Other enhancements
• Backwards incompatible API changes
– Changes to sorting API
– Changes to to_datetime and to_timedelta
* Error handling
* Consistent Parsing
– Changes to Index Comparisons
– Changes to Boolean Comparisons vs. None
– HDFStore dropna behavior
– Changes to display.precision option
– Changes to Categorical.unique
– Changes to bool passed as header in Parsers
– Other API Changes
– Deprecations
– Removal of prior version deprecations/changes
• Performance Improvements
• Bug Fixes
We are adding an implementation that natively supports datetime with timezones. A Series or a DataFrame
column previously could be assigned a datetime with timezones, and would work as an object dtype. This had
performance issues with a large number of rows. See the docs for more details. (GH8260, GH10763, GH11034).
The new implementation allows a single timezone across all rows, with operations performed in a performant manner.
In [1]: df = DataFrame({'A' : date_range('20130101',periods=3),
...: 'B' : date_range('20130101',periods=3,tz='US/Eastern'),
...: 'C' : date_range('20130101',periods=3,tz='CET')})
...:
In [2]: df
In [3]: df.dtypes
Out[3]:
A datetime64[ns]
B datetime64[ns, US/Eastern]
C datetime64[ns, CET]
dtype: object
In [4]: df.B
Out[4]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
Name: B, dtype: datetime64[ns, US/Eastern]
In [5]: df.B.dt.tz_localize(None)
Out[5]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
Name: B, dtype: datetime64[ns]
This uses a new-dtype representation as well, that is very similar in look-and-feel to its numpy cousin
datetime64[ns]
In [6]: df['B'].dtype
Out[6]: datetime64[ns, US/Eastern]
In [7]: type(df['B'].dtype)
Out[7]: pandas.core.dtypes.dtypes.DatetimeTZDtype
Note: There is a slightly different string repr for the underlying DatetimeIndex as a result of the dtype changes,
but functionally these are the same.
Previous Behavior:
In [1]: pd.date_range('20130101',periods=3,tz='US/Eastern')
Out[1]: DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00',
'2013-01-03 00:00:00-05:00'],
dtype='datetime64[ns]', freq='D', tz='US/Eastern')
In [2]: pd.date_range('20130101',periods=3,tz='US/Eastern').dtype
Out[2]: dtype('<M8[ns]')
New Behavior:
In [8]: pd.date_range('20130101',periods=3,tz='US/Eastern')
Out[8]:
DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00',
               '2013-01-03 00:00:00-05:00'],
              dtype='datetime64[ns, US/Eastern]', freq='D')
In [9]: pd.date_range('20130101',periods=3,tz='US/Eastern').dtype
Out[9]: datetime64[ns, US/Eastern]
We are releasing the global-interpreter-lock (GIL) on some cython operations. This will allow other threads to run
simultaneously during computation, potentially allowing performance improvements from multi-threading. Notably
groupby, nsmallest, value_counts and some indexing operations benefit from this. (GH8882)
For example the groupby expression in the following code will have the GIL released during the factorization step,
e.g. df.groupby('key') as well as the .sum() operation.
import numpy as np
import pandas as pd

N = 1000000
ngroups = 10

df = pd.DataFrame({'key': np.random.randint(0, ngroups, size=N),
                   'data': np.random.randn(N)})
df.groupby('key')['data'].sum()
Releasing of the GIL could benefit an application that uses threads for user interactions (e.g. QT), or performing
multi-threaded computations. A nice example of a library that can handle these types of computation-in-parallel is the
dask library.
The Series and DataFrame .plot() method allows for customizing plot types by supplying the kind keyword
arguments. Unfortunately, many of these kinds of plots use different required and optional keyword arguments, which
makes it difficult to discover what any given plot kind uses out of the dozens of possible arguments.
To alleviate this issue, we have added a new, optional plotting interface, which exposes each kind of plot as a method of
the .plot attribute. Instead of writing series.plot(kind=<kind>, ...), you can now also use series.
plot.<kind>(...):
In [11]: df.plot.bar()
As a result of this change, these methods are now all discoverable via tab-completion:
In [12]: df.plot.<TAB>
df.plot.area    df.plot.barh    df.plot.density    df.plot.hist    df.plot.line    df.plot.scatter
Each method signature only includes relevant arguments. Currently, these are limited to required arguments, but in the
future these will include optional arguments, as well. For an overview, see the new Plotting API documentation.
strftime
We are now supporting a Series.dt.strftime method for datetime-likes to generate a formatted string
(GH10110). Examples:
# DatetimeIndex
In [13]: s = pd.Series(pd.date_range('20130101', periods=4))
In [14]: s
Out[14]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]
In [15]: s.dt.strftime('%Y/%m/%d')
Out[15]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# PeriodIndex
In [16]: s = pd.Series(pd.period_range('20130101', periods=4))
In [17]: s
Out[17]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [18]: s.dt.strftime('%Y/%m/%d')
Out[18]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
The string format follows the Python standard library, and details can be found here
total_seconds
pd.Series of type timedelta64 has a new method .dt.total_seconds() returning the duration of the
timedelta in seconds (GH10817)
# TimedeltaIndex
In [19]: s = pd.Series(pd.timedelta_range('1 minutes', periods=4))
In [20]: s
Out[20]:
0 0 days 00:01:00
1 1 days 00:01:00
2 2 days 00:01:00
3 3 days 00:01:00
dtype: timedelta64[ns]
In [21]: s.dt.total_seconds()
Out[21]:
0 60.0
1 86460.0
2 172860.0
3 259260.0
dtype: float64
Period, PeriodIndex and period_range can now accept multiplied freq. Also, Period.freq and
PeriodIndex.freq are now stored as a DateOffset instance like DatetimeIndex, and not as str
(GH7811)
A multiplied freq represents a span of corresponding length. The example below creates a period of 3 days. Addition
and subtraction will shift the period by its span.
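For reference, a period with a multiplied frequency can be constructed along these lines (a minimal sketch):

import pandas as pd

p = pd.Period('2015-08-01', freq='3D')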
In [23]: p
Out[23]: Period('2015-08-01', '3D')
In [24]: p + 1
Out[24]: Period('2015-08-04', '3D')
In [25]: p - 2
Out[25]: Period('2015-07-26', '3D')
In [26]: p.to_timestamp()
Out[26]: Timestamp('2015-08-01 00:00:00')
In [27]: p.to_timestamp(how='E')
Out[27]: Timestamp('2015-08-03 00:00:00')
In [29]: idx
Out[29]: PeriodIndex(['2015-08-01', '2015-08-03', '2015-08-05', '2015-08-07'], dtype='period[2D]', freq='2D')
In [30]: idx + 1
Out[30]: PeriodIndex(['2015-08-03', '2015-08-05', '2015-08-07', '2015-08-09'], dtype='period[2D]', freq='2D')
read_sas() provides support for reading SAS XPORT format files. (GH4052).
df = pd.read_sas('sas_xport.xpt')
df = pd.DataFrame({'a': np.random.randn(10)})
df.eval("b = sin(a)")
The supported math functions are sin, cos, exp, log, expm1, log1p, sqrt, sinh, cosh, tanh, arcsin, arccos, arctan, arccosh,
arcsinh, arctanh, abs and arctan2.
These functions map to the intrinsics for the NumExpr engine. For the Python engine, they are mapped to NumPy
calls.
In version 0.16.2 a DataFrame with MultiIndex columns could not be written to Excel via to_excel. That
functionality has been added (GH10564), along with updating read_excel so that the data can be read back with
no loss of information by specifying which columns/rows make up the MultiIndex in the header and index_col
parameters (GH4679)
See the documentation for more details.
In [32]: df
Out[32]:
col1 foo bar
col2 a b a b
i1 i2
j l 1 2 3 4
k 5 6 7 8
In [33]: df.to_excel('test.xlsx')
In [35]: df
Out[35]:
col1 foo bar
col2 a b a b
i1 i2
j l 1 2 3 4
k 5 6 7 8
Previously, it was necessary to specify the has_index_names argument in read_excel, if the serialized data
had index names. For version 0.17.0 the output format of to_excel has been changed to make this keyword
unnecessary - the change is shown below.
Old
New
Warning: Excel files saved in version 0.16.2 or prior that had index names will still be able to be read in, but the
has_index_names argument must be specified as True.
• Added ability to automatically create a table/dataset using the pandas.io.gbq.to_gbq() function if the
destination table/dataset does not exist. (GH8325, GH11121).
• Added ability to replace an existing table and schema when calling the pandas.io.gbq.to_gbq() func-
tion via the if_exists argument. See the docs for more details (GH8325).
• InvalidColumnOrder and InvalidPageToken in the gbq module will raise ValueError instead of
IOError.
• The generate_bq_schema() function is now deprecated and will be removed in a future version
(GH11121)
• The gbq module will now support Python 3 (GH11094).
Warning: Enabling this option will affect the performance for printing of DataFrame and Series (about 2
times slower). Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters. If a DataFrame or
Series contains these characters, the default output cannot be aligned properly. The following options are added to
enable precise handling for these characters.
• display.unicode.east_asian_width: Whether to use the Unicode East Asian Width to calculate the
display text width. (GH2612)
• display.unicode.ambiguous_as_wide: Whether to handle Unicode characters belonging to Ambiguous
as Wide. (GH11102)
In [37]: df;
In [39]: df;
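A minimal sketch of enabling the option (the frame contents are a hypothetical example):

import pandas as pd

df = pd.DataFrame({u'国籍': [u'日本', u'米国'], u'名前': [u'太郎', u'花子']})

pd.set_option('display.unicode.east_asian_width', True)
print(df)   # columns are now padded using East Asian character widths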
• Support for openpyxl >= 2.2. The API for style support is now stable (GH10125)
• merge now accepts the argument indicator which adds a Categorical-type column (by default called
_merge) to the output object that takes on the values 'left_only', 'right_only', or 'both' (GH8790)
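A minimal sketch (hypothetical frames):

import pandas as pd

left = pd.DataFrame({'key': [1, 2], 'lval': ['a', 'b']})
right = pd.DataFrame({'key': [2, 3], 'rval': ['c', 'd']})

merged = pd.merge(left, right, on='key', how='outer', indicator=True)
# merged['_merge'] contains 'left_only', 'both', or 'right_only'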
Previous Behavior:
In [1] pd.concat([foo, bar, baz], 1)
Out[1]:
0 1 2
0 1 1 4
1 2 2 5
New Behavior:
In [46]: pd.concat([foo, bar, baz], 1)
Out[46]:
foo 0 1
0 1 1 4
1 2 2 5
• Added a DataFrame.round method to round the values to a variable number of decimal places (GH10568).
In [49]: df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'],
....: index=['first', 'second', 'third'])
....:
In [50]: df
Out[50]:
A B C
first 0.342764 0.304121 0.417022
second 0.681301 0.875457 0.510422
third 0.669314 0.585937 0.624904
In [51]: df.round(2)
Out[51]:
A B C
first 0.34 0.30 0.42
second 0.68 0.88 0.51
third 0.67 0.59 0.62
A B C
first 0.0 0.304121 0.42
second 1.0 0.875457 0.51
third 1.0 0.585937 0.62
• drop_duplicates and duplicated now accept a keep keyword to target first, last, and all duplicates.
The take_last keyword is deprecated, see here (GH6511, GH8505)
In [53]: s = pd.Series(['A', 'B', 'C', 'A', 'B', 'D'])
In [54]: s.drop_duplicates()
Out[54]:
0 A
1 B
2 C
5 D
dtype: object
In [55]: s.drop_duplicates(keep='last')
Out[55]:
2 C
3 A
4 B
5 D
dtype: object
In [56]: s.drop_duplicates(keep=False)
Out[56]:
2 C
5 D
dtype: object
• Reindex now has a tolerance argument that allows for finer control of limits on filling while reindexing
(GH10411):
In [59]: df = df.set_index('t')
In [60]: df.reindex(pd.to_datetime(['1999-12-31']),
....: method='nearest',
....: tolerance='1 day')
....:
Out[60]:
x
1999-12-31 0
tolerance is also exposed by the lower level Index.get_indexer and Index.get_loc methods.
• Added functionality to use the base argument when resampling a TimeDeltaIndex (GH10530)
• DatetimeIndex can be instantiated using strings containing NaT (GH7599)
• to_datetime can now accept the yearfirst keyword (GH7599)
• pandas.tseries.offsets larger than the Day offset can now be used with a Series for addi-
tion/subtraction (GH10699). See the docs for more details.
• pd.Timedelta.total_seconds() now returns Timedelta duration to ns precision (previously microsec-
ond precision) (GH10939)
• PeriodIndex now supports arithmetic with np.ndarray (GH10638)
• Support pickling of Period objects (GH10439)
• .as_blocks will now take a copy optional argument to return a copy of the data, default is to copy (no
change in behavior from prior versions), (GH9607)
• regex argument to DataFrame.filter now handles numeric column names instead of raising
ValueError (GH10384).
• Enable reading gzip compressed files via URL, either by explicitly setting the compression parameter or by
inferring from the presence of the HTTP Content-Encoding header in the response (GH8685)
• Enable writing Excel files in memory using StringIO/BytesIO (GH7074)
• Enable serialization of lists and dicts to strings in ExcelWriter (GH8188)
• SQL io functions now accept a SQLAlchemy connectable. (GH7877)
• pd.read_sql and to_sql can accept database URI as con parameter (GH10214)
• read_sql_table will now allow reading from views (GH10750).
• Enable writing complex values to HDFStores when using the table format (GH10447)
• Enable pd.read_hdf to be used without specifying a key when the HDF file contains a single dataset
(GH10443)
• pd.read_stata will now read Stata 118 type files. (GH9882)
• msgpack submodule has been updated to 0.4.6 with backward compatibility (GH10581)
• DataFrame.to_dict now accepts orient='index' keyword argument (GH10844).
• DataFrame.apply will return a Series of dicts if the passed function returns a dict and reduce=True
(GH8735).
• Allow passing kwargs to the interpolation methods (GH10378).
• Improved error message when concatenating an empty iterable of Dataframe objects (GH9157)
• pd.read_csv can now read bz2-compressed files incrementally, and the C parser can read bz2-compressed
files from AWS S3 (GH11070, GH11072).
• In pd.read_csv, recognize s3n:// and s3a:// URLs as designating S3 file storage (GH11070,
GH11071).
• Read CSV files from AWS S3 incrementally, instead of first downloading the entire file. (Full file download still
required for compressed files in Python 2.) (GH11070, GH11073)
• pd.read_csv is now able to infer compression type for files read from AWS S3 storage (GH11070,
GH11074).
The sorting API has had some longtime inconsistencies. (GH9816, GH8239).
Here is a summary of the API PRIOR to 0.17.0:
• Series.sort is INPLACE while DataFrame.sort returns a new object.
• Series.order returns a new object
• It was possible to use Series/DataFrame.sort_index to sort by values by passing the by keyword.
• Series/DataFrame.sortlevel worked only on a MultiIndex for sorting by index.
To address these issues, we have revamped the API:
• We have introduced a new method, DataFrame.sort_values(), which is the merger of DataFrame.
sort(), Series.sort(), and Series.order(), to handle sorting of values.
• The existing methods Series.sort(), Series.order(), and DataFrame.sort() have been depre-
cated and will be removed in a future version.
• The by argument of DataFrame.sort_index() has been deprecated and will be removed in a future
version.
• The existing method .sort_index() will gain the level keyword to enable level sorting.
We now have two distinct and non-overlapping methods of sorting. A * marks items that will show a
FutureWarning.
To sort by the values:
Previous                          Replacement
* Series.order()                  Series.sort_values()
* Series.sort()                   Series.sort_values(inplace=True)
* DataFrame.sort(columns=...)     DataFrame.sort_values(by=...)
To sort by the index:

Previous                          Replacement
Series.sort_index()               Series.sort_index()
Series.sortlevel(level=...)       Series.sort_index(level=...)
DataFrame.sort_index()            DataFrame.sort_index()
DataFrame.sortlevel(level=...)    DataFrame.sort_index(level=...)
* DataFrame.sort()                DataFrame.sort_index()
We have also deprecated and changed similar methods in two Series-like classes, Index and Categorical.
Previous                          Replacement
* Index.order()                   Index.sort_values()
* Categorical.order()             Categorical.sort_values()
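A minimal sketch of the new methods (hypothetical data):

import pandas as pd

s = pd.Series([3, 1, 2])
s.sort_values()                 # replaces s.order()
s.sort_values(inplace=True)     # replaces s.sort()

df = pd.DataFrame({'A': [2, 1, 3], 'B': list('xyz')})
df.sort_values(by='A')          # replaces df.sort(columns='A')
df.sort_index()                 # sorting by the index is unchanged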
Error handling
The default for pd.to_datetime error handling has changed to errors='raise'. In prior versions it was
errors='ignore'. Furthermore, the coerce argument has been deprecated in favor of errors='coerce'.
This means that invalid parsing will raise rather than return the original input as in previous versions. (GH10636)
Previous Behavior:
New Behavior:
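A sketch of the new default and the errors keyword (hypothetical inputs):

import pandas as pd

# errors='raise' is now the default; unparseable input raises ValueError:
# pd.to_datetime(['2009-07-31', 'asd'])
pd.to_datetime(['2009-07-31', 'asd'], errors='coerce')  # unparseable entries become NaT
pd.to_datetime(['2009-07-31', 'asd'], errors='ignore')  # returns the original input unchanged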
Consistent Parsing
The string parsing of to_datetime, Timestamp and DatetimeIndex has been made consistent. (GH7599)
Prior to v0.17.0, Timestamp and to_datetime could parse a year-only datetime string incorrectly using today's
date, whereas DatetimeIndex used the beginning of the year. Timestamp and to_datetime could also raise
ValueError for some types of datetime strings which DatetimeIndex can parse, such as quarterly strings.
Previous Behavior:
In [1]: Timestamp('2012Q2')
Traceback
...
ValueError: Unable to parse 2012Q2
In [63]: Timestamp('2012Q2')
Out[63]: Timestamp('2012-04-01 00:00:00')
In [64]: Timestamp('2014')
Out[64]: Timestamp('2014-01-01 00:00:00')
Note: If you want to perform calculations based on today’s date, use Timestamp.now() and pandas.
tseries.offsets.
In [67]: Timestamp.now()
Out[67]: Timestamp('2018-08-05 12:02:13.064182')
New Behavior:
Note that this is different from the numpy behavior where a comparison can be broadcast:
Boolean comparisons of a Series vs None will now be equivalent to comparing with np.nan, rather than raise
TypeError. (GH1079).
In [71]: s = Series(range(3))
In [73]: s
Out[73]:
0 0.0
1 NaN
2 2.0
dtype: float64
Previous Behavior:
In [5]: s==None
TypeError: Could not compare <type 'NoneType'> type with Series
New Behavior:
In [74]: s==None
Out[74]:
0 False
1 False
2 False
dtype: bool
Warning: You generally will want to use isnull/notnull for these types of comparisons, as isnull/
notnull tells you which elements are null. One has to be mindful that nan's don’t compare equal, but None's
do. Note that Pandas/numpy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [76]: None == None
Out[76]: True
The default behavior for HDFStore write functions with format='table' is now to keep rows that are all missing.
Previously, the behavior was to drop rows that were all missing save the index. The previous behavior can be replicated
using the dropna=True option. (GH9382)
Previous Behavior:
In [78]: df_with_missing = pd.DataFrame({'col1':[0, np.nan, 2],
....: 'col2':[1, np.nan, np.nan]})
....:
In [79]: df_with_missing
Out[79]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [27]: df_with_missing.to_hdf('file.h5',
   ....:                        'df_with_missing',
   ....:                        format='table',
   ....:                        mode='w')

In [28]: pd.read_hdf('file.h5', 'df_with_missing')
Out[28]:
col1 col2
0 0.0 1.0
2 2.0 NaN
New Behavior:
In [80]: df_with_missing.to_hdf('file.h5',
....: 'df_with_missing',
....: format='table',
....: mode='w')
....:
The display.precision option has been clarified to refer to decimal places (GH10451).
Earlier versions of pandas would format floating point numbers to have one less decimal place than the value in
display.precision.
In [1]: pd.set_option('display.precision', 2)
If interpreting precision as “significant figures” this did work for scientific notation but that same interpretation did not
work for values with standard formatting. It was also out of step with how numpy handles formatting.
Going forward the value of display.precision will directly control the number of places after the decimal, for
regular formatting as well as scientific notation, similar to how numpy’s precision print option works.
In [82]: pd.set_option('display.precision', 2)
To preserve output behavior with prior versions the default value of display.precision has been reduced to 6
from 7.
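A sketch of the new meaning (hypothetical value):

import pandas as pd

pd.set_option('display.precision', 2)
print(pd.DataFrame({'x': [123.456789]}))
# the value is shown with two places after the decimal, e.g. 123.46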
Categorical.unique now returns new Categoricals with categories and codes that are unique, rather
than returning np.array (GH10508)
• unordered category: values and categories are sorted by appearance order.
• ordered category: values are sorted by appearance order, categories keep existing order.
In [85]: cat
Out[85]:
[C, A, B, C]
Categories (3, object): [A < B < C]
In [86]: cat.unique()
Out[86]:
[C, A, B]
Categories (3, object): [A < B < C]
In [88]: cat
Out[88]:
[C, A, B, C]
Categories (3, object): [A, B, C]
In [89]: cat.unique()
Out[89]:
[C, A, B]
Categories (3, object): [C, A, B]
In earlier versions of pandas, if a bool was passed to the header argument of read_csv, read_excel, or
read_html it was implicitly converted to an integer, resulting in header=0 for False and header=1 for True
(GH6113)
A bool input to header will now raise a TypeError.
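For example (the file name is hypothetical):

import pandas as pd

# pd.read_csv('data.csv', header=False)    # now raises TypeError
df = pd.read_csv('data.csv', header=None)  # use None to indicate there is no header row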
• Line and kde plot with subplots=True now uses default colors, not all black. Specify color='k' to draw
all lines in black (GH9894)
• Calling the .value_counts() method on a Series with a categorical dtype now returns a Series with a
CategoricalIndex (GH10704)
• The metadata properties of subclasses of pandas objects will now be serialized (GH10553).
• groupby using Categorical follows the same rule as Categorical.unique described above
(GH10508)
• Previously, constructing a DataFrame with an array of complex64 dtype meant the corresponding column
was automatically promoted to the complex128 dtype. Pandas will now preserve the itemsize of the
input for complex data (GH10952)
• some numeric reduction operators would return ValueError, rather than TypeError on object types that
includes strings and numbers (GH11131)
• Passing currently unsupported chunksize argument to read_excel or ExcelFile.parse will now
raise NotImplementedError (GH8011)
• Allow an ExcelFile object to be passed into read_excel (GH11198)
• DatetimeIndex.union does not infer freq if self and the input have None as freq (GH11086)
• NaT’s methods now either raise ValueError, or return np.nan or NaT (GH9513)
Behavior                        Methods
return np.nan                   weekday, isoweekday
return NaT                      date, now, replace, to_datetime, today
return np.datetime64('NaT')     to_datetime64 (unchanged)
raise ValueError                All other public methods (names not beginning with underscores)
1.18.2.10 Deprecations
Note: These indexing functions have been deprecated in the documentation since 0.11.0.
• TimeSeries deprecated in favor of Series (note that this has been an alias since 0.13.0), (GH10890)
• SparsePanel deprecated and will be removed in a future version (GH11157).
• Series.is_time_series deprecated in favor of Series.index.is_all_dates (GH11135)
• Legacy offsets (like 'A@JAN') are deprecated (note that this has been an alias since 0.8.0) (GH10878)
• WidePanel deprecated in favor of Panel, LongPanel in favor of DataFrame (note these have been
aliases since < 0.11.0), (GH10892)
• DataFrame.convert_objects has been deprecated in favor of type-specific functions pd.
to_datetime, pd.to_timedelta and pd.to_numeric (new in 0.17.0) (GH11133).
In [90]: np.random.seed(1234)
In [91]: df = DataFrame(np.random.randn(5,2),columns=list('AB'),index=date_range('20130101',periods=5))
In [92]: df
Out[92]:
A B
2013-01-01 0.471435 -1.190976
2013-01-02 1.432707 -0.312652
2013-01-03 -0.720589 0.887163
2013-01-04 0.859588 -0.636524
2013-01-05 0.015696 -2.242685
Previously
In [3]: df + df.A
FutureWarning: TimeSeries broadcasting along DataFrame index by default is deprecated.
Out[3]:
A B
2013-01-01 0.942870 -0.719541
2013-01-02 2.865414 1.120055
2013-01-03 -1.441177 0.166574
2013-01-04 1.719177 0.223065
2013-01-05 0.031393 -2.226989
Current
In [93]: df.add(df.A,axis='index')
Out[93]:
A B
2013-01-01 0.942870 -0.719541
2013-01-02 2.865414 1.120055
2013-01-03 -1.441177 0.166574
2013-01-04 1.719177 0.223065
2013-01-05 0.031393 -2.226989
• Development support for benchmarking with the Air Speed Velocity library (GH8361)
• Added vbench benchmarks for alternative ExcelWriter engines and reading Excel files (GH7171)
• Performance improvements in Categorical.value_counts (GH10804)
• Performance improvements in SeriesGroupBy.nunique and SeriesGroupBy.value_counts and
SeriesGroupby.transform (GH10820, GH11077)
• Performance improvements in DataFrame.drop_duplicates with integer dtypes (GH10917)
• Performance improvements in DataFrame.duplicated with wide frames. (GH10161, GH11180)
• 4x improvement in timedelta string parsing (GH6755, GH10426)
• 8x improvement in timedelta64 and datetime64 ops (GH6755)
• Significantly improved performance of indexing MultiIndex with slicers (GH10287)
• 8x improvement in iloc using list-like input (GH10791)
• Improved performance of Series.isin for datetimelike/integer Series (GH10287)
• 20x improvement in concat of Categoricals when categories are identical (GH10587)
• Improved performance of to_datetime when specified format string is ISO8601 (GH10178)
• 2x improvement of Series.value_counts for float dtype (GH10821)
• Enable infer_datetime_format in to_datetime when date components do not have 0 padding
(GH11142)
• Regression from 0.16.1 in constructing DataFrame from nested dictionary (GH11084)
• Performance improvements in addition/subtraction operations for DateOffset with Series or
DatetimeIndex (GH10744, GH11205)
• Regression fixed in (GH9311, GH6620, GH9345), where groupby with a datetime-like converting to float with
certain aggregators (GH10979)
• Bug in DataFrame.interpolate with axis=1 and inplace=True (GH10395)
• Bug in io.sql.get_schema when specifying multiple columns as primary key (GH10385).
• Bug in groupby(sort=False) with datetime-like Categorical raises ValueError (GH10505)
• Bug in groupby(axis=1) with filter() throws IndexError (GH11041)
• Bug in test_categorical on big-endian builds (GH10425)
• Bug in Series.shift and DataFrame.shift not supporting categorical data (GH9416)
• Bug in Series.map using categorical Series raises AttributeError (GH10324)
• Bug in MultiIndex.get_level_values including Categorical raises AttributeError
(GH10460)
• Bug in pd.get_dummies with sparse=True not returning SparseDataFrame (GH10531)
• Bug in Index subtypes (such as PeriodIndex) not returning their own type for .drop and .insert
methods (GH10620)
• Bug in algos.outer_join_indexer when right array is empty (GH10618)
• Bug in filter (regression from 0.16.0) and transform when grouping on multiple keys, one of which is
datetime-like (GH10114)
• Bug in to_datetime and to_timedelta causing Index name to be lost (GH10875)
• Bug in len(DataFrame.groupby) causing IndexError when there’s a column containing only NaNs
(GH11016)
• Bug that caused segfault when resampling an empty Series (GH10228)
• Bug in DatetimeIndex and PeriodIndex.value_counts resets name from its result, but retains in
result’s Index. (GH10150)
• Bug in pd.eval using numexpr engine coerces 1 element numpy array to scalar (GH10546)
• Bug in pd.concat with axis=0 when column is of dtype category (GH10177)
• Bug in read_msgpack where input type is not always checked (GH10369, GH10630)
• Bug in pd.read_csv with kwargs index_col=False, index_col=['a', 'b'] or dtype
(GH10413, GH10467, GH10577)
• Bug in Series.from_csv with header kwarg not setting the Series.name or the Series.index.
name (GH10483)
• Bug in groupby.var which caused variance to be inaccurate for small float values (GH10448)
• Bug in Series.plot(kind='hist') Y Label not informative (GH10485)
• Bug in read_csv when using a converter which generates a uint8 type (GH9266)
• Bug causes memory leak in time-series line and area plot (GH9003)
• Bug when setting a Panel sliced along the major or minor axes when the right-hand side is a DataFrame
(GH11014)
• Bug that returns None and does not raise NotImplementedError when operator functions (e.g. .add) of
Panel are not implemented (GH7692)
• Bug in line and kde plot cannot accept multiple colors when subplots=True (GH9894)
• Bug in DataFrame.plot raises ValueError when color name is specified by multiple characters
(GH10387)
• Bug in left and right align of Series with MultiIndex may be inverted (GH10665)
• Bug in left and right join with MultiIndex may be inverted (GH10741)
• Bug in read_stata when reading a file with a different order set in columns (GH10757)
• Bug in Categorical may not representing properly when category contains tz or Period (GH10713)
• Bug in Categorical.__iter__ may not returning correct datetime and Period (GH10713)
• Bug in indexing with a PeriodIndex on an object with a PeriodIndex (GH4125)
• Bug in read_csv with engine='c': EOF preceded by a comment, blank line, etc. was not handled correctly
(GH10728, GH10548)
• Reading “famafrench” data via DataReader results in HTTP 404 error because the website url has changed
(GH10591).
• Bug in read_msgpack where DataFrame to decode has duplicate column names (GH9618)
• Bug in io.common.get_filepath_or_buffer which caused reading of valid S3 files to fail if the
bucket also contained keys for which the user does not have read permission (GH10604)
• Bug in vectorised setting of timestamp columns with python datetime.date and numpy datetime64
(GH10408, GH10412)
• Bug in Index.take may add unnecessary freq attribute (GH10791)
• Bug in merge with empty DataFrame may raise IndexError (GH10824)
• Bug in to_latex where unexpected keyword argument for some documented arguments (GH10888)
• Bug in indexing of large DataFrame where IndexError is uncaught (GH10645 and GH10692)
• Bug in read_csv when using the nrows or chunksize parameters if file contains only a header line
(GH9535)
• Bug in serialization of category types in HDF5 in presence of alternate encodings. (GH10366)
• Bug in pd.DataFrame when constructing an empty DataFrame with a string dtype (GH9428)
• Bug in pd.DataFrame.diff when DataFrame is not consolidated (GH10907)
• Bug in pd.unique for arrays with the datetime64 or timedelta64 dtype that meant an array with object
dtype was returned instead of the original dtype (GH9431)
• Bug in Timedelta raising error when slicing from 0s (GH10583)
• Bug in DatetimeIndex.take and TimedeltaIndex.take may not raise IndexError against invalid
index (GH10295)
• Bug in Series([np.nan]).astype('M8[ms]'), which now returns Series([pd.NaT])
(GH10747)
• Bug in PeriodIndex.order reset freq (GH10295)
• Bug in date_range when freq divides end as nanos (GH10885)
• Bug in iloc allowing memory outside bounds of a Series to be accessed with negative integers (GH10779)
• Bug in read_msgpack where encoding is not respected (GH10581)
• Bug preventing access to the first index when using iloc with a list containing the appropriate negative integer
(GH10547, GH10779)
• Bug in TimedeltaIndex formatter causing error while trying to save DataFrame with
TimedeltaIndex using to_csv (GH10833)
• Bug in DataFrame.where when handling Series slicing (GH10218, GH9558)
• Bug where pd.read_gbq throws ValueError when Bigquery returns zero rows (GH10273)
• Bug in to_json which was causing segmentation fault when serializing 0-rank ndarray (GH9576)
• Bug in plotting functions may raise IndexError when plotted on GridSpec (GH10819)
• Bug in plot result may show unnecessary minor ticklabels (GH10657)
• Bug in groupby causing incorrect computation for aggregations on a DataFrame with NaT (e.g. first, last, min). (GH10590, GH11010)
• Bug when constructing DataFrame where passing a dictionary with only scalar values and specifying columns
did not raise an error (GH10856)
• Bug in .var() causing roundoff errors for highly similar values (GH10242)
• Bug in DataFrame.plot(subplots=True) with duplicated columns outputs incorrect result (GH10962)
• Bug in Index arithmetic may result in incorrect class (GH10638)
• Bug in date_range returning an empty result if freq is a negative annual, quarterly or monthly frequency (GH11018)
• Bug in DatetimeIndex not being able to infer a negative freq (GH11018)
• Remove use of some deprecated numpy comparison operations, mainly in tests. (GH10569)
• Bug in Index where the dtype may not be applied properly (GH11017)
• Bug in io.gbq when testing for minimum google api client version (GH10652)
• Bug in DataFrame construction from nested dict with timedelta keys (GH11129)
• Bug in .fillna which may raise TypeError when the data contains a datetime dtype (GH7095, GH11153)
• Bug in .groupby when number of keys to group by is same as length of index (GH11185)
• Bug in convert_objects where converted values might not be returned if all null and coerce (GH9589)
• Bug in convert_objects where copy keyword was not respected (GH9589)
This is a minor bug-fix release from 0.16.1 and includes a large number of bug fixes along with some new features (the pipe() method), enhancements, and performance improvements.
We recommend that all users upgrade to this version.
Highlights include:
• A new pipe method, see here
• Documentation on how to use numba with pandas, see here
• New features
– Pipe
– Other Enhancements
• API Changes
• Performance Improvements
• Bug Fixes
1.19.1.1 Pipe
We’ve introduced a new method DataFrame.pipe(). As suggested by the name, pipe should be used to pipe
data through a chain of function calls. The goal is to avoid confusing nested function calls like
# df is a DataFrame
# f, g, and h are functions that take and return DataFrames
f(g(h(df), arg1=1), arg2=2, arg3=3)
The logic flows from inside out, and function names are separated from their keyword arguments. This can be rewritten
as
(df.pipe(h)
.pipe(g, arg1=1)
.pipe(f, arg2=2, arg3=3)
)
Now both the code and the logic flow from top to bottom. Keyword arguments are next to their functions. Overall the
code is much more readable.
In the example above, the functions f, g, and h each expected the DataFrame as the first positional argument. When
the function you wish to apply takes its data anywhere other than the first argument, pass a tuple of (function,
keyword) indicating where the DataFrame should flow. For example:
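A minimal sketch of that form (the function fit, its signature, and the data here are hypothetical, used only to illustrate the (function, keyword) tuple):

import pandas as pd

def fit(formula, data):
    # hypothetical function whose DataFrame arrives as the ``data`` keyword,
    # not as the first positional argument
    return formula, len(data)

df = pd.DataFrame({'x': [1, 2, 3], 'y': [2, 4, 6]})

# the (callable, keyword) tuple tells pipe to pass the DataFrame as ``data``
df.pipe((fit, 'data'), 'y ~ x')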
The pipe method is inspired by unix pipes, which stream text through processes. More recently dplyr and magrittr
have introduced the popular (%>%) pipe operator for R.
See the documentation for more. (GH10129)
• Holiday now raises NotImplementedError if both offset and observance are used in the construc-
tor instead of returning an incorrect result (GH10217).
• Bug in Series.hist raises an error when a one row Series was given (GH10214)
• Bug where HDFStore.select modifies the passed columns list (GH7212)
• Bug in Categorical repr with display.width of None in Python 3 (GH10087)
• Bug in to_json with certain orients and a CategoricalIndex would segfault (GH10317)
• Bug where some of the nan funcs do not have consistent return dtypes (GH10251)
• Bug in DataFrame.quantile on checking that a valid axis was passed (GH9543)
• Bug in groupby.apply aggregation for Categorical not preserving categories (GH10138)
• Bug in to_csv where date_format is ignored if the datetime is fractional (GH10209)
• Bug in DataFrame.to_json with mixed data types (GH10289)
• Bug in cache updating when consolidating (GH10264)
• Bug in mean() where integer dtypes can overflow (GH10172)
• Bug where Panel.from_dict does not set dtype when specified (GH10058)
• Bug in Index.union raises AttributeError when passing array-likes. (GH10149)
• Bug in Timestamp's microsecond, quarter, dayofyear, week and daysinmonth properties returning np.int type, not built-in int. (GH10050)
• Bug in NaT raising AttributeError when accessing the daysinmonth and dayofweek properties. (GH10096)
• Bug in Index repr when using the max_seq_items=None setting (GH10182).
• Bug in getting timezone data with dateutil on various platforms ( GH9059, GH8639, GH9663, GH10121)
• Bug in displaying datetimes with mixed frequencies; display ‘ms’ datetimes to the proper precision. (GH10170)
• Bug in setitem where type promotion is applied to the entire block (GH10280)
• Bug in Series arithmetic methods may incorrectly hold names (GH10068)
• Bug in GroupBy.get_group when grouping on multiple keys, one of which is categorical. (GH10132)
• Bug in DatetimeIndex and TimedeltaIndex names are lost after timedelta arithmetics ( GH9926)
• Bug in DataFrame construction from nested dict with datetime64 (GH10160)
• Bug in Series construction from dict with datetime64 keys (GH9456)
• Bug in Series.plot(label="LABEL") not correctly setting the label (GH10119)
• Bug in plot not defaulting to matplotlib axes.grid setting (GH9792)
• Bug causing strings containing an exponent, but no decimal to be parsed as int instead of float in
engine='python' for the read_csv parser (GH9565)
• Bug in Series.align resets name when fill_value is specified (GH10067)
• Bug in read_csv causing index name not to be set on an empty DataFrame (GH10184)
• Bug in SparseSeries.abs resets name (GH10241)
• Bug in TimedeltaIndex slicing may reset freq (GH10292)
• Bug in GroupBy.get_group raises ValueError when group key contains NaT (GH6992)
• Bug in SparseSeries constructor ignores input data name (GH10258)
This is a minor bug-fix release from 0.16.0 and includes a large number of bug fixes along with several new features, enhancements, and performance improvements. We recommend that all users upgrade to this version.
Highlights include:
• Support for a CategoricalIndex, a category based index, see here
• New section on how-to-contribute to pandas, see here
• Revised “Merge, join, and concatenate” documentation, including graphical examples to make it easier to understand each operation, see here
• New method sample for drawing random samples from Series, DataFrames and Panels. See here
• The default Index printing has changed to a more uniform format, see here
• BusinessHour datetime-offset is now supported, see here
• Further enhancement to the .str accessor to make string operations easier, see here
• Enhancements
– CategoricalIndex
– Sample
– String Methods Enhancements
– Other Enhancements
• API changes
– Deprecations
• Index Representation
• Performance Improvements
• Bug Fixes
Warning: In pandas 0.17.0, the sub-package pandas.io.data will be removed in favor of a separately
installable package (GH8961).
1.20.1 Enhancements
1.20.1.1 CategoricalIndex
We introduce a CategoricalIndex, a new type of index object that is useful for supporting indexing with dupli-
cates. This is a container around a Categorical (introduced in v0.15.0) and allows efficient indexing and storage
of an index with a large number of duplicated elements. Prior to 0.16.1, setting the index of a DataFrame/Series
with a category dtype would convert this to regular object-based Index.
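The construction used below is not shown in this section; a sketch consistent with the outputs (the exact call is an assumption) would be:

import pandas as pd

df = pd.DataFrame({'A': range(6),
                   'B': pd.Series(list('aabbca')).astype('category',
                                                         categories=list('cab'))})
df2 = df.set_index('B')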
In [2]: df
Out[2]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [3]: df.dtypes
Out[3]:
A int64
B category
dtype: object
In [4]: df.B.cat.categories
Out[4]: Index(['c', 'a', 'b'], dtype='object')
In [6]: df2.index
Out[6]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'],
˓→ordered=False, name='B', dtype='category')
indexing with __getitem__/.iloc/.loc/.ix works similarly to an Index with duplicates. The indexers
MUST be in the category or the operation will raise.
In [7]: df2.loc['a']
Out[7]:
A
B
a 0
a 1
a 5
groupby operations on the index will preserve the index nature as well
In [10]: df2.groupby(level=0).sum()
Out[10]:
A
B
c 4
a 6
b 5
In [11]: df2.groupby(level=0).sum().index
Out[11]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False,
˓→name='B', dtype='category')
Reindexing operations will return a resulting index based on the type of the passed indexer, meaning that passing a list will return a plain-old Index; indexing with a Categorical will return a CategoricalIndex, indexed according to the categories of the PASSED Categorical dtype. This allows one to arbitrarily index these even with values NOT in the categories, similarly to how you can reindex ANY pandas index.
In [12]: df2.reindex(['a','e'])
Out[12]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [13]: df2.reindex(['a','e']).index
Out[13]: Index(['a', 'a', 'a', 'e'], dtype='object', name='B')
In [14]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
Out[14]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [15]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
Out[15]: CategoricalIndex(['a', 'a', 'a', 'e'], categories=['a', 'b', 'c', 'd', 'e'],
˓→ordered=False, name='B', dtype='category')
1.20.1.2 Sample
Series, DataFrames, and Panels now have a new method: sample(). The method accepts a specific number of rows
or columns to return, or a fraction of the total number of rows or columns. It also has options for sampling with or
without replacement, for passing in a column for weights for non-uniform sampling, and for setting seed values to
facilitate replication. (GH2419)
When applied to a DataFrame, one may pass the name of a column to specify sampling weights when sampling from
rows.
In [9]: df = DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
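For instance, a hedged sketch of weighted row sampling on the frame above (the call details are illustrative, not taken from the original example):

# sample three rows without replacement, weighting by 'weight_column';
# the row with zero weight can never be drawn
df.sample(n=3, weights='weight_column', random_state=42)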
1.20.1.3 String Methods Enhancements
Continuing from v0.16.0, the following enhancements make string operations easier and more consistent with standard Python string operations.
• Added StringMethods (.str accessor) to Index (GH9068)
The .str accessor is now available for both Series and Index.
In [11]: idx = Index([' jack', 'jill ', ' jesse ', 'frank'])
In [12]: idx.str.strip()
Out[12]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
One special case for the .str accessor on Index is that if a string method returns bool, the .str accessor
will return a np.array instead of a boolean Index (GH8875). This enables the following expression to work
naturally:
In [13]: idx = Index(['a1', 'a2', 'b1', 'b2'])
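The Series s used below is presumably built on this index, along the lines of (an assumption inferred from the output):

s = pd.Series(range(4), index=idx)  # idx as defined just above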
In [15]: s
Out[15]:
a1 0
a2 1
b1 2
b2 3
dtype: int64
In [16]: idx.str.startswith('a')
Out[16]: array([ True, True, False, False], dtype=bool)
In [17]: s[s.index.str.startswith('a')]
Out[17]:
a1 0
a2 1
dtype: int64
• The following new methods are accessible via the .str accessor, applying the function to each value. (GH9766, GH9773, GH10031, GH10045, GH10052)
Methods
capitalize() swapcase() normalize() partition() rpartition()
index() rindex() translate()
• split now takes expand keyword to specify whether to expand dimensionality. return_type is depre-
cated. (GH9847)
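The Series and Index used in the split examples below are presumably constructed along these lines (a sketch inferred from the outputs):

s = pd.Series(['a,b', 'a,c', 'b,c'])
idx = pd.Index(['a,b', 'a,c', 'b,c'])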
# return Series
In [19]: s.str.split(',')
Out[19]:
0 [a, b]
1 [a, c]
2 [b, c]
dtype: object
# return DataFrame
In [20]: s.str.split(',', expand=True)
Out[20]:
0 1
0 a b
1 a c
2 b c
# return Index
In [22]: idx.str.split(',')
Out[22]: Index([['a', 'b'], ['a', 'c'], ['b', 'c']], dtype='object')
# return MultiIndex
In [23]: idx.str.split(',', expand=True)
Out[23]:
MultiIndex(levels=[['a', 'b'], ['b', 'c']],
labels=[[0, 0, 1], [0, 1, 1]])
• BusinessHour offset is now supported, which represents business hours starting from 09:00 - 17:00 on
BusinessDay by default. See Here for details. (GH7905)
• DataFrame.diff now takes an axis parameter that determines the direction of differencing (GH9727)
• Allow clip, clip_lower, and clip_upper to accept array-like arguments as thresholds (This is a regres-
sion from 0.11.0). These methods now have an axis parameter which determines how the Series or DataFrame
will be aligned with the threshold(s). (GH6966)
• DataFrame.mask() and Series.mask() now support same keywords as where (GH8801)
• drop function can now accept an errors keyword to suppress the ValueError raised when any of the labels do not exist in the target data. (GH6736)
• Add support for separating years and quarters using dashes, for example 2014-Q1. (GH9688)
• Allow conversion of values with dtype datetime64 or timedelta64 to strings using astype(str)
(GH9757)
• get_dummies function now accepts a sparse keyword. If set to True, the returned DataFrame is sparse, e.g. SparseDataFrame. (GH8823)
• Period now accepts datetime64 as value input. (GH9054)
• Allow timedelta string conversion when leading zero is missing from time definition, ie 0:00:00 vs 00:00:00.
(GH9570)
• Allow Panel.shift with axis='items' (GH9890)
• Trying to write an excel file now raises NotImplementedError if the DataFrame has a MultiIndex
instead of writing a broken Excel file. (GH9794)
• Allow Categorical.add_categories to accept Series or np.array. (GH9927)
• Add/delete str/dt/cat accessors dynamically from __dir__. (GH9910)
• Add normalize as a dt accessor method. (GH10047)
• DataFrame and Series now have _constructor_expanddim property as overridable constructor for
one higher dimensionality data. This should be used only when it is really needed, see here
• pd.lib.infer_dtype now returns 'bytes' in Python 3 where appropriate. (GH10032)
• When passing an ax to df.plot(..., ax=ax), the sharex kwarg will now default to False. As a result, the visibility of xlabels and xticklabels is no longer changed. You have to do that yourself for the correct axes in your figure, or set sharex=True explicitly (but this changes the visibility for all axes in the figure, not only the one that is passed in!). If pandas creates the subplots itself (i.e. no ax kwarg is passed in), then the default is still sharex=True and the visibility changes are applied.
• assign() now inserts new columns in alphabetical order. Previously the order was arbitrary. (GH9777)
• By default, read_csv and read_table will now try to infer the compression type based on the file exten-
sion. Set compression=None to restore the previous behavior (no decompression). (GH9770)
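A brief sketch of the new default (the file name is hypothetical):

import pandas as pd

df = pd.read_csv('data.csv.gz')                      # gzip inferred from the '.gz' extension
raw = pd.read_csv('data.csv.gz', compression=None)   # previous behavior: no decompression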
1.20.2.1 Deprecations
1.20.3 Index Representation
The string representation of Index and its sub-classes has now been unified. These will show a single-line display if there are few values; a wrapped multi-line display for a lot of values (but less than display.max_seq_items); if there are lots of items (> display.max_seq_items), a truncated display (the head and tail of the data) will be shown. The formatting for MultiIndex is unchanged (a multi-line wrapped display). The display width responds to the option display.max_seq_items, which defaults to 100. (GH6482)
Previous Behavior
In [2]: pd.Index(range(4),name='foo')
Out[2]: Int64Index([0, 1, 2, 3], dtype='int64')
In [3]: pd.Index(range(104),name='foo')
Out[3]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
˓→19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
˓→40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
˓→61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
˓→82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, ...], dtype=
˓→'int64')
In [4]: pd.date_range('20130101',periods=4,name='foo',tz='US/Eastern')
Out[4]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00-05:00, ..., 2013-01-04 00:00:00-05:00]
Length: 4, Freq: D, Timezone: US/Eastern
In [5]: pd.date_range('20130101',periods=104,name='foo',tz='US/Eastern')
Out[5]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00-05:00, ..., 2013-04-14 00:00:00-04:00]
Length: 104, Freq: D, Timezone: US/Eastern
New Behavior
In [30]: pd.set_option('display.width', 80)
• Improved csv write performance with mixed dtypes, including datetimes by up to 5x (GH9940)
• Improved csv write performance generally by 2x (GH9940)
• Improved the performance of pd.lib.max_len_string_array by 5-7x (GH10024)
• Bug where labels did not appear properly in the legend of DataFrame.plot(), passing label= arguments
works, and Series indices are no longer mutated. (GH9542)
• Bug in json serialization causing a segfault when a frame had zero length. (GH9805)
• Bug in read_csv where missing trailing delimiters would cause segfault. (GH5664)
• Bug in retaining index name on appending (GH9862)
• Bug in scatter_matrix draws unexpected axis ticklabels (GH5662)
• Fixed bug in StataWriter resulting in changes to input DataFrame upon save (GH9795).
• Bug in transform causing length mismatch when null entries were present and a fast aggregator was being
used (GH9697)
• Bug in equals causing false negatives when block order differed (GH9330)
• Bug in grouping with multiple pd.Grouper where one is non-time based (GH10063)
• Bug in read_sql_table error when reading postgres table with timezone (GH7139)
• Bug in DataFrame slicing may not retain metadata (GH9776)
• Bug where TimedeltaIndex were not properly serialized in fixed HDFStore (GH9635)
• Bug with TimedeltaIndex constructor ignoring name when given another TimedeltaIndex as data
(GH10025).
• Bug in DataFrameFormatter._get_formatted_index with not applying max_colwidth to the
DataFrame index (GH7856)
• Bug in .loc with a read-only ndarray data source (GH10043)
• Bug in groupby.apply() that would raise if a passed user-defined function returned only None for all input. (GH9685)
• Bug in bar plot with log=True raises TypeError if all values are less than 1 (GH9905)
• Bug in horizontal bar plot ignores log=True (GH9905)
• Bug in PyTables queries that did not return proper results using the index (GH8265, GH9676)
• Bug where dividing a dataframe containing values of type Decimal by another Decimal would raise.
(GH9787)
• Bug where using DataFrame's asfreq would remove the name of the index. (GH9885)
• Bug causing an extra index point when resampling with BM/BQ (GH9756)
• Changed caching in AbstractHolidayCalendar to be at the instance level rather than at the class level as
the latter can result in unexpected behaviour. (GH9552)
• Fixed latex output for multi-indexed dataframes (GH9778)
• Bug causing an exception when setting an empty range using DataFrame.loc (GH9596)
• Bug in hiding ticklabels with subplots and shared axes when adding a new plot to an existing grid of axes
(GH9158)
• Bug in transform and filter when grouping on a categorical variable (GH9921)
• Bug in transform when groups are equal in number and dtype to the input index (GH9700)
• Google BigQuery connector now imports dependencies on a per-method basis.(GH9713)
• Updated BigQuery connector to no longer use deprecated oauth2client.tools.run() (GH8327)
• Bug in subclassed DataFrame: it may not return the correct class when slicing or subsetting it. (GH9632)
• Bug in .median() where non-float null values are not handled correctly (GH10040)
• Bug in Series.fillna() where it raises if a numerically convertible string is given (GH10092)
This is a major release from 0.15.2 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
• DataFrame.assign method, see here
• Series.to_coo/from_coo methods to interact with scipy.sparse, see here
• Backwards incompatible change to Timedelta to conform the .seconds attribute with datetime.
timedelta, see here
• Changes to the .loc slicing API to conform with the behavior of .ix see here
• Changes to the default for ordering in the Categorical constructor, see here
• Enhancement to the .str accessor to make string operations easier, see here
• The pandas.tools.rplot, pandas.sandbox.qtpandas and pandas.rpy modules are deprecated.
We refer users to external packages like seaborn, pandas-qt and rpy2 for similar or equivalent functionality, see
here
Check the API Changes and deprecations before updating.
• New features
– DataFrame Assign
– Interaction with scipy.sparse
– String Methods Enhancements
– Other enhancements
• Backwards incompatible API changes
– Changes in Timedelta
– Indexing Changes
– Categorical Changes
– Other API Changes
– Deprecations
– Removal of prior version deprecations/changes
• Performance Improvements
• Bug Fixes
Inspired by dplyr’s mutate verb, DataFrame has a new assign() method. The function signature for assign is
simply **kwargs. The keys are the column names for the new fields, and the values are either a value to be inserted
(for example, a Series or NumPy array), or a function of one argument to be called on the DataFrame. The new
values are inserted, and the entire DataFrame (with all original and new columns) is returned.
In [2]: iris.head()
Out[2]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
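A minimal sketch of the first form described above, inserting a precomputed value on the iris frame:

iris.assign(sepal_ratio=iris['SepalWidth'] / iris['SepalLength']).head()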
Above was an example of inserting a precomputed value. We can also pass in a function to be evaluated.
In [4]: iris.assign(sepal_ratio = lambda x: (x['SepalWidth'] /
...: x['SepalLength'])).head()
...:
Out[4]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
The power of assign comes when used in chains of operations. For example, we can limit the DataFrame to just those rows with a Sepal Length greater than 5, calculate the ratio, and plot:
In [5]: (iris.query('SepalLength > 5')
...: .assign(SepalRatio = lambda x: x.SepalWidth / x.SepalLength,
...: PetalRatio = lambda x: x.PetalWidth / x.PetalLength)
...: .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))
...:
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20b9c060b8>
In [9]: s
Out[9]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
# SparseSeries
In [10]: ss = s.to_sparse()
In [11]: ss
Out[11]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)
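The conversion call itself is not shown here; a sketch consistent with the outputs that follow, using the SparseSeries.to_coo level arguments, would be:

A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
                             column_levels=['C', 'D'],
                             sort_labels=False)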
In [13]: A
Out[13]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [14]: A.todense()
In [15]: rows
Out[15]: [(1, 2), (1, 1), (2, 1)]
In [16]: columns
Out[16]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
The from_coo method is a convenience method for creating a SparseSeries from a scipy.sparse.
coo_matrix:
In [17]: from scipy import sparse
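A sketch of a scipy.sparse.coo_matrix consistent with the SparseSeries shown below (the values are inferred, not taken from the original):

A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
                      shape=(3, 4))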
In [19]: A
Out[19]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [20]: A.todense()
In [21]: ss = SparseSeries.from_coo(A)
In [22]: ss
Out[22]:
0 2 1.0
3 2.0
1 0 3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
• The following new methods are accessible via the .str accessor, applying the function to each value. This is intended to make them more consistent with standard methods on strings. (GH9282, GH9352, GH9386, GH9387, GH9439)
Methods
isalnum() isalpha() isdigit() isdigit() isspace()
islower() isupper() istitle() isnumeric() isdecimal()
find() rfind() ljust() rjust() zfill()
In [24]: s.str.isalpha()
Out[24]:
0 True
1 False
2 True
dtype: bool
In [25]: s.str.find('ab')
Out[25]:
0 0
• Series.str.pad() and Series.str.center() now accept fillchar option to specify filling char-
acter (GH9352)
In [26]: s = Series(['12', '300', '25'])
• Reindex now supports method='nearest' for frames or series with a monotonic increasing or decreasing
index (GH9258):
In [31]: df = pd.DataFrame({'x': range(5)})
This method is also exposed by the lower level Index.get_indexer and Index.get_loc methods.
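For illustration, a hedged sketch of nearest-label reindexing on the frame constructed above (the target values are hypothetical):

df.reindex([0.2, 1.8, 3.5], method='nearest')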
• The read_excel() function’s sheetname argument now accepts a list and None, to get multiple or all sheets
respectively. If more than one sheet is specified, a dictionary is returned. (GH9450)
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel('path_to_file.xls',sheetname=['Sheet1',3])
• Allow Stata files to be read incrementally with an iterator; support for long strings in Stata files. See the docs
here (GH9493:).
• Paths beginning with ~ will now be expanded to begin with the user’s home directory (GH9066)
• Added time interval selection in get_data_yahoo (GH9071)
• Added Timestamp.to_datetime64() to complement Timedelta.to_timedelta64() (GH9255)
• tseries.frequencies.to_offset() now accepts Timedelta as input (GH9064)
• Lag parameter was added to the autocorrelation method of Series, defaults to lag-1 autocorrelation (GH9192)
• Timedelta will now accept nanoseconds keyword in constructor (GH9273)
• SQL code now safely escapes table and column names (GH8986)
• Added auto-complete for Series.str.<tab>, Series.dt.<tab> and Series.cat.<tab>
(GH9322)
• Index.get_indexer now supports method='pad' and method='backfill' even for any target ar-
ray, not just monotonic targets. These methods also work for monotonic decreasing as well as monotonic
increasing indexes (GH9258).
• Index.asof now works on all index types (GH9258).
• A verbose argument has been added to io.read_excel(); it defaults to False. Set it to True to print sheet names as they are parsed. (GH9450)
• Added days_in_month (compatibility alias daysinmonth) property to Timestamp, DatetimeIndex,
Period, PeriodIndex, and Series.dt (GH9572)
• Added decimal option in to_csv to provide formatting for non-‘.’ decimal separators (GH781)
• Added normalize option for Timestamp to normalize to midnight (GH8794)
• Added example for DataFrame import to R using HDF5 file and rhdf5 library. See the documentation for
more (GH9636).
In v0.15.0 a new scalar type Timedelta was introduced, that is a sub-class of datetime.timedelta. Mentioned
here was a notice of an API change w.r.t. the .seconds accessor. The intent was to provide a user-friendly set of
accessors that give the ‘natural’ value for that unit, e.g. if you had a Timedelta('1 day, 10:11:12'), then
.seconds would return 12. However, this is at odds with the definition of datetime.timedelta, which defines
.seconds as 10 * 3600 + 11 * 60 + 12 == 36672.
So in v0.16.0, we are restoring the API to match that of datetime.timedelta. Further, the component values are
still available through the .components accessor. This affects the .seconds and .microseconds accessors,
and removes the .hours, .minutes, .milliseconds accessors. These changes affect TimedeltaIndex and
the Series .dt accessor as well. (GH9185, GH9139)
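The Timedelta used in the examples below is presumably constructed along these lines (inferred from the component values shown):

t = pd.Timedelta('1 days 10:11:12.100123')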
Previous Behavior
In [3]: t.days
Out[3]: 1
In [5]: t.microseconds
Out[5]: 123
New Behavior
In [34]: t.days
Out[34]: 1
In [35]: t.seconds
Out[35]: 36672
In [36]: t.microseconds
Out[36]: 100123
In [37]: t.components
Out[37]: Components(days=1, hours=10, minutes=11, seconds=12, milliseconds=100,
˓→microseconds=123, nanoseconds=0)
In [38]: t.components.seconds
Out[38]: 12
The behavior of a small sub-set of edge cases for using .loc have changed (GH8613). Furthermore we have improved
the content of the error messages that are raised:
• Slicing with .loc where the start and/or stop bound is not found in the index is now allowed; this previously
would raise a KeyError. This makes the behavior the same as .ix in this case. This change is only for
slicing, not when indexing with a single label.
In [39]: df = DataFrame(np.random.randn(5,4),
....: columns=list('ABCD'),
....: index=date_range('20130101',periods=5))
....:
In [40]: df
Out[40]:
A B C D
2013-01-01 -0.322795 0.841675 2.390961 0.076200
2013-01-02 -0.566446 0.036142 -2.074978 0.247792
2013-01-03 -0.897157 -0.136795 0.018289 0.755414
2013-01-04 0.215269 0.841009 -1.445810 -1.401973
2013-01-05 -0.100918 -0.548242 -0.144620 0.354020
In [41]: s = Series(range(5),[-2,-1,1,2,3])
In [42]: s
Previous Behavior
In [4]: df.loc['2013-01-02':'2013-01-10']
KeyError: 'stop bound [2013-01-10] is not in the [index]'
In [6]: s.loc[-10:3]
KeyError: 'start bound [-10] is not the [index]'
New Behavior
In [43]: df.loc['2013-01-02':'2013-01-10']
Out[43]:
A B C D
2013-01-02 -0.566446 0.036142 -2.074978 0.247792
2013-01-03 -0.897157 -0.136795 0.018289 0.755414
2013-01-04 0.215269 0.841009 -1.445810 -1.401973
2013-01-05 -0.100918 -0.548242 -0.144620 0.354020
In [44]: s.loc[-10:3]
Out[44]:
-2 0
-1 1
1 2
2 3
3 4
dtype: int64
• Allow slicing with float-like values on an integer index for .ix. Previously this was only enabled for .loc:
Previous Behavior
In [8]: s.ix[-1.0:2]
TypeError: the slice start value [-1.0] is not a proper indexer for this index
˓→type (Int64Index)
New Behavior
In [2]: s.ix[-1.0:2]
Out[2]:
-1 1
1 2
2 3
dtype: int64
• Provide a useful exception for indexing with an invalid type for that index when using .loc. For example
trying to use .loc on an index of type DatetimeIndex or PeriodIndex or TimedeltaIndex, with an
integer (or a float).
Previous Behavior
In [4]: df.loc[2:3]
KeyError: 'start bound [2] is not the [index]'
New Behavior
In [4]: df.loc[2:3]
TypeError: Cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex
˓→'> with <type 'int'> keys
In prior versions, Categoricals that had an unspecified ordering (meaning no ordered keyword was passed)
were defaulted as ordered Categoricals. Going forward, the ordered keyword in the Categorical constructor
will default to False. Ordering must now be explicit.
Furthermore, previously you could change the ordered attribute of a Categorical by just setting the attribute,
e.g. cat.ordered=True; This is now deprecated and you should use cat.as_ordered() or cat.
as_unordered(). These will by default return a new object and not modify the existing object. (GH9347,
GH9190)
Previous Behavior
In [4]: s
Out[4]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [5]: s.cat.ordered
Out[5]: True
In [7]: s
Out[7]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0, 1, 2]
New Behavior
In [46]: s
Out[46]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0, 1, 2]
In [47]: s.cat.ordered
Out[47]: False
In [48]: s = s.cat.as_ordered()
In [49]: s
Out[49]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [50]: s.cat.ordered
Out[50]: True
In [52]: s
Out[52]:
0 0
1 1
2 2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [53]: s.cat.ordered
Out[53]: True
For ease of creation of series of categorical data, we have added the ability to pass keywords when calling .
astype(). These are passed directly to the constructor.
In [54]: s = Series(["a","b","c","a"]).astype('category',ordered=True)
In [55]: s
Out[55]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a < b < c]
In [56]: s = Series(["a","b","c","a"]).astype('category',categories=list('abcdef'),
˓→ordered=False)
In [57]: s
Out[57]:
0 a
1 b
2 c
New Behavior. If the input dtypes are integral, the output dtype is also integral and the output values are the
result of the bitwise operation.
• During division involving a Series or DataFrame, 0/0 and 0//0 now give np.nan instead of np.inf.
(GH9144, GH8445)
Previous Behavior
In [3]: p / 0
Out[3]:
0 inf
1 inf
dtype: float64
In [4]: p // 0
Out[4]:
0 inf
1 inf
dtype: float64
New Behavior
In [55]: p / 0
Out[55]:
0 NaN
1 inf
dtype: float64
In [56]: p // 0
Out[56]:
0 NaN
1 inf
dtype: float64
• Series.value_counts and Series.describe for categorical data will now put NaN entries at the end. (GH9443)
• Series.describe for categorical data will now give counts and frequencies of 0, not NaN, for unused
categories (GH9443)
• Due to a bug fix, looking up a partial string label with DatetimeIndex.asof now includes values that
match the string, even if they are after the start of the partial string label (GH9258).
Old behavior:
Fixed behavior:
To reproduce the old behavior, simply add more precision to the label (e.g., use 2000-02-01 instead of
2000-02).
1.21.2.5 Deprecations
• The rplot trellis plotting interface is deprecated and will be removed in a future version. We refer to external packages like seaborn for similar but more refined functionality (GH3445). The documentation includes some examples of how to convert your existing code from rplot to seaborn: rplot docs.
• The pandas.sandbox.qtpandas interface is deprecated and will be removed in a future version. We refer
users to the external package pandas-qt. (GH9615)
• The pandas.rpy interface is deprecated and will be removed in a future version. Similar functionality can be accessed through the rpy2 project (GH9602)
• Adding DatetimeIndex/PeriodIndex to another DatetimeIndex/PeriodIndex is being depre-
cated as a set-operation. This will be changed to a TypeError in a future version. .union() should be used
for the union set operation. (GH9094)
• Subtracting DatetimeIndex/PeriodIndex from another DatetimeIndex/PeriodIndex is be-
ing deprecated as a set-operation. This will be changed to an actual numeric subtraction yielding a
TimeDeltaIndex in a future version. .difference() should be used for the differencing set operation.
(GH9094)
• DataFrame.pivot_table and crosstab’s rows and cols keyword arguments were removed in favor
of index and columns (GH6581)
• DataFrame.to_excel and DataFrame.to_csv cols keyword argument was removed in favor of
columns (GH6581)
• Removed convert_dummies in favor of get_dummies (GH6581)
• Removed value_range in favor of describe (GH6581)
• Fixed a performance regression for .loc indexing with an array or list-like (GH9126:).
• DataFrame.to_json 30x performance improvement for mixed dtype frames. (GH9037)
• Performance improvements in MultiIndex.duplicated by working with labels instead of values
(GH9125)
• Improved the speed of nunique by calling unique instead of value_counts (GH9129, GH7771)
• Performance improvement of up to 10x in DataFrame.count and DataFrame.dropna by taking advan-
tage of homogeneous/heterogeneous dtypes appropriately (GH9136)
• Performance improvement of up to 20x in DataFrame.count when using a MultiIndex and the level
keyword argument (GH9163)
• Performance and memory usage improvements in merge when key space exceeds int64 bounds (GH9151)
• Performance improvements in multi-key groupby (GH9429)
• Performance improvements in MultiIndex.sortlevel (GH9445)
• Performance and memory usage improvements in DataFrame.duplicated (GH9398)
• Bug in boxplot, scatter and hexbin plot may show an unnecessary warning (GH8877)
• Bug in subplot with layout kw may show unnecessary warning (GH9464)
• Bug in using grouper functions that need passed-through arguments (e.g. axis), when using a wrapped function (e.g. fillna) (GH9221)
• DataFrame now properly supports simultaneous copy and dtype arguments in constructor (GH9099)
• Bug in read_csv when using skiprows on a file with CR line endings with the c engine. (GH9079)
• isnull now detects NaT in PeriodIndex (GH9129)
• Bug in groupby .nth() with a multiple column groupby (GH8979)
• Bug in DataFrame.where and Series.where coerce numerics to string incorrectly (GH9280)
• Bug in DataFrame.where and Series.where raise ValueError when string list-like is passed.
(GH9280)
• Accessing Series.str methods with non-string values now raises TypeError instead of producing incorrect results (GH9184)
• Bug in DatetimeIndex.__contains__ when index has duplicates and is not monotonic increasing
(GH9512)
• Fixed division by zero error for Series.kurt() when all values are equal (GH9197)
• Fixed issue in the xlsxwriter engine where it added a default ‘General’ format to cells if no other format
was applied. This prevented other row or column formatting being applied. (GH9167)
• Fixes issue with index_col=False when usecols is also specified in read_csv. (GH9082)
• Bug where wide_to_long would modify the input stubnames list (GH9204)
• Bug in to_sql not storing float64 values using double precision. (GH9009)
• SparseSeries and SparsePanel now accept zero argument constructors (same as their non-sparse coun-
terparts) (GH9272).
• Regression in merging Categorical and object dtypes (GH9426)
• Bug in read_csv with buffer overflows with certain malformed input files (GH9205)
• Bug in groupby MultiIndex with missing pair (GH9049, GH9344)
• Fixed bug in Series.groupby where grouping on MultiIndex levels would ignore the sort argument
(GH9444)
• Fix bug in DataFrame.groupby where sort=False is ignored in the case of Categorical columns. (GH8868)
• Fixed bug with reading CSV files from Amazon S3 on python 3 raising a TypeError (GH9452)
• Bug in the Google BigQuery reader where the ‘jobComplete’ key may be present but False in the query results
(GH8728)
• Bug in Series.value_counts with excluding NaN for categorical-type Series with dropna=True (GH9443)
• Fixed missing numeric_only option for DataFrame.std/var/sem (GH9201)
• Support constructing Panel or Panel4D with scalar data (GH8285)
• Series text representation disconnected from max_rows/max_columns (GH7508).
In [2]: pd.options.display.max_rows = 10
In [3]: s = pd.Series([1,1,1,1,1,1,1,1,1,1,0.9999,1,1]*10)
In [4]: s
Out[4]:
0 1
1 1
2 1
...
127 0.9999
128 1.0000
129 1.0000
Length: 130, dtype: float64
New Behavior
0 1.0000
1 1.0000
2 1.0000
3 1.0000
4 1.0000
...
125 1.0000
126 1.0000
127 0.9999
128 1.0000
129 1.0000
dtype: float64
• A Spurious SettingWithCopy Warning was generated when setting a new item in a frame in some cases
(GH8730)
The following would previously report a SettingWithCopy Warning.
This is a minor release from 0.15.1 and includes a large number of bug fixes along with several new features, enhance-
ments, and performance improvements. A small number of API changes were necessary to fix existing bugs. We
recommend that all users upgrade to this version.
• Enhancements
• API Changes
• Performance Improvements
• Bug Fixes
• Indexing in MultiIndex beyond lex-sort depth is now supported, though a lexically sorted index will have better performance. (GH2646)
In [2]: df
Out[2]:
jolie
jim joe
0 x 0.123943
x 0.119381
1 z 0.738523
y 0.587304
In [3]: df.index.lexsort_depth
Out[3]: 1
In [4]: df.loc[(1,'z')]
Out[4]:
jolie
jim joe
1 z 0.738523
# lexically sorting
In [5]: df2 = df.sort_index()
In [6]: df2
Out[6]:
jolie
jim joe
0 x 0.123943
x 0.119381
1 y 0.587304
z 0.738523
In [7]: df2.index.lexsort_depth
Out[7]: 2
In [8]: df2.loc[(1,'z')]
Out[8]:
jolie
jim joe
1 z 0.738523
• Bug in unique of Series with category dtype, which returned all categories regardless of whether they were “used” or not (see GH8559 for the discussion). Previous behaviour was to return all categories:
In [4]: cat
Out[4]:
[a, b, a]
Categories (3, object): [a < b < c]
In [5]: cat.unique()
Out[5]: array(['a', 'b', 'c'], dtype=object)
Now, only the categories that do effectively occur in the array are returned:
In [10]: cat.unique()
Out[10]:
[a, b]
Categories (2, object): [a, b]
• Series.all and Series.any now support the level and skipna parameters. Series.all,
Series.any, Index.all, and Index.any no longer support the out and keepdims parameters, which
existed for compatibility with ndarray. Various index types no longer support the all and any aggregation
functions and will now raise TypeError. (GH8302).
• Allow equality comparisons of Series with a categorical dtype and object dtype; previously these would raise
TypeError (GH8938)
• Bug in NDFrame: conflicting attribute/column names now behave consistently between getting and setting.
Previously, when both a column and attribute named y existed, data.y would return the attribute, while
data.y = z would update the column (GH8994)
In [12]: data.y = 2
In [14]: data
Out[14]:
x y
0 1 2
1 2 4
2 3 6
Old behavior:
In [6]: data.y
Out[6]: 2
In [7]: data['y'].values
Out[7]: array([5, 5, 5])
New behavior:
In [16]: data.y
Out[16]: 5
In [17]: data['y'].values
Out[17]: array([2, 4, 6])
• Timestamp('now') is now equivalent to Timestamp.now() in that it returns the local time rather than
UTC. Also, Timestamp('today') is now equivalent to Timestamp.today() and both have tz as a
possible argument. (GH9000)
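A short sketch (the timezone is chosen only for illustration):

import pandas as pd

pd.Timestamp('now')                      # local time, like pd.Timestamp.now()
pd.Timestamp('today', tz='US/Eastern')   # tz may now be passed as well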
• Fix negative step support for label-based slices (GH8753)
Old behavior:
In [2]: s.loc['c':'a':-1]
Out[2]:
c 2
dtype: int64
New behavior:
In [19]: s.loc['c':'a':-1]
Out[19]:
c 2
b 1
a 0
dtype: int64
1.22.2 Enhancements
Categorical enhancements:
• Added ability to export Categorical data to Stata (GH8633). See here for limitations of categorical variables
exported to Stata data files.
• Added flag order_categoricals to StataReader and read_stata to select whether to order im-
ported categorical data (GH8836). See here for more information on importing categorical variables from Stata
data files.
• Added ability to export Categorical data to/from HDF5 (GH7621). Queries work the same as if it were an object array. However, the category dtyped data is stored in a more efficient manner. See here for an example and caveats w.r.t. prior versions of pandas.
• Added support for searchsorted() on Categorical class (GH8420).
Other enhancements:
• Added the ability to specify the SQL type of columns when writing a DataFrame to a database (GH8778). For
example, specifying to use the sqlalchemy String type instead of the default Text type for string columns:
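A short sketch (the engine, table name, and column names are hypothetical):

import pandas as pd
from sqlalchemy import create_engine, String

df = pd.DataFrame({'Col_1': ['a', 'b'], 'Col_2': [1, 2]})
engine = create_engine('sqlite://')                        # in-memory database, for illustration
df.to_sql('data_typed', engine, dtype={'Col_1': String})   # store Col_1 as VARCHAR rather than TEXT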
• Series.all and Series.any now support the level and skipna parameters (GH8302):
In [21]: s.any(level=0)
Out[21]:
0 True
1 False
dtype: bool
• Panel now supports the all and any aggregation functions. (GH8302):
In [23]: p.all()
Out[23]:
0 1 2 3
0 True True True True
1 True True False True
2 True True True True
3 True True False True
4 True True True True
1.22.3 Performance
• Performance boost for to_datetime conversions with a passed format= and exact=False (GH8904)
• Bug in concat of Series with category dtype which were coercing to object. (GH8641)
• Bug in Timestamp-Timestamp not returning a Timedelta type and datelike-datelike ops with timezones
(GH8865)
• Made a timezone mismatch exception consistent (either tz operated with None or incompatible timezone); it will now raise TypeError rather than ValueError (a couple of edge cases only) (GH8865)
• Bug in using a pd.Grouper(key=...) with no level/axis or level only (GH8795, GH8866)
• Report a TypeError when invalid/no parameters are passed in a groupby (GH8015)
• Bug in packaging pandas with py2app/cx_Freeze (GH8602, GH8831)
• Bug in groupby signatures that didn’t include *args or **kwargs (GH8733).
• io.data.Options now raises RemoteDataError when no expiry dates are available from Yahoo and
when it receives no data from Yahoo (GH8761), (GH8783).
• Unclear error message in csv parsing when passing dtype and names and the parsed data is a different data type
(GH8833)
• Bug in slicing a multi-index with an empty list and at least one boolean indexer (GH8781)
• io.data.Options now raises RemoteDataError when no expiry dates are available from Yahoo
(GH8761).
• Timedelta kwargs may now be numpy ints and floats (GH8757).
• Fixed several outstanding bugs for Timedelta arithmetic and comparisons (GH8813, GH5963, GH5436).
• sql_schema now generates dialect appropriate CREATE TABLE statements (GH8697)
• slice string method now takes step into account (GH8754)
• Bug in BlockManager where setting values with different type would break block integrity (GH8850)
• Bug in DatetimeIndex when using time object as key (GH8667)
• Bug in merge where how='left' and sort=False would not preserve left frame order (GH7331)
• Bug in MultiIndex.reindex where reindexing at level would not reorder labels (GH4088)
• Bug in certain operations with dateutil timezones, manifesting with dateutil 2.3 (GH8639)
• Regression in DatetimeIndex iteration with a Fixed/Local offset timezone (GH8890)
• Bug in to_datetime when parsing nanoseconds using the %f format (GH8989)
• Fix: The font size was only set on x axis if vertical or the y axis if horizontal. (GH8765)
• Fixed division by 0 when reading big csv files in python 3 (GH8621)
• Bug in outputting a MultiIndex with to_html, index=False which would add an extra column (GH8452)
• Imported categorical variables from Stata files retain the ordinal information in the underlying data (GH8836).
• Defined .size attribute across NDFrame objects to provide compat with numpy >= 1.9.1; buggy with np.
array_split (GH8846)
This is a minor bug-fix release from 0.15.0 and includes a small number of API changes, several new features, en-
hancements, and performance improvements along with a large number of bug fixes. We recommend that all users
upgrade to this version.
• Enhancements
• API Changes
• Bug Fixes
• s.dt.hour and other .dt accessors will now return np.nan for missing values (rather than previously -1),
(GH8689)
In [1]: s = Series(date_range('20130101',periods=5,freq='D'))
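One of the values is then made missing; the omitted step presumably looked something like:

s.iloc[2] = np.nan   # np is numpy; this produces the NaT shown below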
In [3]: s
Out[3]:
0 2013-01-01
1 2013-01-02
2 NaT
3 2013-01-04
4 2013-01-05
dtype: datetime64[ns]
previous behavior:
In [6]: s.dt.hour
Out[6]:
0 0
1 0
2 -1
3 0
4 0
dtype: int64
current behavior:
In [4]: s.dt.hour
Out[4]:
0 0.0
1 0.0
2 NaN
3 0.0
4 0.0
dtype: float64
• groupby with as_index=False will not add erroneous extra columns to result (GH8582):
In [5]: np.random.seed(2718281)
In [7]: df.head()
Out[7]:
jim joe
0 61 81
1 96 49
2 55 65
3 72 51
4 77 12
previous behavior:
current behavior:
• groupby will not erroneously exclude columns if the column name conflicts with the grouper name (GH8112):
In [11]: df
Out[11]:
jim joe
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
In [4]: gr.apply(sum)
Out[4]:
joe
jim
False 24
True 11
current behavior:
In [13]: gr.apply(sum)
Out[13]:
jim joe
jim
False 9 24
True 1 11
• Support for slicing with monotonic decreasing indexes, even if start or stop is not found in the index
(GH7860):
In [15]: s
Out[15]:
4 a
previous behavior:
In [8]: s.loc[3.5:1.5]
KeyError: 3.5
current behavior:
In [16]: s.loc[3.5:1.5]
Out[16]:
3 b
2 c
dtype: object
• io.data.Options has been fixed for a change in the format of the Yahoo Options page (GH8612),
(GH8741)
Note: As a result of a change in Yahoo’s option page layout, when an expiry date is given, Options methods
now return data for a single expiry date. Previously, methods returned all data for the selected month.
The month and year parameters have been undeprecated and can be used to get all options data for a given
month.
If an expiry date that is not valid is given, data for the next expiry after the given date is returned.
Option data frames are now saved on the instance as callsYYMMDD or putsYYMMDD. Previously they were
saved as callsMMYY and putsMMYY. The next expiry is saved as calls and puts.
New features:
– The expiry parameter can now be a single date or a list-like object containing dates.
– A new property expiry_dates was added, which returns all available expiry dates.
Current behavior:
In [17]: from pandas.io.data import Options
In [19]: aapl.get_call_data().iloc[0:5,0:1]
Out[19]:
Last
Strike Expiry Type Symbol
80 2014-11-14 call AAPL141114C00080000 29.05
84 2014-11-14 call AAPL141114C00084000 24.80
85 2014-11-14 call AAPL141114C00085000 24.05
86 2014-11-14 call AAPL141114C00086000 22.76
87 2014-11-14 call AAPL141114C00087000 21.74
In [20]: aapl.expiry_dates
Out[20]:
[datetime.date(2014, 11, 14),
In [21]: aapl.get_near_stock_price(expiry=aapl.expiry_dates[0:3]).iloc[0:5,0:1]
Out[21]:
Last
Strike Expiry Type Symbol
109 2014-11-22 call AAPL141122C00109000 1.48
2014-11-28 call AAPL141128C00109000 1.79
110 2014-11-14 call AAPL141114C00110000 0.55
2014-11-22 call AAPL141122C00110000 1.02
2014-11-28 call AAPL141128C00110000 1.32
• pandas now also registers the datetime64 dtype in matplotlib’s units registry to plot such values as datetimes.
This is activated once pandas is imported. In previous versions, plotting an array of datetime64 values would have resulted in plotted integer values. To keep the previous behaviour, you can do del matplotlib.
units.registry[np.datetime64] (GH8614).
1.23.2 Enhancements
• concat permits a wider variety of iterables of pandas objects to be passed as the first parameter (GH8645):
previous behavior:
current behavior:
• Represent MultiIndex labels with a dtype that utilizes memory based on the level size. In prior versions,
the memory usage was a constant 8 bytes per element in each level. In addition, in prior versions, the reported
memory usage was incorrect as it didn’t show the usage for the memory occupied by the underlying data array.
(GH8456)
previous behavior:
current behavior:
In [22]: dfi.memory_usage(index=True)
Out[22]:
Index 52080
A 8000
dtype: int64
• Bug in comparing Categorical of datetime raising when being compared to a scalar datetime (GH8687)
• Bug in selecting from a Categorical with .iloc (GH8623)
• Bug in groupby-transform with a Categorical (GH8623)
• Bug in duplicated/drop_duplicates with a Categorical (GH8623)
• Bug in Categorical reflected comparison operator raising if the first argument was a numpy array scalar
(e.g. np.int64) (GH8658)
• Bug in Panel indexing with a list-like (GH8710)
• Compat issue in DataFrame.dtypes when options.mode.use_inf_as_null is True (GH8722)
• Bug in read_csv, dialect parameter would not take a string (GH8703)
• Bug in slicing a multi-index level with an empty-list (GH8737)
• Bug in numeric index operations of add/sub with Float/Index Index with numpy arrays (GH8608)
• Bug in setitem with empty indexer and unwanted coercion of dtypes (GH8669)
• Bug in ix/loc block splitting on setitem (manifests with integer-like dtypes, e.g. datetime64) (GH8607)
• Bug when doing label based indexing with integers not found in the index for non-unique but monotonic indexes
(GH8680).
• Bug when indexing a Float64Index with np.nan on numpy 1.7 (GH8980).
• Fix shape attribute for MultiIndex (GH8609)
• Bug in GroupBy where a name conflict between the grouper and columns would break groupby operations
(GH7115, GH8112)
• Fixed a bug where plotting a column y and specifying a label would mutate the index name of the original
DataFrame (GH8494)
• Fix regression in plotting of a DatetimeIndex directly with matplotlib (GH8614).
• Bug in date_range where partially-specified dates would incorporate current date (GH6961)
• Bug in Setting by indexer to a scalar value with a mixed-dtype Panel4d was failing (GH8702)
• Bug where DataReader would fail if one of the symbols passed was invalid. Now returns data for valid symbols and np.nan for invalid (GH8494)
• Bug in get_quote_yahoo that wouldn’t allow non-float return values (GH5229).
This is a major release from 0.14.1 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Warning: pandas >= 0.15.0 will no longer support compatibility with NumPy versions < 1.7.0. If you want to
use the latest versions of pandas, please upgrade to NumPy >= 1.7.0 (GH7711)
• Highlights include:
– The Categorical type was integrated as a first-class pandas type, see here
– New scalar type Timedelta, and a new index type TimedeltaIndex, see here
– New datetimelike properties accessor .dt for Series, see Datetimelike Properties
– New DataFrame default display for df.info() to include memory usage, see Memory Usage
– read_csv will now by default ignore blank lines when parsing, see here
– API change in using Indexes in set operations, see here
– Enhancements in the handling of timezones, see here
– A lot of improvements to the rolling and expanding moment functions, see here
– Internal refactoring of the Index class to no longer sub-class ndarray, see Internal Refactoring
– dropping support for PyTables less than version 3.0.0, and numexpr less than version 2.1 (GH7990)
– Split indexing documentation into Indexing and Selecting Data and MultiIndex / Advanced Indexing
– Split out string methods documentation into Working with Text Data
• Check the API Changes and deprecations before updating
• Other Enhancements
• Performance Improvements
• Bug Fixes
Warning: In 0.15.0 Index has internally been refactored to no longer sub-class ndarray but instead subclass
PandasObject, similarly to the rest of the pandas objects. This change allows very easy sub-classing and
creation of new index types. This should be a transparent change with only very limited API implications (See the
Internal Refactoring)
Warning: The refactorings in Categorical changed the two argument constructor from “codes/labels and
levels” to “values and levels (now called ‘categories’)”. This can lead to subtle bugs. If you use Categorical
directly, please audit your code before updating to this pandas version and change it to use the from_codes()
constructor. See more on Categorical here
Categorical can now be included in Series and DataFrames and gained new methods to manipulate. Thanks to Jan
Schulz for much of this API/implementation. (GH3943, GH5313, GH5314, GH7444, GH7839, GH7848, GH7864,
GH7914, GH7768, GH8006, GH3678, GH8075, GH8076, GH8143, GH8453, GH8518).
For full docs, see the categorical introduction and the API documentation.
In [1]: df = DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e
˓→']})
In [3]: df["grade"]
Out[3]:
0 a
1 b
In [6]: df["grade"]
Out[6]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [7]: df.sort_values("grade")
Out[7]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
In [8]: df.groupby("grade").size()
Out[8]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
1.24.1.2 TimedeltaIndex/Scalar
We introduce a new scalar type Timedelta, which is a subclass of datetime.timedelta, and behaves in a
similar manner, but allows compatibility with np.timedelta64 types as well as a host of custom representation,
parsing, and attributes. This type is very similar to how Timestamp works for datetimes. It is a nice-API box for
the type. See the docs. (GH3009, GH4533, GH8209, GH8187, GH8190, GH7869, GH7661, GH8345, GH8471)
Warning: Timedelta scalars (and TimedeltaIndex) component fields are not the same as the component
fields on a datetime.timedelta object. For example, .seconds on a datetime.timedelta object
returns the total number of seconds combined between hours, minutes and seconds. In contrast, the pandas
Timedelta breaks out hours, minutes, microseconds and nanoseconds separately.
# Timedelta accessor
In [9]: tds = Timedelta('31 days 5 min 3 sec')
In [10]: tds.minutes
Out[10]: 5L
In [11]: tds.seconds
Out[11]: 3L
# datetime.timedelta accessor
# this is 5 minutes * 60 + 3 seconds
In [12]: tds.to_pytimedelta().seconds
Out[12]: 303
Note: this is no longer true starting from v0.16.0, where full compatibility with datetime.timedelta is
introduced. See the 0.16.0 whatsnew entry
Warning: Prior to 0.15.0 pd.to_timedelta would return a Series for list-like/Series input, and a np.
timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, Series for Series
input, and Timedelta for scalar input.
The arguments to pd.to_timedelta are now (arg,unit='ns',box=True,coerce=False), previ-
ously were (arg,box=True,unit='ns') as these are more logical.
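A minimal sketch of the new return types (the inputs are illustrative, not taken from the docs):
pd.to_timedelta('1 days 06:05:01.00003')        # scalar input -> Timedelta
pd.to_timedelta(['1 days', '2 days'])           # list-like input -> TimedeltaIndex
pd.to_timedelta(Series(['1 days', '2 days']))   # Series input -> Series of timedelta64[ns]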
Construct a scalar
In [9]: Timedelta('1 days 06:05:01.00003')
Out[9]: Timedelta('1 days 06:05:01.000030')
In [10]: Timedelta('15.5us')
Out[10]: Timedelta('0 days 00:00:00.000015')
# a NaT
In [13]: Timedelta('nan')
Out[13]: NaT
In [15]: td.seconds
Out[15]: 3780
In [16]: td.microseconds
Out[16]: 16
In [17]: td.nanoseconds
Out[17]: 500
Construct a TimedeltaIndex
In [21]: s = Series(np.arange(5),
....: index=timedelta_range('1 days',periods=5,freq='s'))
....:
In [22]: s
Out[22]:
1 days 00:00:00 0
1 days 00:00:01 1
1 days 00:00:02 2
1 days 00:00:03 3
1 days 00:00:04 4
Freq: S, dtype: int64
Finally, the combination of TimedeltaIndex with DatetimeIndex allows certain combination operations that
are NaT preserving:
In [26]: tdi.tolist()
Out[26]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]
In [28]: dti.tolist()
Out[28]:
[Timestamp('2013-01-01 00:00:00', freq='D'),
Timestamp('2013-01-02 00:00:00', freq='D'),
Timestamp('2013-01-03 00:00:00', freq='D')]
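The combining operation itself is not shown; a minimal sketch, assuming the tdi and dti displayed above:
(dti + tdi).tolist()      # the NaT entry propagates through the addition
# [Timestamp('2013-01-02 00:00:00'), NaT, Timestamp('2013-01-05 00:00:00')]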
Implemented methods to find memory usage of a DataFrame. See the FAQ for more. (GH6852).
A new display option display.memory_usage (see Options and Settings) sets the default behavior of the
memory_usage argument in the df.info() method. By default display.memory_usage is True.
In [32]: n = 5000
In [34]: df = DataFrame(data)
In [36]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
int64 5000 non-null int64
float64 5000 non-null float64
datetime64[ns] 5000 non-null datetime64[ns]
timedelta64[ns] 5000 non-null timedelta64[ns]
complex128 5000 non-null complex128
object 5000 non-null object
bool 5000 non-null bool
categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
Additionally, memory_usage() is an available method for a DataFrame object which returns the memory usage of
each column.
In [37]: df.memory_usage(index=True)
Out[37]:
Index 80
int64 40000
float64 40000
datetime64[ns] 40000
timedelta64[ns] 40000
complex128 80000
object 40000
bool 5000
categorical 10920
dtype: int64
Series has gained an accessor to succinctly return datetime-like properties for the values of the Series, if it is a
datetime/period-like Series. (GH7207) This will return a Series, indexed like the existing Series. See the docs
# datetime
In [38]: s = Series(date_range('20130101 09:10:12',periods=4))
In [39]: s
Out[39]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]
In [40]: s.dt.hour
Out[40]:
0 9
1 9
2 9
3 9
dtype: int64
In [41]: s.dt.second
Out[41]:
0 12
1 12
2 12
3 12
dtype: int64
In [42]: s.dt.day
Out[42]:
0 1
1 2
2 3
3 4
dtype: int64
In [43]: s.dt.freq
Out[43]: 'D'
In [46]: stz
Out[46]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
In [47]: stz.dt.tz
Out[47]: <DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
In [48]: s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[48]:
0 2013-01-01 04:10:12-05:00
1 2013-01-02 04:10:12-05:00
2 2013-01-03 04:10:12-05:00
3 2013-01-04 04:10:12-05:00
dtype: datetime64[ns, US/Eastern]
# period
In [49]: s = Series(period_range('20130101',periods=4,freq='D'))
In [50]: s
Out[50]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [51]: s.dt.year
Out[51]:
0 2013
1 2013
2 2013
3 2013
dtype: int64
In [52]: s.dt.day
Out[52]:
0 1
1 2
2 3
3 4
dtype: int64
# timedelta
In [53]: s = Series(timedelta_range('1 day 00:00:05',periods=4,freq='s'))
In [54]: s
Out[54]:
0 1 days 00:00:05
1 1 days 00:00:06
2 1 days 00:00:07
3 1 days 00:00:08
In [55]: s.dt.days
Out[55]:
0 1
1 1
2 1
3 1
dtype: int64
In [56]: s.dt.seconds
Out[56]:
0 5
1 6
2 7
3 8
dtype: int64
In [57]: s.dt.components
Out[57]:
• tz_localize(None) for tz-aware Timestamp and DatetimeIndex now removes timezone holding
local time, previously this resulted in Exception or TypeError (GH7812)
In [59]: ts
Out[59]: Timestamp('2014-08-01 09:00:00-0400', tz='US/Eastern')
In [60]: ts.tz_localize(None)
Out[60]: Timestamp('2014-08-01 09:00:00')
In [62]: didx
Out[62]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00', '2014-08-01 12:00:00-04:00',
'2014-08-01 13:00:00-04:00', '2014-08-01 14:00:00-04:00',
'2014-08-01 15:00:00-04:00', '2014-08-01 16:00:00-04:00',
'2014-08-01 17:00:00-04:00', '2014-08-01 18:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
• tz_localize now accepts the ambiguous keyword which allows for passing an array of bools indicating
whether the date belongs in DST or not, ‘NaT’ for setting transition times to NaT, ‘infer’ for inferring DST/non-
DST, and ‘raise’ (default) for an AmbiguousTimeError to be raised. See the docs for more details (GH7943)
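A minimal sketch of the new keyword (the index values are illustrative, covering the repeated hour of the US/Eastern DST fall-back):
ambig = DatetimeIndex(['2011-11-06 00:00', '2011-11-06 01:00',
                       '2011-11-06 01:00', '2011-11-06 02:00'])
ambig.tz_localize('US/Eastern', ambiguous='infer')                     # infer DST from the ordering
ambig.tz_localize('US/Eastern', ambiguous=[True, True, False, False])  # explicit DST flags per element
ambig.tz_localize('US/Eastern', ambiguous='NaT')                       # ambiguous times become NaT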
• DataFrame.tz_localize and DataFrame.tz_convert now accepts an optional level argument
for localizing a specific level of a MultiIndex (GH7846)
• Timestamp.tz_localize and Timestamp.tz_convert now raise TypeError in error cases, rather
than Exception (GH8025)
• a timeseries/index localized to UTC when inserted into a Series/DataFrame will preserve the UTC timezone
(rather than being a naive datetime64[ns]) as object dtype (GH8411)
• Timestamp.__repr__ displays dateutil.tz.tzoffset info (GH7907)
• rolling_window() now normalizes the weights properly in rolling mean mode (mean=True) so that the
calculated weighted means (e.g. ‘triang’, ‘gaussian’) are distributed about the same means as those calculated
without weighting (i.e. ‘boxcar’). See the note on normalization for further details. (GH7618)
• Removed center argument from all expanding_ functions (see list), as the results produced when
center=True did not make much sense. (GH7925)
• Added optional ddof argument to expanding_cov() and rolling_cov(). The default value of 1 is
backwards-compatible. (GH8279)
• Documented the ddof argument to expanding_var(), expanding_std(), rolling_var(), and
rolling_std(). These functions’ support of a ddof argument (with a default value of 1) was previously
undocumented. (GH8064)
• ewma(), ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now interpret min_periods
in the same manner that the rolling_*() and expanding_*() functions do: a given result entry will be
NaN if the (expanding, in this case) window does not contain at least min_periods values. The previous
behavior was to set to NaN the min_periods entries starting with the first non- NaN value. (GH7977)
Prior behavior (note values start at index 2, which is min_periods after index 0 (the index of the first non-
empty value)):
New behavior (note values start at index 4, the location of the 2nd (since min_periods=2) non-empty value):
• ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now have an optional adjust argu-
ment, just like ewma() does, affecting how the weights are calculated. The default value of adjust is True,
which is backwards-compatible. See Exponentially weighted moment functions for details. (GH7911)
• ewma(), ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now have an optional
ignore_na argument. When ignore_na=False (the default), missing values are taken into account in
the weights calculation. When ignore_na=True (which reproduces the pre-0.15.0 behavior), missing values
are ignored in the weights calculation. (GH7543)
Out[8]:
0 1.0
1 1.0
2 5.2
dtype: float64
Warning: By default (ignore_na=False) the ewm*() functions’ weights calculation in the presence
of missing values is different than in pre-0.15.0 versions. To reproduce the pre-0.15.0 calculation of weights
in the presence of missing values one must specify explicitly ignore_na=True.
Note that entry 0 is approximately 0, and the debiasing factors are a constant 1.25. By comparison, the following
0.15.0 results have a NaN for entry 0, and the debiasing factors are decreasing (towards 1.25):
• Added support for a chunksize parameter to to_sql function. This allows DataFrame to be written in
chunks and avoid packet-size overflow errors (GH8062).
• Added support for a chunksize parameter to read_sql function. Specifying this argument will return an
iterator through chunks of the query result (GH2908).
• Added support for writing datetime.date and datetime.time object columns with to_sql (GH6932).
• Added support for specifying a schema to read from/write to with read_sql_table and to_sql
(GH7441, GH7952). For example:
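A minimal sketch, assuming a SQLAlchemy engine and an existing schema named other_schema:
df.to_sql('data', engine, schema='other_schema')          # write into a specific schema
pd.read_sql_table('data', engine, schema='other_schema')  # read it back from that schema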
API changes related to the introduction of the Timedelta scalar (see above for more details):
• Prior to 0.15.0 to_timedelta() would return a Series for list-like/Series input, and a np.
timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, Series for
Series input, and Timedelta for scalar input.
For API changes related to the rolling and expanding functions, see detailed overview above.
Other notable API changes:
• Consistency when indexing with .loc and a list-like indexer when no values are found.
In [68]: df = DataFrame([['a'],['b']],index=[1,2])
In [69]: df
Out[69]:
0
1 a
2 b
In [3]: df.loc[[1,3]]
Out[3]:
0
1 a
3 NaN
In [4]: df.loc[[1,3],:]
Out[4]:
0
1 a
3 NaN
In [70]: p = Panel(np.arange(2*3*4).reshape(2,3,4),
....: items=['ItemA','ItemB'],
....: major_axis=[1,2,3],
....: minor_axis=['A','B','C','D'])
....:
In [71]: p
Out[71]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemB
Major_axis axis: 1 to 3
Minor_axis axis: A to D
In [5]:
Out[5]:
ItemA ItemD
1 3 NaN
2 7 NaN
3 11 NaN
Furthermore, .loc will raise if no values are found in a multi-index with a list-like indexer:
In [72]: s = Series(np.arange(3,dtype='int64'),
....: index=MultiIndex.from_product([['A'],['foo','bar','baz']],
....: names=['one','two'])
....: ).sort_index()
....:
In [73]: s
Out[73]:
one two
A bar 1
baz 2
foo 0
dtype: int64
In [74]: try:
....: s.loc[['D']]
....: except KeyError as e:
....: print("KeyError: " + str(e))
....:
KeyError: "['D'] not in index"
• Assigning values to None now considers the dtype when choosing an ‘empty’ value (GH7941).
Previously, assigning to None in numeric containers changed the dtype to object (or errored, depending on the
call). It now uses NaN:
In [77]: s
Out[77]:
0 NaN
1 2.0
2 3.0
dtype: float64
In [80]: s
Out[80]:
0 None
1 b
2 c
dtype: object
To insert a NaN, you must explicitly use np.nan. See the docs.
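The assignment cells that produce the two outputs above are not shown; a plausible reconstruction (the exact original values are assumed):
s = Series([1, 2, 3])
s.loc[0] = None           # numeric dtype: None is stored as NaN (see Out[77])

s = Series(["a", "b", "c"])
s.loc[0] = None           # object dtype: None is inserted as-is (see Out[80])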
• In prior versions, updating a pandas object inplace would not reflect in other python references to this object.
(GH8511, GH5104)
In [81]: s = Series([1, 2, 3])
In [82]: s2 = s
In [83]: s += 1.5
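The example stops before showing the effect; under the new behaviour both names see the update (a sketch of the expected result):
s.tolist()     # [2.5, 3.5, 4.5]
s2.tolist()    # [2.5, 3.5, 4.5] -- the other reference reflects the in-place update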
• Made both the C-based and Python engines for read_csv and read_table ignore empty lines in input as well as
whitespace-filled lines, as long as sep is not whitespace. This is an API change that can be controlled by the
keyword parameter skip_blank_lines. See the docs (GH4466)
• A timeseries/index localized to UTC when inserted into a Series/DataFrame will preserve the UTC timezone
and inserted as object dtype rather than being converted to a naive datetime64[ns] (GH8411).
• Bug in passing a DatetimeIndex with a timezone that was not being retained in DataFrame construction
from a dict (GH7822)
In prior versions this would drop the timezone, now it retains the timezone, but gives a column of object
dtype:
In [87]: i
Out[87]:
DatetimeIndex(['2011-01-01 00:00:00-05:00', '2011-01-01 00:00:10-05:00',
'2011-01-01 00:00:20-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='10S')
In [89]: df
Out[89]:
a
0 2011-01-01 00:00:00-05:00
1 2011-01-01 00:00:10-05:00
2 2011-01-01 00:00:20-05:00
In [90]: df.dtypes
Out[90]:
a datetime64[ns, US/Eastern]
dtype: object
Previously this would have yielded a column of datetime64 dtype, but without timezone info.
The behaviour of assigning a column to an existing dataframe as df[‘a’] = i remains unchanged (this already
returned an object column with a timezone).
• When passing multiple levels to stack(), it will now raise a ValueError when the levels aren’t all level
names or all level numbers (GH7660). See Reshaping by stacking and unstacking.
• Raise a ValueError in df.to_hdf with ‘fixed’ format, if df has non-unique columns as the resulting file
will be broken (GH7761)
• SettingWithCopy raise/warnings (according to the option mode.chained_assignment) will now be
issued when setting a value on a sliced mixed-dtype DataFrame using chained-assignment. (GH7845, GH7950)
• merge, DataFrame.merge, and ordered_merge now return the same type as the left argument
(GH7737).
• Previously an enlargement with a mixed-dtype frame would act unlike .append which will preserve dtypes
(related GH2578, GH8176):
In [92]: df
Out[92]:
female fitness
0 True 1
1 False 2
In [93]: df.dtypes
Out[93]:
female bool
fitness int64
dtype: object
In [95]: df
Out[95]:
female fitness
0 True 1
1 False 2
2 False 2
In [96]: df.dtypes
Out[96]:
female bool
fitness int64
dtype: object
• Series.to_csv() now returns a string when path=None, matching the behaviour of DataFrame.
to_csv() (GH8215).
• read_hdf now raises IOError when a file that doesn’t exist is passed in. Previously, a new, empty file was
created, and a KeyError raised (GH7715).
• DataFrame.info() now ends its output with a newline character (GH8114)
• Concatenating no objects will now raise a ValueError rather than a bare Exception.
• Merge errors will now be sub-classes of ValueError rather than raw Exception (GH8501)
• DataFrame.plot and Series.plot keywords now have consistent orders (GH8037)
In 0.15.0 Index has internally been refactored to no longer sub-class ndarray but instead subclass
PandasObject, similarly to the rest of the pandas objects. This change allows very easy sub-classing and cre-
ation of new index types. This should be a transparent change with only very limited API implications (GH5080,
GH7439, GH7796, GH8024, GH8367, GH7997, GH8522):
• you may need to unpickle pandas version < 0.15.0 pickles using pd.read_pickle rather than pickle.
load. See pickle docs
• when plotting with a PeriodIndex, the matplotlib internal axes will now be arrays of Period rather than a
PeriodIndex (this is similar to how a DatetimeIndex passes arrays of datetimes now)
• MultiIndexes will now raise similarly to other pandas objects w.r.t. truth testing, see here (GH7897).
• When plotting a DatetimeIndex directly with matplotlib’s plot function, the axis labels will no longer be format-
ted as dates but as integers (the internal representation of a datetime64). UPDATE This is fixed in 0.15.1,
see here.
1.24.2.3 Deprecations
• The Categorical labels and levels attributes are deprecated and renamed to codes and
categories.
• The outtype argument to pd.DataFrame.to_dict has been deprecated in favor of orient. (GH7840)
• The convert_dummies method has been deprecated in favor of get_dummies (GH8140)
• The infer_dst argument in tz_localize will be deprecated in favor of ambiguous to allow for more
flexibility in dealing with DST transitions. Replace infer_dst=True with ambiguous='infer' for the
same behavior (GH7943). See the docs for more details.
• The top-level pd.value_range has been deprecated and can be replaced by .describe() (GH8481)
• The Index set operations + and - were deprecated in order to provide these for numeric type operations on
certain index types. + can be replaced by .union() or |, and - by .difference(). Further the method
name Index.diff() is deprecated and can be replaced by Index.difference() (GH8226)
# +
Index(['a','b','c']) + Index(['b','c','d'])
# should be replaced by
Index(['a','b','c']).union(Index(['b','c','d']))
# -
Index(['a','b','c']) - Index(['b','c','d'])
# should be replaced by
Index(['a','b','c']).difference(Index(['b','c','d']))
• The infer_types argument to read_html() now has no effect and is deprecated (GH7762, GH7032).
1.24.3 Enhancements
In [98]: df.describe(include=["object"])
Out[98]:
catA catB
count 24 24
unique 2 4
top foo d
freq 16 6
In [100]: df.describe(include='all')
Out[100]:
catA catB numC numD
count 24 24 24.000000 24.000000
unique 2 4 NaN NaN
top foo d NaN NaN
freq 16 6 NaN NaN
mean NaN NaN 11.500000 12.000000
std NaN NaN 7.071068 7.071068
min NaN NaN 0.000000 0.500000
25% NaN NaN 5.750000 6.250000
50% NaN NaN 11.500000 12.000000
75% NaN NaN 17.250000 17.750000
max NaN NaN 23.000000 23.500000
Without those arguments, describe will behave as before, including only numerical columns or, if none are,
only categorical columns. See also the docs
• Added split as an option to the orient argument in pd.DataFrame.to_dict. (GH7840)
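A minimal illustration of the new orientation (the frame is made up, not from the docs):
data = DataFrame({'a': [1, 2], 'b': [3, 4]})
data.to_dict(orient='split')
# {'index': [0, 1], 'columns': ['a', 'b'], 'data': [[1, 3], [2, 4]]}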
• The get_dummies method can now be used on DataFrames. By default only categorical columns are encoded
as 0’s and 1’s, while other columns are left untouched.
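The construction of the frame used in In [102] is not shown; a plausible reconstruction consistent with Out[102] (the exact original cell is assumed):
df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'], 'C': [1, 2, 3]})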
In [102]: pd.get_dummies(df)
Out[102]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
# get the first, 4th, and last date index for each month
In [105]: df.groupby((df.index.year, df.index.month)).nth([0, 3, -1])
Out[105]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
In [107]: idx
Out[107]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]', freq='H')
In [111]: idx
Out[111]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]', freq='M')
• Added experimental compatibility with openpyxl for versions >= 2.0. The DataFrame.to_excel method
engine keyword now recognizes openpyxl1 and openpyxl2 which will explicitly require openpyxl v1
and v2 respectively, failing if the requested version is not available. The openpyxl engine is now a meta-
engine that automatically uses whichever version of openpyxl is installed. (GH7177)
• Index.isin now supports a level argument to specify which index level to use for membership tests
(GH7892, GH7890)
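The index construction and the isin call are not shown (only the idx.values output below remains); a plausible reconstruction:
idx = MultiIndex.from_product([[0, 1], ['a', 'b', 'c']])
idx.isin(['a', 'c', 'e'], level=1)
# array([ True, False,  True,  True, False,  True], dtype=bool)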
In [2]: idx.values
Out[2]: array([(0, 'a'), (0, 'b'), (0, 'c'), (1, 'a'), (1, 'b'), (1, 'c')], dtype=object)
In [119]: idx
Out[119]: Int64Index([1, 2, 3, 4, 1, 2], dtype='int64')
In [120]: idx.duplicated()
Out[120]: array([False, False, False, False, True, True], dtype=bool)
In [121]: idx.drop_duplicates()
Out[121]: Int64Index([1, 2, 3, 4], dtype='int64')
• add copy=True argument to pd.concat to enable pass thru of complete blocks (GH8252)
• Added support for numpy 1.8+ data types (bool_, int_, float_, string_) for conversion to R dataframe
(GH8400)
1.24.4 Performance
• Bug in StataReader which did not read variable labels in 117 files due to difference between Stata docu-
mentation and implementation (GH7816)
• Bug in StataReader where strings were always converted to 244 characters-fixed width irrespective of un-
derlying string size (GH7858)
• Bug in DataFrame.plot and Series.plot may ignore rot and fontsize keywords (GH7844)
• Bug in DatetimeIndex.value_counts doesn’t preserve tz (GH7735)
• Bug in PeriodIndex.value_counts results in Int64Index (GH7735)
• Bug in DataFrame.join when doing left join on index and there are multiple matches (GH5391)
• Bug in GroupBy.transform() where int groups with a transform that didn’t preserve the index were in-
correctly truncated (GH7972).
• Bug in groupby where callable objects without name attributes would take the wrong path, and produce a
DataFrame instead of a Series (GH7929)
• Bug in groupby error message when a DataFrame grouping column is duplicated (GH7511)
• Bug in read_html where the infer_types argument forced coercion of date-likes incorrectly (GH7762,
GH7032).
• Bug in Series.str.cat with an index which was filtered as to not include the first item (GH7857)
• Bug in Timestamp cannot parse nanosecond from string (GH7878)
• Bug in Timestamp with string offset and tz results incorrect (GH7833)
• Bug in tslib.tz_convert and tslib.tz_convert_single may return different results (GH7798)
• Bug in DatetimeIndex.intersection of non-overlapping timestamps with tz raises IndexError
(GH7880)
• Bug in alignment with TimeOps and non-unique indexes (GH8363)
• Bug in GroupBy.filter() where fast path vs. slow path made the filter return a non scalar value that
appeared valid but wasn’t (GH7870).
• Bug in date_range()/DatetimeIndex() when the timezone was inferred from input dates yet incorrect
times were returned when crossing DST boundaries (GH7835, GH7901).
• Bug in to_excel() where a negative sign was being prepended to positive infinity and was absent for negative
infinity (GH7949)
• Bug in area plot draws legend with incorrect alpha when stacked=True (GH8027)
• Period and PeriodIndex addition/subtraction with np.timedelta64 results in incorrect internal rep-
resentations (GH7740)
• Bug in Holiday with no offset or observance (GH7987)
• Bug in DataFrame.to_latex formatting when columns or index is a MultiIndex (GH7982).
• Bug in DateOffset around Daylight Savings Time produces unexpected results (GH5175).
• Bug in DataFrame.shift where empty columns would throw ZeroDivisionError on numpy 1.7
(GH8019)
• Bug in installation where html_encoding/*.html wasn’t installed and therefore some tests were not run-
ning correctly (GH7927).
• Bug in read_html where bytes objects were not tested for in _read (GH7927).
• Bug in DataFrame.stack() when one of the column levels was a datelike (GH8039)
• Bug in NDFrame.loc indexing when row/column names were lost when target was a list/ndarray (GH6552)
• Regression in NDFrame.loc indexing when rows/columns were converted to Float64Index if target was an
empty list/ndarray (GH7774)
• Bug in Series that allows it to be indexed by a DataFrame which has unexpected results. Such indexing is
no longer permitted (GH8444)
• Bug in item assignment of a DataFrame with multi-index columns where right-hand-side columns were not
aligned (GH7655)
• Suppress FutureWarning generated by NumPy when comparing object arrays containing NaN for equality
(GH7065)
• Bug in DataFrame.eval() where the dtype of the not operator (~) was not correctly inferred as bool.
This is a minor release from 0.14.0 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
• Highlights include:
– New methods select_dtypes() to select columns based on the dtype and sem() to calculate the
standard error of the mean.
– Support for dateutil timezones (see docs).
– Support for ignoring full line comments in the read_csv() text parser.
– New documentation section on Options and Settings.
– Lots of bug fixes.
• Enhancements
• API Changes
• Performance Improvements
• Experimental Changes
• Bug Fixes
• Openpyxl now raises a ValueError on construction of the openpyxl writer instead of warning on pandas import
(GH7284).
• For StringMethods.extract, when no match is found, the result - only containing NaN values - now also
has dtype=object instead of float (GH7242)
• Period objects no longer raise a TypeError when compared using == with another object that isn’t a
Period. Instead when comparing a Period with another object using == if the other object isn’t a Period
False is returned. (GH7376)
• Previously, the behaviour on resetting the time or not in offsets.apply, rollforward and rollback
operations differed between offsets. With the support of the normalize keyword for all offsets (see
below) with a default value of False (preserve time), the behaviour changed for certain offsets
(BusinessMonthBegin, MonthEnd, BusinessMonthEnd, CustomBusinessMonthEnd, BusinessYearBegin,
LastWeekOfMonth, FY5253Quarter, LastWeekOfMonth, Easter):
Starting from 0.14.1 all offsets preserve time by default. The old behaviour can be obtained with
normalize=True
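The definition of d is not shown; a plausible value consistent with the outputs below (assumed, not the original cell):
import datetime
d = datetime.datetime(2014, 1, 1, 9, 0)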
# new behaviour
In [1]: d + offsets.MonthEnd()
Out[1]: Timestamp('2014-01-31 09:00:00')
In [2]: d + offsets.MonthEnd(normalize=True)
Out[2]: Timestamp('2014-01-31 00:00:00')
Note that for the other offsets the default behaviour did not change.
• Add back #N/A N/A as a default NA value in text parsing, (regression from 0.12) (GH5521)
• Raise a TypeError on inplace-setting with a .where and a non np.nan value as this is inconsistent with a
set-item expression like df[mask] = None (GH7656)
1.25.2 Enhancements
• Add NotImplementedError for simultaneous use of chunksize and nrows for read_csv() (GH6774).
• Tests for basic reading of public S3 buckets now exist (GH7281).
• read_html now sports an encoding argument that is passed to the underlying parser library. You can use
this to read non-ascii encoded web pages (GH7323).
• read_excel now supports reading from URLs in the same way that read_csv does. (GH6809)
• Support for dateutil timezones, which can now be used in the same way as pytz timezones across pandas.
(GH4688)
In [8]: rng = date_range('3/6/2012 00:00', periods=10, freq='D',
...: tz='dateutil/Europe/London')
...:
In [9]: rng.tz
Out[9]: tzfile('/usr/share/zoneinfo/Europe/London')
1.25.3 Performance
• Improvements in dtype inference for numeric operations, yielding performance gains for the dtypes
int64, timedelta64, and datetime64 (GH7223)
• Improvements in Series.transform for significant performance gains (GH6496)
• Improvements in DataFrame.transform with ufuncs and built-in grouper functions for significant performance
gains (GH7383)
• Regression in groupby aggregation of datetime64 dtypes (GH7555)
• Improvements in MultiIndex.from_product for large iterables (GH7627)
1.25.4 Experimental
• pandas.io.data.Options has a new method, get_all_data, and now consistently returns a
multi-indexed DataFrame (GH5602)
• io.gbq.read_gbq and io.gbq.to_gbq were refactored to remove the dependency on the Google
bq.py command line client. This submodule now uses httplib2 and the Google apiclient and
oauth2client API client libraries which should be more stable and, therefore, reliable than bq.py. See the
docs. (GH6937).
• Bug in DataFrame.where with a symmetric shaped frame and a passed other of a DataFrame (GH7506)
• Bug in Panel indexing with a multi-index axis (GH7516)
• Regression in datetimelike slice indexing with a duplicated index and non-exact end-points (GH7523)
• Bug in setitem with list-of-lists and single vs mixed types (GH7551)
• Bug in timeops with non-aligned Series (GH7500)
• Bug in timedelta inference when assigning an incomplete Series (GH7592)
• Bug in groupby .nth with a Series and integer-like column name (GH7559)
• Bug in Series.get with a boolean accessor (GH7407)
• Bug in value_counts where NaT did not qualify as missing (NaN) (GH7423)
• Bug in to_timedelta that accepted invalid units and misinterpreted ‘m/h’ (GH7611, GH6423)
• Bug in line plot doesn’t set correct xlim if secondary_y=True (GH7459)
• Bug in grouped hist and scatter plots use old figsize default (GH7394)
• Bug in plotting subplots with DataFrame.plot, hist clears passed ax even if the number of subplots is
one (GH7391).
• Bug in plotting subplots with DataFrame.boxplot with by kw raises ValueError if the number of
subplots exceeds 1 (GH7391).
• Bug in subplots displays ticklabels and labels in different rule (GH5897)
• Bug in Panel.apply with a multi-index as an axis (GH7469)
• Bug in DatetimeIndex.insert doesn’t preserve name and tz (GH7299)
• Bug in DatetimeIndex.asobject doesn’t preserve name (GH7299)
• Bug in multi-index slicing with datetimelike ranges (strings and Timestamps), (GH7429)
• Bug in Index.min and max doesn’t handle nan and NaT properly (GH7261)
• Bug in PeriodIndex.min/max results in int (GH7609)
• Bug in resample where fill_method was ignored if you passed how (GH2073)
• Bug in TimeGrouper doesn’t exclude column specified by key (GH7227)
• Bug in DataFrame and Series bar and barh plot raises TypeError when bottom and left keyword is
specified (GH7226)
• Bug in DataFrame.hist raises TypeError when it contains non numeric column (GH7277)
• Bug in Index.delete does not preserve name and freq attributes (GH7302)
• Bug in DataFrame.query()/eval where local string variables with the @ sign were being treated as
temporaries attempting to be deleted (GH7300).
• Bug in Float64Index which didn’t allow duplicates (GH7149).
• Bug in DataFrame.replace() where truthy values were being replaced (GH7140).
• Bug in StringMethods.extract() where a single match group Series would use the matcher’s name
instead of the group name (GH7313).
• Bug in isnull() when mode.use_inf_as_null == True where isnull wouldn’t test True when it
encountered an inf/-inf (GH7315).
This is a major release from 0.13.1 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
• Highlights include:
– Officially support Python 3.4
– SQL interfaces updated to use sqlalchemy, See Here.
– Display interface changes, See Here
– MultiIndexing Using Slicers, See Here.
– Ability to join a singly-indexed DataFrame with a multi-indexed DataFrame, see Here
– More consistency in groupby results and more flexible groupby specifications, See Here
– Holiday calendars are now supported in CustomBusinessDay, see Here
– Several improvements in plotting functions, including: hexbin, area and pie plots, see Here.
– Performance doc section on I/O operations, See Here
• Other Enhancements
• API Changes
• Text Parsing API Changes
• Groupby API Changes
• Performance Improvements
• Prior Deprecations
• Deprecations
• Known Issues
• Bug Fixes
Warning: In 0.14.0 all NDFrame based containers have undergone significant internal refactoring. Before that
each block of homogeneous data had its own labels and extra care was necessary to keep those in sync with the
parent container’s labels. This should not have any visible user/API behavior changes (GH6745)
In [2]: dfl
Out[2]:
A B
0 1.583584 -0.438313
1 -0.402537 -0.780572
2 -0.141685 0.542241
3 0.370966 -0.251642
4 0.787484 1.666563
In [3]: dfl.iloc[:,2:3]
Out[3]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [4]: dfl.iloc[:,1:3]
Out[4]:
B
0 -0.438313
1 -0.780572
2 0.542241
3 -0.251642
4 1.666563
In [5]: dfl.iloc[4:6]
Out[5]:
A B
4 0.787484 1.666563
dfl.iloc[[4,5,6]]
IndexError: positional indexers are out-of-bounds
dfl.iloc[:,4]
IndexError: single positional indexer is out-of-bounds
• Slicing with negative start, stop & step values handles corner cases better (GH6531):
– df.iloc[:-len(df)] is now empty
– df.iloc[len(df)::-1] now enumerates all elements in reverse
• The DataFrame.interpolate() keyword downcast default has been changed from infer to None.
This is to preserve the original dtype unless explicitly requested otherwise (GH6290).
• When converting a dataframe to HTML it used to return Empty DataFrame. This special case has been removed,
instead a header with the column names is returned (GH6062).
• Series and Index now internally share more common operations, e.g. factorize(), nunique(),
value_counts() are now supported on Index types as well. The Series.weekday property
is removed from Series for API consistency. Using a DatetimeIndex/PeriodIndex method on a Series
will now raise a TypeError. (GH4551, GH4056, GH5519, GH6380, GH7206).
• Add is_month_start, is_month_end, is_quarter_start, is_quarter_end,
is_year_start, is_year_end accessors for DateTimeIndex / Timestamp which return a
boolean array of whether the timestamp(s) are at the start/end of the month/quarter/year defined by the
frequency of the DateTimeIndex / Timestamp (GH4565, GH6998)
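A minimal sketch of the new accessors (the dates are illustrative):
idx = date_range('2014-01-30', periods=4, freq='D')    # Jan 30, Jan 31, Feb 1, Feb 2
idx.is_month_end      # array([False,  True, False, False])
idx.is_month_start    # array([False, False,  True, False])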
• Local variable usage has changed in pandas.eval()/DataFrame.eval()/DataFrame.query()
(GH5987). For the DataFrame methods, two things have changed
– Column names are now given precedence over locals
– Local variables must be referred to explicitly. This means that even if you have a local variable that is not
a column you must still refer to it with the '@' prefix.
– You can have an expression like df.query('@a < a') with no complaints from pandas about
ambiguity of the name a.
– The top-level pandas.eval() function does not allow you use the '@' prefix and provides you with
an error message telling you so.
– NameResolutionError was removed because it isn’t necessary anymore.
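A short sketch of the new rules (the frame and the local variable are illustrative):
a = 5
df = DataFrame({'a': [1, 4, 9]})
df.query('a > @a')    # the column 'a' on the left, the local variable a via '@' on the right
df.query('@a < a')    # the equivalent expression; the two uses of the name a do not collide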
• Define and document the order of column vs index names in query/eval (GH6676)
• concat will now concatenate mixed Series and DataFrames using the Series name or numbering columns as
needed (GH2385). See the docs
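A minimal illustration (the objects are made up); the Series name becomes its column label in the result:
s = Series([1, 2], name='s1')
df = DataFrame({'a': [3, 4]})
pd.concat([df, s], axis=1)
#    a  s1
# 0  3   1
# 1  4   2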
• Slicing and advanced/boolean indexing operations on Index classes as well as Index.delete() and
Index.drop() methods will no longer change the type of the resulting index (GH6440, GH7040)
In [7]: i[[0,1,2]]
Out[7]: Index([1, 2, 3], dtype='object')
Previously, the above operation would return Int64Index. If you’d like to do this manually, use Index.
astype()
In [9]: i[[0,1,2]].astype(np.int_)
Out[9]: Int64Index([1, 2, 3], dtype='int64')
• set_index no longer converts MultiIndexes to an Index of tuples. For example, the old behavior returned an
Index in this case (GH6459):
In [11]: df_multi.set_index(tuple_ind)
Out[11]:
0 1
(a, c) 0.471435 -1.190976
(a, d) 1.432707 -0.312652
(b, c) -0.720589 0.887163
(b, d) 0.859588 -0.636524
# New behavior
In [12]: mi
Out[12]:
In [13]: df_multi.set_index(mi)
Out[13]:
0 1
a c 0.471435 -1.190976
d 1.432707 -0.312652
b c -0.720589 0.887163
d 0.859588 -0.636524
• pairwise keyword was added to the statistical moment functions rolling_cov, rolling_corr,
ewmcov, ewmcorr, expanding_cov, expanding_corr to allow the calculation of moving window
covariance and correlation matrices (GH4950). See Computing rolling pairwise covariances and correlations
in the docs.
In [1]: df = DataFrame(np.random.randn(10,4),columns=list('ABCD'))
In [5]: covs[df.index[-1]]
Out[5]:
B C D
A 0.035310 0.326593 -0.505430
B 0.137748 -0.006888 -0.005383
C -0.006888 0.861040 0.020762
• Series.iteritems() is now lazy (returns an iterator rather than a list). This was the documented behavior
prior to 0.14. (GH6760)
• Added nunique and value_counts functions to Index for counting unique elements. (GH6734)
• stack and unstack now raise a ValueError when the level keyword refers to a non-unique item in the
Index (previously raised a KeyError). (GH6738)
• drop unused order argument from Series.sort; args now are in the same order as Series.order; add
na_position arg to conform to Series.order (GH6847)
• default sorting algorithm for Series.order is now quicksort, to conform with Series.sort (and
numpy defaults)
• add inplace keyword to Series.order/sort to make them inverses (GH6859)
• DataFrame.sort now places NaNs at the beginning or end of the sort according to the na_position
parameter. (GH3917)
• accept TextFileReader in concat, which was affecting a common user idiom (GH6583), this was a
regression from 0.13.1
• Added factorize functions to Index and Series to get indexer and unique values (GH7090)
• describe on a DataFrame with a mix of Timestamp and string like objects returns a different Index (GH7088).
Previously the index was unintentionally sorted.
• Arithmetic operations with only bool dtypes now give a warning indicating that they are evaluated in Python
space for +, -, and * operations and raise for all others (GH7011, GH6762, GH7015, GH7210)
• In HDFStore, select_as_multiple will always raise a KeyError, when a key or the selector is not
found (GH6177)
• df['col'] = value and df.loc[:,'col'] = value are now completely equivalent; previously the
.loc would not necessarily coerce the dtype of the resultant series (GH6149)
• dtypes and ftypes now return a series with dtype=object on empty containers (GH5740)
• df.to_csv will now return a string of the CSV data if neither a target path nor a buffer is provided (GH6061)
• pd.infer_freq() will now raise a TypeError if given an invalid Series/Index type (GH6407,
GH6463)
• A tuple passed to DataFrame.sort_index will be interpreted as the levels of the index, rather than requiring
a list of tuples (GH4370)
• all offset operations now return Timestamp types (rather than datetime), Business/Week frequencies were
incorrect (GH4069)
• to_excel now converts np.inf into a string representation, customizable by the inf_rep keyword argu-
ment (Excel has no native inf representation) (GH6782)
• Replace pandas.compat.scipy.scoreatpercentile with numpy.percentile (GH6810)
• .quantile on a datetime[ns] series now returns Timestamp instead of np.datetime64 objects
(GH6810)
• change AssertionError to TypeError for invalid types passed to concat (GH6583)
• Raise a TypeError when DataFrame is passed an iterator as the data argument (GH5357)
• The default way of printing large DataFrames has changed. DataFrames exceeding max_rows and/or
max_columns are now displayed in a centrally truncated view, consistent with the printing of a pandas.
Series (GH5603).
In previous versions, a DataFrame was truncated once the dimension constraints were reached and an ellipsis
(...) signaled that part of the data was cut off.
In the current version, large DataFrames are centrally truncated, showing a preview of head and tail in both
dimensions.
• allow option 'truncate' for display.show_dimensions to only show the dimensions if the frame is
truncated (GH6547).
The default for display.show_dimensions will now be truncate. This is consistent with how Series
display length.
• Regression in the display of a MultiIndexed Series with display.max_rows is less than the length of the
series (GH7101)
• Fixed a bug in the HTML repr of a truncated Series or DataFrame not showing the class name with the large_repr
set to ‘info’ (GH7105)
• The verbose keyword in DataFrame.info(), which controls whether to shorten the info representation,
is now None by default. This will follow the global setting in display.max_info_columns. The global
setting can be overridden with verbose=True or verbose=False.
• Fixed a bug with the info repr not honoring the display.max_info_columns setting (GH6939)
• Offset/freq info now in Timestamp __repr__ (GH4553)
read_csv()/read_table() will now be noisier w.r.t. invalid options rather than falling back to the
PythonParser.
• Raise ValueError when sep specified with delim_whitespace=True in
read_csv()/read_table() (GH6607)
• Raise ValueError when engine='c' specified with unsupported options in
read_csv()/read_table() (GH6607)
• Raise ValueError when fallback to python parser causes options to be ignored (GH6607)
• Produce ParserWarning on fallback to python parser when no options are ignored (GH6607)
• Translate sep='\s+' to delim_whitespace=True in read_csv()/read_table() if no other C-
unsupported options specified (GH6607)
In [20]: g = df.groupby('A')
In [23]: g[['B']].head(1)
Out[23]:
B
0 2
2 6
• groupby nth now reduces by default; filtering can be achieved by passing as_index=False. With an
optional dropna argument to ignore NaN. See the docs.
Reducing
In [25]: g = df.groupby('A')
In [26]: g.nth(0)
Out[26]:
B
A
1 NaN
5 6.0
B
A
1 4.0
5 6.0
Filtering
In [29]: gf = df.groupby('A',as_index=False)
In [30]: gf.nth(0)
Out[30]:
A B
0 1 NaN
2 5 6.0
• groupby will now not return the grouped column for non-cython functions (GH5610, GH5614, GH6732), as its
already the index
In [32]: df = DataFrame([[1, np.nan], [1, 4], [5, 6], [5, 8]], columns=['A', 'B'])
In [33]: g = df.groupby('A')
In [34]: g.count()
Out[34]:
B
A
1 1
5 2
In [35]: g.describe()
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\Out[35]:
B
count mean std min 25% 50% 75% max
A
1 1.0 4.0 NaN 4.0 4.0 4.0 4.0 4.0
5 2.0 7.0 1.414214 6.0 6.5 7.0 7.5 8.0
• passing as_index will leave the grouped column in-place (this is not a change in 0.14.0)
In [36]: df = DataFrame([[1, np.nan], [1, 4], [5, 6], [5, 8]], columns=['A', 'B'])
In [37]: g = df.groupby('A',as_index=False)
In [38]: g.count()
Out[38]:
A B
0 1 1
1 5 2
In [39]: g.describe()
Out[39]:
      A                                          B
  count mean  std  min  25%  50%  75%  max  count mean  std  min  25%  50%  75%  max
0   2.0  1.0  0.0  1.0  1.0  1.0  1.0  1.0    1.0  4.0  NaN  4.0  4.0  4.0  4.0  4.0
• Allow specification of a more complex groupby via pd.Grouper, such as grouping by a Time and a string
field simultaneously. See the docs. (GH3794)
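A minimal sketch (assuming a frame with a datetime column named Date and a Buyer column):
df.groupby([pd.Grouper(freq='1M', key='Date'), 'Buyer']).sum()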
• Better propagation/preservation of Series names when performing groupby operations:
– SeriesGroupBy.agg will ensure that the name attribute of the original series is propagated to the
result (GH6265).
– If the function provided to GroupBy.apply returns a named series, the name of the series will be kept as
the name of the column index of the DataFrame returned by GroupBy.apply (GH6124). This facilitates
DataFrame.stack operations where the name of the column index is used as the name of the inserted
column containing the pivoted data.
1.26.5 SQL
The SQL reading and writing functions now support more database flavors through SQLAlchemy (GH2717, GH4163,
GH5950, GH6292). All databases supported by SQLAlchemy can be used, such as PostgreSQL, MySQL, Oracle,
Microsoft SQL server (see documentation of SQLAlchemy on included dialects).
The functionality of providing DBAPI connection objects will only be supported for sqlite3 in the future. The
'mysql' flavor is deprecated.
The new functions read_sql_query() and read_sql_table() are introduced. The function read_sql()
is kept as a convenience wrapper around the other two and will delegate to specific function depending on the provided
input (database table name or sql query).
In practice, you have to provide a SQLAlchemy engine to the sql functions. To connect with SQLAlchemy you use
the create_engine() function to create an engine object from database URI. You only need to create the engine
once per database you are connecting to. For an in-memory sqlite database:
This engine can then be used to write or read data to/from this database:
You can read data from a database by specifying the table name:
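The example itself is not shown; a minimal sketch with illustrative names:
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')         # engine for an in-memory sqlite database
df.to_sql('db_table', engine)                        # write a DataFrame to the database
pd.read_sql_table('db_table', engine)                # read it back by table name
pd.read_sql_query('SELECT * FROM db_table', engine)  # or via an SQL query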
Warning: Some of the existing functions or function aliases have been deprecated and will be removed in future
versions. This includes: tquery, uquery, read_frame, frame_query, write_frame.
Warning: The support for the ‘mysql’ flavor when using DBAPI connection objects has been deprecated. MySQL
will be further supported with SQLAlchemy engines (GH6900).
In 0.14.0 we added a new way to slice multi-indexed objects. You can slice a multi-index by providing multiple
indexers.
You can provide any of the selectors as if you are indexing by label, see Selection by Label, including slices, lists of
labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the deeper levels,
they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.
See the docs See also issues (GH6134, GH4036, GH3057, GH2598, GH5641, GH7106)
Warning: You should specify all axes in the .loc specifier, meaning the indexer for the index and for the
columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing both
axes, rather than into, say, the MultiIndex for the rows.
You should do this:
df.loc[(slice('A1','A3'),.....),:]
Warning: You will need to make sure that the selection axes are fully lexsorted!
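The construction of index, columns and the idx slicer object used below is not shown; a plausible reconstruction consistent with the outputs (the helper and exact labels are assumed):
def mklbl(prefix, n):
    # build labels like A0, A1, ... A{n-1}
    return ["%s%s" % (prefix, i) for i in range(n)]

index = MultiIndex.from_product([mklbl('A', 4), mklbl('B', 2),
                                 mklbl('C', 4), mklbl('D', 2)])
columns = MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'),
                                  ('b', 'foo'), ('b', 'bah')],
                                 names=['lvl0', 'lvl1'])
idx = pd.IndexSlice    # used below to build the tuple-based slicers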
In [49]: df = DataFrame(np.arange(len(index)*len(columns)).reshape((len(index),len(columns))),
   ....:                index=index,
   ....:                columns=columns).sort_index().sort_index(axis=1)
   ....:
In [50]: df
Out[50]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25 24 27 26
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 233 232 235 234
D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
In [53]: df.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
Out[53]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
D1 44 46
C3 D0 56 58
... ... ...
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
It is possible to perform quite complicated selections using this method on multiple axes at the same time.
In [54]: df.loc['A1',(slice(None),'foo')]
Out[54]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
D1 84 86
C3 D0 88 90
... ... ...
B1 C0 D1 100 102
C1 D0 104 106
D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
In [55]: df.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
Out[55]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
D1 44 46
C3 D0 56 58
... ... ...
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
Using a boolean indexer you can provide selection related to the values.
In [56]: mask = df[('a','foo')]>200
In [57]: df.loc[idx[mask,:,['C1','C3']],idx[:,'foo']]
Out[57]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
You can also specify the axis argument to .loc to interpret the passed slicers on a single axis.
In [58]: df.loc(axis=0)[:,:,['C1','C3']]
Out[58]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
D1 45 44 47 46
C3 D0 57 56 59 58
... ... ... ... ...
A3 B0 C1 D1 205 204 207 206
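The assignment producing df2 below is not shown; a plausible reconstruction (assumed): the rows selected by the slicer are set to -10.
df2 = df.copy()
df2.loc(axis=0)[:, :, ['C1', 'C3']] = -10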
In [61]: df2
Out[61]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 -10 -10 -10 -10
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
In [64]: df2
Out[64]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25000 24000 27000 26000
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 233000 232000 235000 234000
1.26.7 Plotting
• Hexagonal bin plots from DataFrame.plot with kind='hexbin' (GH5478), See the docs.
• DataFrame.plot and Series.plot now supports area plot with specifying kind='area' (GH6656),
See the docs
• Pie plots from Series.plot and DataFrame.plot with kind='pie' (GH6976), See the docs.
• Plotting with Error Bars is now supported in the .plot method of DataFrame and Series objects (GH3796,
GH6834), See the docs.
• DataFrame.plot and Series.plot now support a table keyword for plotting matplotlib.Table,
See the docs. The table keyword can receive the following values.
– False: Do nothing (default).
– True: Draw a table using the data of the DataFrame or Series that called the plot method. Data will be
transposed to meet matplotlib’s default layout.
– DataFrame or Series: Draw matplotlib.table using the passed data. The data will be drawn as
displayed in print method (not transposed automatically). Also, helper function pandas.tools.
plotting.table is added to create a table from DataFrame and Series, and add it to an
matplotlib.Axes.
• plot(legend='reverse') will now reverse the order of legend labels for most plot kinds. (GH6014)
• Line plot and area plot can be stacked by stacked=True (GH6656)
• Following keywords are now acceptable for DataFrame.plot() with kind='bar' and kind='barh':
– width: Specify the bar width. In previous versions, static value 0.5 was passed to matplotlib and it cannot
be overwritten. (GH6604)
– align: Specify the bar alignment. Default is center (different from matplotlib). In previous versions,
pandas passed align='edge' to matplotlib and adjusted the location to center by itself, with the result
that the align keyword was not applied as expected. (GH4525)
– position: Specify relative alignments for bar plot layout. From 0 (left/bottom-end) to 1(right/top-end).
Default is 0.5 (center). (GH6604)
Because of the default align value change, coordinates of bar plots are now located on integer values (0.0, 1.0,
2.0 . . . ). This is intended to make bar plots be located on the same coordinates as line plots. However, bar plots
may differ unexpectedly when you manually adjust the bar location or drawing area, such as using set_xlim,
set_ylim, etc. In these cases, please modify your script to meet the new coordinates.
• The parallel_coordinates() function now takes argument color instead of colors. A
FutureWarning is raised to alert that the old colors argument will not be supported in a future release.
(GH6956)
• The parallel_coordinates() and andrews_curves() functions now take positional argument
frame instead of data. A FutureWarning is raised if the old data argument is used by name. (GH6956)
There are prior version deprecations that are taking effect as of 0.14.0.
• Remove DateRange in favor of DatetimeIndex (GH6816)
• Remove column keyword from DataFrame.sort (GH4370)
• Remove precision keyword from set_eng_float_format() (GH395)
• Remove force_unicode keyword from DataFrame.to_string(), DataFrame.to_latex(), and
DataFrame.to_html(); these functions encode in unicode by default (GH2224, GH2225)
• Remove nanRep keyword from DataFrame.to_csv() and DataFrame.to_string() (GH275)
• Remove unique keyword from HDFStore.select_column() (GH3256)
• Remove inferTimeRule keyword from Timestamp.offset() (GH391)
• Remove name keyword from get_data_yahoo() and get_data_google() ( commit b921d1a )
• Remove offset keyword from DatetimeIndex constructor ( commit 3136390 )
• Remove time_rule from several rolling-moment statistical functions, such as rolling_sum() (GH1042)
• Removed neg - boolean operations on numpy arrays in favor of inv ~, as this is going to be deprecated in numpy
1.9 (GH6960)
1.26.9 Deprecations
Out[1]: 1
In [2]: Series(1,np.arange(5)).iloc[3.0]
pandas/core/index.py:469: FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point
Out[2]: 1
In [3]: Series(1,np.arange(5)).iloc[3.0:4]
pandas/core/index.py:527: FutureWarning: slice indexers when using iloc should be integers and not floating point
Out[3]:
3 1
dtype: int64
In [5]: Series(1,np.arange(5.))[3.0]
Out[6]: 1
1.26.11 Enhancements
• DataFrame and Series will create a MultiIndex object if passed a tuples dict, See the docs (GH3323)
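A minimal illustration (the values are made up): a dict keyed by tuples yields a MultiIndex:
Series({('a', 'x'): 1, ('a', 'y'): 2, ('b', 'x'): 3})
# a  x    1
#    y    2
# b  x    3
# dtype: int64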
In [68]: household
Out[68]:
male wealth
household_id
1 0 196087.3
2 1 316478.7
3 0 294750.0
....: "gb00b03mlx29","lu0197800237",
˓→"nl0000289965",np.nan],
....: ).set_index(['household_id','asset_id'])
....:
In [70]: portfolio
Out[70]:
name share
household_id asset_id
1 nl0000301109 ABN Amro 1.00
2 nl0000289783 Robeco 0.40
gb00b03mlx29 Royal Dutch Shell 0.60
3 gb00b03mlx29 Royal Dutch Shell 0.15
lu0197800237 AAB Eastern Europe Equity Fund 0.60
nl0000289965 Postbank BioTech Fonds 0.25
4 NaN NaN 1.00
• quotechar, doublequote, and escapechar can now be specified when using DataFrame.to_csv
(GH5414, GH4528)
• Partially sort by only the specified levels of a MultiIndex with the sort_remaining boolean kwarg.
(GH3984)
• Added to_julian_date to Timestamp and DatetimeIndex. The Julian Date is used primarily in
astronomy and represents the number of days from noon, January 1, 4713 BC. Because nanoseconds are used
to define the time in pandas the actual range of dates that you can use is 1678 AD to 2262 AD. (GH4041)
• DataFrame.to_stata will now check data for compatibility with Stata data types and will upcast when
needed. When it is not possible to losslessly upcast, a warning is issued (GH6327)
• DataFrame.to_stata and StataWriter will accept keyword arguments time_stamp and data_label
which allow the time stamp and dataset label to be set when creating a file. (GH6545)
• pandas.io.gbq now handles reading unicode strings properly. (GH5940)
• Holidays Calendars are now available and can be used with the CustomBusinessDay offset (GH6719)
• Float64Index is now backed by a float64 dtype ndarray instead of an object dtype array (GH6471).
• Implemented Panel.pct_change (GH6904)
• Added how option to rolling-moment functions to dictate how to handle resampling; rolling_max() de-
faults to max, rolling_min() defaults to min, and all others default to mean (GH6297)
• CustomBusinessMonthBegin and CustomBusinessMonthEnd are now available (GH6866)
In [73]: df = DataFrame({
....: 'Branch' : 'A A A A A B'.split(),
....: 'Buyer': 'Carl Mark Carl Carl Joe Joe'.split(),
....: 'Quantity': [1, 3, 5, 1, 8, 1],
....: 'Date' : [datetime.datetime(2013,11,1,13,0), datetime.datetime(2013,9,1,13,5),
....:
In [74]: df
Out[74]:
Branch Buyer Quantity Date PayDay
0 A Carl 1 2013-11-01 13:00:00 2013-10-04 00:00:00
1 A Mark 3 2013-09-01 13:05:00 2013-10-15 13:05:00
2 A Carl 5 2013-10-01 20:00:00 2013-09-05 20:00:00
3 A Carl 1 2013-10-02 10:00:00 2013-11-02 10:00:00
4 A Joe 8 2013-11-01 20:00:00 2013-10-07 20:00:00
5 B Joe 1 2013-10-02 10:00:00 2013-09-05 10:00:00
In [78]: ps
In [79]: ps['2013-01-02']
• read_excel can now read milliseconds in Excel dates and times with xlrd >= 0.9.3. (GH5945)
• pd.stats.moments.rolling_var now uses Welford’s method for increased numerical stability
(GH6817)
• pd.expanding_apply and pd.rolling_apply now take args and kwargs that are passed on to the func (GH6289)
• DataFrame.rank() now has a percentage rank option (GH5971)
• Series.rank() now has a percentage rank option (GH5971)
• Series.rank() and DataFrame.rank() now accept method='dense' for ranks without gaps
(GH6514)
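A minimal sketch of the new rank options (values chosen arbitrarily):

import pandas as pd

s = pd.Series([1, 2, 2, 3])

s.rank(pct=True)        # percentage rank of each value within the Series
s.rank(method='dense')  # like 'min', but ranks increase by 1 between groups (no gaps)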
• Support passing encoding with xlwt (GH3710)
• Refactor Block classes removing Block.items attributes to avoid duplication in item handling (GH6745,
GH6988).
• Testing statements updated to use specialized asserts (GH6175)
1.26.12 Performance
1.26.13 Experimental
• Bug in DataFrame.replace() where regex metacharacters were being treated as regexes even when regex=False (GH6777).
• Bug in timedelta ops on 32-bit platforms (GH6808)
• Bug in setting a tz-aware index directly via .index (GH6785)
• Bug in expressions.py where numexpr would try to evaluate arithmetic ops (GH6762).
• Bug in Makefile where it didn’t remove Cython generated C files with make clean (GH6768)
• Bug with numpy < 1.7.2 when reading long strings from HDFStore (GH6166)
• Bug in DataFrame._reduce where non bool-like (0/1) integers were being converted into bools. (GH6806)
• Regression from 0.13 with fillna and a Series on datetime-like (GH6344)
• Bug in adding np.timedelta64 to DatetimeIndex with timezone outputs incorrect results (GH6818)
• Bug in DataFrame.replace() where changing a dtype through replacement would only replace the first
occurrence of a value (GH6689)
• Better error message when passing a frequency of ‘MS’ in Period construction (GH5332)
• Bug in Series.__unicode__ when max_rows=None and the Series has more than 1000 rows. (GH6863)
• Bug in groupby.get_group where a datelike wasn’t always accepted (GH5267)
• Bug where GroupBy.get_group on a groupby created with a TimeGrouper raised AttributeError (GH6914)
• Bug in DatetimeIndex.tz_localize and DatetimeIndex.tz_convert converting NaT incor-
rectly (GH5546)
• Bug in arithmetic operations affecting NaT (GH6873)
• Bug in Series.str.extract where the resulting Series from a single group match wasn’t renamed to
the group name
• Bug in DataFrame.to_csv where setting index=False ignored the header kwarg (GH6186)
• Bug in DataFrame.plot and Series.plot, where the legend behaved inconsistently when plotting to the same axes repeatedly (GH6678)
• Internal tests for patching __finalize__ / bug in merge not finalizing (GH6923, GH6927)
• accept TextFileReader in concat, which was affecting a common user idiom (GH6583)
• Bug in C parser with leading whitespace (GH3374)
• Bug in C parser with delim_whitespace=True and \r-delimited lines
• Bug in python parser with explicit multi-index in row following column header (GH6893)
• Bug in Series.rank and DataFrame.rank that caused small floats (<1e-13) to all receive the same rank
(GH6886)
• Bug in DataFrame.apply with functions that used *args or **kwargs and returned an empty result
(GH6952)
• Bug in sum/mean on 32-bit platforms on overflows (GH6915)
• Moved Panel.shift to NDFrame.slice_shift and fixed to respect multiple dtypes. (GH6959)
• Bug where enabling subplots=True in DataFrame.plot with only a single column raised TypeError, and Series.plot raised AttributeError (GH6951)
• Bug where DataFrame.plot drew unnecessary axes when subplots was enabled with kind=scatter (GH6951)
• Bug in query/eval where global constants were not looked up correctly (GH7178)
• Bug in recognizing out-of-bounds positional list indexers with iloc and a multi-axis tuple indexer (GH7189)
• Bug in setitem with a single value, multi-index and integer indices (GH7190, GH7218)
• Bug in expressions evaluation with reversed ops, showing in series-dataframe ops (GH7198, GH7192)
• Bug in multi-axis indexing with > 2 ndim and a multi-index (GH7199)
• Fix a bug where invalid eval/query operations would blow the stack (GH5198)
This is a minor release from 0.13.0 and includes a small number of API changes, several new features, enhancements,
and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this
version.
Highlights include:
• Added infer_datetime_format keyword to read_csv/to_datetime to allow speedups for homo-
geneously formatted datetimes.
• Will intelligently limit display precision for datetime/timedelta formats.
• Enhanced Panel apply() method.
• Suggested tutorials in new Tutorials section.
• Our pandas ecosystem is growing; we now feature related projects in a new Pandas Ecosystem section.
• Much work has been taking place on improving the docs, and a new Contributing section has been added.
• Even though it may only be of interest to devs, we <3 our new CI status page: ScatterCI.
Warning: 0.13.1 fixes a bug that was caused by a combination of having numpy < 1.8, and doing chained assignment on a string-like array. Please review the docs; chained indexing can have unexpected results and should generally be avoided.
This would previously segfault:
In [1]: df = DataFrame(dict(A = np.array(['foo','bar','bah','foo','bar'])))
In [3]: df
Out[3]:
A
0 NaN
1 bar
2 bah
3 foo
4 bar
In [6]: df
Out[6]:
A
0 NaN
1 bar
2 bah
3 foo
4 bar
In [11]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
A float64
B float64
C datetime64[ns]
dtypes: datetime64[ns](1), float64(2)
memory usage: 320.0 bytes
In [13]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
A 7 non-null float64
B 10 non-null float64
C 7 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(2)
memory usage: 320.0 bytes
• Add show_dimensions display option for the new DataFrame repr to control whether the dimensions print.
In [16]: df
Out[16]:
0 1
0 1 2
1 3 4
In [18]: df
Out[18]:
0 1
0 1 2
1 3 4
[2 rows x 2 columns]
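A minimal sketch of the option (frame chosen arbitrarily); setting it to False suppresses the shape footer shown above:

import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]])

pd.set_option('display.show_dimensions', False)   # hide the "[2 rows x 2 columns]" footer
print(df)

pd.set_option('display.show_dimensions', True)    # show the dimensions again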
• The ArrayFormatter for datetime and timedelta64 now intelligently limits precision based on the values in the array (GH3401)
Previously output might look like:
In [22]: df
Out[22]:
age today diff
0 2001-01-01 2013-04-19 4491 days
1 2004-06-01 2013-04-19 3244 days
[2 rows x 3 columns]
• Add -NaN and -nan to the default set of NA values (GH5952). See NA Values.
• Added Series.str.get_dummies vectorized string method (GH6021), to extract dummy/indicator vari-
ables for separated string columns:
[4 rows x 3 columns]
• Added the NDFrame.equals() method to test whether two NDFrames have equal axes, dtypes, and values. Added the array_equivalent function to test whether two ndarrays are equal. NaNs in identical locations are treated as equal. (GH5283) See also the docs for a motivating example.
In [27]: df.equals(df2)
Out[27]: False
In [28]: df.equals(df2.sort_index())
Out[28]: True
• DataFrame.apply will use the reduce argument to determine whether a Series or a DataFrame should be returned when the DataFrame is empty (GH6007).
Previously, calling DataFrame.apply on an empty DataFrame would return either a DataFrame if there were no columns, or the function being applied would be called with an empty Series to guess whether a Series or DataFrame should be returned:
In [34]: empty.apply(applied_func)
Apply function being called with: Series([], Length: 0, dtype: float64)
Out[34]:
a NaN
Now, when apply is called on an empty DataFrame: if the reduce argument is True a Series will be returned, if it is False a DataFrame will be returned, and if it is None (the default) the function being applied will be called with an empty Series to try and guess the return type.
[0 rows x 2 columns]
There are no announced changes in 0.13 or prior that are taking effect as of 0.13.1
1.27.4 Deprecations
1.27.5 Enhancements
• date_format and datetime_format keywords can now be specified when writing to excel files
(GH4133)
• MultiIndex.from_product convenience function for creating a MultiIndex from the cartesian product of a set of iterables (GH6055); a sketch is shown below:
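A minimal sketch (level values and names chosen arbitrarily):

import pandas as pd

mi = pd.MultiIndex.from_product([['bar', 'foo'], ['one', 'two']],
                                names=['first', 'second'])
pd.Series(range(4), index=mi)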
In [37]: panel
Out[37]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [38]: panel['ItemA']
Out[38]:
A B C D
2000-01-03 0.694103 1.893534 -1.735349 -0.850346
2000-01-04 0.678630 0.639633 1.210384 1.176812
2000-01-05 0.239556 -0.962029 0.797435 -0.524336
2000-01-06 0.151227 -2.085266 -0.379811 0.700908
2000-01-07 0.816127 1.930247 0.702562 0.984188
[5 rows x 4 columns]
This is equivalent to
In [41]: panel.sum('major_axis')
Out[41]:
ItemA ItemB ItemC
A 2.579643 3.062757 0.379252
B 1.416120 -1.960855 0.923558
C 0.595222 -1.079772 -3.118269
D 1.487226 -0.734611 -1.979310
[4 rows x 3 columns]
A transformation operation that returns a Panel, but is computing the z-score across the major_axis
In [43]: result
Out[43]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [44]: result['ItemA']
Out[44]:
A B C D
2000-01-03 0.595800 0.907552 -1.556260 -1.244875
2000-01-04 0.544058 0.200868 0.915883 0.953747
2000-01-05 -0.924165 -0.701810 0.569325 -0.891290
2000-01-06 -1.219530 -1.334852 -0.418654 0.437589
2000-01-07 1.003837 0.928242 0.489705 0.744830
[5 rows x 4 columns]
In [47]: result
Out[47]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [48]: result.loc[:,:,'ItemA']
Out[48]:
A B C D
2000-01-03 0.331409 1.071034 -0.914540 -0.510587
[5 rows x 4 columns]
In [50]: result
Out[50]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [51]: result.loc[:,:,'ItemA']
Out[51]:
A B C D
2000-01-03 0.331409 1.071034 -0.914540 -0.510587
2000-01-04 -0.741017 -0.118794 0.383277 0.537212
2000-01-05 0.065042 -0.767353 0.655436 0.069467
2000-01-06 0.027932 -0.569477 0.908202 0.610585
2000-01-07 1.116434 1.133591 0.871287 1.004064
[5 rows x 4 columns]
1.27.6 Performance
1.27.7 Experimental
See V0.13.1 Bug Fixes for an extensive list of bugs that have been fixed in 0.13.1.
See the full release notes or issue tracker on GitHub for a complete list of all API changes, Enhancements and Bug
Fixes.
This is a major release from 0.12.0 and includes a number of API changes, several new features and enhancements
along with a large number of bug fixes.
Highlights include:
• support for a new index type Float64Index, and other Indexing enhancements
• HDFStore has a new string based syntax for query specification
• support for new methods of interpolation
• updated timedelta operations
• a new string manipulation method extract
• Nanosecond support for Offsets
• isin for DataFrames
Several experimental features are added, including:
• new eval/query methods for expression evaluation
• support for msgpack serialization
• an i/o interface to Google’s BigQuery
There are several new or updated docs sections including:
• Comparison with SQL, which should be useful for those familiar with SQL but still learning pandas.
• Comparison with R, idiom translations from R to pandas.
• Enhancing Performance, ways to enhance pandas performance with eval/query.
Warning: In 0.13.0 Series has internally been refactored to no longer sub-class ndarray but instead subclass
NDFrame, similar to the rest of the pandas containers. This should be a transparent change with only very limited
API implications. See Internal Refactoring
• read_excel now supports an integer in its sheetname argument giving the index of the sheet to read in
(GH4301).
• Text parser now treats anything that reads like inf (“inf”, “Inf”, “-Inf”, “iNf”, etc.) as infinity. (GH4220,
GH4219), affecting read_table, read_csv, etc.
• pandas now is Python 2/3 compatible without the need for 2to3 thanks to @jtratner. As a result, pandas now
uses iterators more extensively. This also led to the introduction of substantive parts of the Benjamin Peterson’s
six library into compat. (GH4384, GH4375, GH4372)
• pandas.util.compat and pandas.util.py3compat have been merged into pandas.compat.
pandas.compat now includes many functions allowing 2/3 compatibility. It contains both list and itera-
tor versions of range, filter, map and zip, plus other necessary elements for Python 3 compatibility. lmap,
lzip, lrange and lfilter all produce lists instead of iterators, for compatibility with numpy, subscripting
and pandas constructors.(GH4384, GH4375, GH4372)
• Series.get with negative indexers now returns the same as [] (GH4390)
• Changes to how Index and MultiIndex handle metadata (levels, labels, and names) (GH4039):
• All division with NDFrame objects now uses true division, regardless of the future import. This means that operating on pandas objects will by default use floating point division and return a floating point dtype. You can use // and floordiv to do integer division, as sketched below.
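A minimal sketch of the distinction (example frame chosen arbitrarily):

import pandas as pd

df = pd.DataFrame({'a': [3, 7], 'b': [4, 9]})

# True division: returns floats regardless of a __future__ import
df / 2

# Integer division: use the // operator or the floordiv method
df // 2
df.floordiv(2)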
• The following will now raise a ValueError, since the truth value of a Series or DataFrame is ambiguous:
if df:
    ....
df1 and df2
s1 and s2
Added the .bool() method to NDFrame objects to facilitate evaluating single-element boolean Series:
In [1]: Series([True]).bool()
Out[1]: True
In [2]: Series([False]).bool()
Out[2]: False

In [3]: DataFrame([[True]]).bool()
Out[3]: True

In [4]: DataFrame([[False]]).bool()
Out[4]: False
• All non-Index NDFrames (Series, DataFrame, Panel, Panel4D, SparsePanel, etc.), now support the
entire set of arithmetic operators and arithmetic flex methods (add, sub, mul, etc.). SparsePanel does not
support pow or mod with non-scalars. (GH3765)
• Series and DataFrame now have a mode() method to calculate the statistical mode(s) by axis/Series.
(GH5367)
• Chained assignment will now by default warn if the user is assigning to a copy. This can be changed with the option mode.chained_assignment; allowed values are raise/warn/None. See the docs.
In [6]: pd.set_option('chained_assignment','warn')
In [8]: dfc.loc[0,'A'] = 11
In [9]: dfc
Out[9]:
A B
0 11 1
1 bbb 2
2 ccc 3
[3 rows x 2 columns]
These were announced changes in 0.12 or prior that are taking effect as of 0.13.0
• Remove deprecated Factor (GH3650)
• Remove deprecated set_printoptions/reset_printoptions (GH3046)
• Remove deprecated _verbose_info (GH3215)
• Remove deprecated read_clipboard/to_clipboard/ExcelFile/ExcelWriter from pandas.
io.parsers (GH3717) These are available as functions in the main pandas namespace (e.g. pd.
read_clipboard)
• default for tupleize_cols is now False for both to_csv and read_csv. Fair warning in 0.12
(GH3604)
• default for display.max_seq_len is now 100 rather than None. This activates truncated display (“. . . ”) of long sequences in various places. (GH3391)
1.28.3 Deprecations
Deprecated in 0.13.0
• deprecated iterkv, which will be removed in a future release (this was an alias of iteritems used to bypass
2to3’s changes). (GH4384, GH4375, GH4372)
• deprecated the string method match, whose role is now performed more idiomatically by extract. In a
future release, the default behavior of match will change to become analogous to contains, which returns
a boolean indexer. (Their distinction is strictness: match relies on re.match while contains relies on
re.search.) In this release, the deprecated behavior is the default, but the new behavior is available through
the keyword argument as_indexer=True.
Prior to 0.13, it was impossible to use a label indexer (.loc/.ix) to set a value that was not contained in the index
of a particular axis. (GH2578). See the docs
In the Series case this is effectively an appending operation
In [10]: s = Series([1,2,3])
In [11]: s
Out[11]:
0 1
1 2
2 3
Length: 3, dtype: int64
In [12]: s[5] = 5.
In [13]: s
Out[13]:
0 1.0
1 2.0
2 3.0
5 5.0
Length: 4, dtype: float64
In [15]: dfi
Out[15]:
A B
0 0 1
1 2 3
2 4 5
[3 rows x 2 columns]
In [17]: dfi
Out[17]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
[3 rows x 3 columns]
In [19]: dfi
Out[19]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
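A minimal sketch of the kind of label-based assignments that produce the enlargements shown above (hypothetical frame; not necessarily the exact calls used in the original example):

import numpy as np
import pandas as pd

dfi = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['A', 'B'])

# Assigning to a column label that does not exist adds the column
dfi.loc[:, 'C'] = dfi.loc[:, 'A']

# Assigning to a row label that does not exist appends the row
dfi.loc[3] = 5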
A Panel setting operation on an arbitrary axis aligns the input to the Panel
In [20]: p = pd.Panel(np.arange(16).reshape(2,4,2),
....: items=['Item1','Item2'],
....: major_axis=pd.date_range('2001/1/12',periods=4),
....: minor_axis=['A','B'],dtype='float64')
....:
In [21]: p
Out[21]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2001-01-12 00:00:00 to 2001-01-15 00:00:00
Minor_axis axis: A to B
In [23]: p
Out[23]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2001-01-12 00:00:00 to 2001-01-15 00:00:00
Minor_axis axis: A to C
In [24]: p.loc[:,:,'C']
Out[24]:
Item1 Item2
2001-01-12 30.0 32.0
2001-01-13 30.0 32.0
2001-01-14 30.0 32.0
2001-01-15 30.0 32.0
[4 rows x 2 columns]
• Added a new index type, Float64Index. This will be automatically created when passing floating values in
index creation. This enables a pure label-based slicing paradigm that makes [],ix,loc for scalar indexing
and slicing work exactly the same. See the docs, (GH263)
Construction is by default for floating type values.
In [26]: index
Out[26]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [27]: s = Series(range(5),index=index)
In [28]: s
Scalar selection for [],.ix,.loc will always be label based. An integer will match an equal float index (e.g.
3 is equivalent to 3.0)
In [29]: s[3]
Out[29]: 2
In [30]: s.loc[3]
Out[30]: 2
In [31]: s.iloc[3]
Out[31]: 3
In [32]: s[2:4]
Out[32]:
2.0 1
3.0 2
Length: 2, dtype: int64
In [33]: s.loc[2:4]
Out[33]:
2.0 1
3.0 2
Length: 2, dtype: int64
In [34]: s.iloc[2:4]
Out[34]:
3.0 2
4.5 3
Length: 2, dtype: int64
In [35]: s[2.1:4.6]
Out[35]:
3.0 2
4.5 3
Length: 2, dtype: int64
In [36]: s.loc[2.1:4.6]
Out[36]:
3.0 2
• Indexing on other index types is preserved (with positional fallback for [] and ix), with the exception that floating point slicing on indexes other than Float64Index will now raise a TypeError.
In [1]: Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
Using a scalar float indexer will be deprecated in a future version, but is allowed for now.
In [3]: Series(range(5))[3.0]
Out[3]: 3
• Query Format Changes. A much more string-like query format is now supported. See the docs.
In [37]: path = 'test.h5'
In [39]: dfq.to_hdf(path,'dfq',format='table',data_columns=True)
[6 rows x 2 columns]
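A minimal, self-contained sketch of the string-based query syntax (hypothetical frame; requires PyTables):

import numpy as np
import pandas as pd

dfq = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'),
                   index=pd.date_range('20130101', periods=10))
dfq.to_hdf('test.h5', 'dfq', format='table', data_columns=True)

# Boolean expression over the indexed data columns
pd.read_hdf('test.h5', 'dfq', where='A > 0 & B < 0')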
• the format keyword now replaces the table keyword; allowed values are fixed (f) or table (t). The same defaults as in versions prior to 0.13.0 remain, e.g. put implies fixed format and append implies table format. This default format can be set as an option by setting io.hdf.default_format.
In [43]: df = pd.DataFrame(np.random.randn(10,2))
In [44]: df.to_hdf(path,'df_table',format='table')
In [45]: df.to_hdf(path,'df_table2',append=True)
In [46]: df.to_hdf(path,'df_fixed')
In [49]: df = DataFrame(randn(10,2))
In [52]: store1.append('df',df)
In [53]: store2.append('df2',df)
In [54]: store1
Out[54]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
In [55]: store2
Out[55]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
In [56]: store1.close()
In [57]: store2
Out[57]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
In [58]: store2.close()
In [59]: store2
Out[59]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
• removed the _quiet attribute, replaced by a DuplicateWarning if retrieving duplicate rows from a table (GH4367)
• removed the warn argument from open. Instead a PossibleDataLossError exception will be raised if
you try to use mode='w' with an OPEN file handle (GH4367)
• allow a passed locations array or mask as a where condition (GH4467). See the docs for an example.
• added the keyword dropna=True to append to control whether ALL-nan rows are written to the store (the default is True: ALL-nan rows are NOT written); also settable via the option io.hdf.dropna_table (GH4625)
• pass thru store creation arguments; can be used to support in-memory stores
The HTML and plain text representations of DataFrame now show a truncated view of the table once it exceeds
a certain size, rather than switching to the short info view (GH4886, GH5550). This makes the representation more
consistent as small DataFrames get larger.
To get the info view, call DataFrame.info(). If you prefer the info view as the repr for large DataFrames, you
can set this by running set_option('display.large_repr', 'info').
1.28.8 Enhancements
• df.to_clipboard() learned a new excel keyword that lets you paste df data directly into Excel (enabled by default). (GH5070).
• read_html now raises a URLError instead of catching and raising a ValueError (GH4303, GH4305)
• Added a test for read_clipboard() and to_clipboard() (GH4282)
• Clipboard functionality now works with PySide (GH4282)
• Added a more informative error message when plot arguments contain overlapping color and style arguments
(GH4402)
• to_dict now takes records as a possible outtype. Returns an array of column-keyed dictionaries. (GH4936)
• NaN handling in get_dummies (GH4446) with dummy_na
[3 rows x 2 columns]

# unless requested
In [61]: get_dummies([1, 2, np.nan], dummy_na=True)
Out[61]:
[3 rows x 3 columns]
Using the new top-level to_timedelta, you can convert a scalar or array from the standard timedelta format
(produced by to_csv) into a timedelta type (np.timedelta64 in nanoseconds).
In [63]: to_timedelta('15.5us')
Out[63]: Timedelta('0 days 00:00:00.000015')

In [65]: to_timedelta(np.arange(5),unit='s')
Out[65]: TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04'], dtype='timedelta64[ns]', freq=None)

In [66]: to_timedelta(np.arange(5),unit='d')
Out[66]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None)
In [68]: td = Series(date_range('20130101',periods=4)) - Series(date_range('20121201',periods=4))
In [71]: td
Out[71]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 NaT
Length: 4, dtype: timedelta64[ns]
# to days
In [72]: td / np.timedelta64(1,'D')
Out[72]:
0 31.000000
1 31.000000
2 31.003507
3 NaN
Length: 4, dtype: float64
In [73]: td.astype('timedelta64[D]')
Out[73]:
0 31.0
1 31.0
2 31.0
3 NaN
Length: 4, dtype: float64
# to seconds
In [74]: td / np.timedelta64(1,'s')
Out[74]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
Length: 4, dtype: float64
In [75]: td.astype('timedelta64[s]')
Out[75]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
Length: 4, dtype: float64
In [77]: td * Series([1,2,3,4])
Out[77]:
0 31 days 00:00:00
1 62 days 00:00:00
2 93 days 00:15:09
3 NaT
Length: 4, dtype: timedelta64[ns]
In [80]: td.fillna(0)
Out[80]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 0 days 00:00:00
Length: 4, dtype: timedelta64[ns]
In [81]: td.fillna(timedelta(days=1,seconds=5))
Out[81]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 1 days 00:00:05
Length: 4, dtype: timedelta64[ns]
In [82]: td.mean()
Out[82]: Timedelta('31 days 00:01:41')
In [83]: td.quantile(.1)
Out[83]: Timedelta('31 days 00:00:00')
• plot(kind='kde') now accepts the optional parameters bw_method and ind, passed to
scipy.stats.gaussian_kde() (for scipy >= 0.11.0) to set the bandwidth, and to gkde.evaluate() to specify the in-
dices at which it is evaluated, respectively. See scipy docs. (GH4298)
• DataFrame constructor now accepts a numpy masked record array (GH3478)
• The new vectorized string method extract returns regular expression matches more conveniently.
Elements that do not match return NaN. Extracting a regular expression with more than one group returns a DataFrame with one column per group.
Elements that do not match return a row of NaN. Thus, a Series of messy strings can be converted into a like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples or re.match objects.
Named groups and optional groups can also be used; see the sketch below.
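A minimal sketch of extract (patterns and data chosen arbitrarily; written for a recent pandas where the expand keyword controls the return shape):

import pandas as pd

s = pd.Series(['a1', 'b2', 'c3'])

# One group: returns a Series, NaN where there is no match
s.str.extract(r'[ab](\d)', expand=False)

# Several groups: returns a DataFrame with one column per group
s.str.extract(r'([ab])(\d)', expand=False)

# Named groups become the column names
s.str.extract(r'(?P<letter>[ab])(?P<digit>\d)', expand=False)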
Period conversions in the range of seconds and below were reworked and extended up to nanoseconds. Periods
in the nanosecond range are now available.
In [91]: t + pd.tseries.offsets.Nano(123)
Out[91]: Timestamp('2013-01-01 09:01:02.000000123')
• A new method, isin for DataFrames, which plays nicely with boolean indexing. The argument to isin, what
we’re comparing the DataFrame to, can be a DataFrame, Series, dict, or array of values. See the docs for more.
To get the rows where any of the conditions are met:
In [92]: dfi = DataFrame({'A': [1, 2, 3, 4], 'B': ['a', 'b', 'f', 'n']})
In [93]: dfi
Out[93]:
A B
0 1 a
1 2 b
2 3 f
3 4 n
[4 rows x 2 columns]
In [94]: other = DataFrame({'A': [1, 3, 3, 7], 'B': ['e', 'f', 'f', 'e']})
In [96]: mask
Out[96]:
A B
0 True False
1 False False
2 True True
3 False False
[4 rows x 2 columns]
In [97]: dfi[mask.any(1)]
Out[97]:
A B
0 1 a
2 3 f
[2 rows x 2 columns]
• tz_localize can infer a fall daylight savings transition based on the structure of the unlocalized data
(GH4230), see the docs
• DatetimeIndex is now in the API documentation, see the docs
• json_normalize() is a new method to allow you to create a flat table from semi-structured JSON data. See
the docs (GH1067)
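A minimal sketch (records invented here purely for illustration):

from pandas.io.json import json_normalize

data = [{'state': 'Florida', 'info': {'governor': 'Rick Scott'}},
        {'state': 'Ohio', 'info': {'governor': 'John Kasich'}}]

# Nested fields are flattened into dotted column names: state, info.governor
json_normalize(data)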
• Added PySide support for the qtpandas DataFrameModel and DataFrameWidget.
• Python csv parser now supports usecols (GH4335)
• Frequencies gained several new offsets:
– LastWeekOfMonth (GH4637)
– FY5253, and FY5253Quarter (GH4511)
• DataFrame has a new interpolate method, similar to Series (GH4434, GH1892)
In [99]: df.interpolate()
Out[99]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
[6 rows x 2 columns]
Additionally, the method argument to interpolate has been expanded to include 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'krogh', 'piecewise_polynomial', 'pchip', 'polynomial', and 'spline'. The new methods require scipy. Consult the Scipy reference guide and documentation for more information about when the various methods are appropriate. See the docs.
Interpolate now also accepts a limit keyword argument. This works similar to fillna’s limit:
In [101]: ser.interpolate(limit=2)
Out[101]:
0 1.0
1 3.0
2 5.0
3 7.0
4 NaN
5 11.0
Length: 6, dtype: float64
In [102]: np.random.seed(123)
In [105]: df
Out[105]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -1.085631 0
1 b e 1.2 1.3 0.997345 1
2 c f 0.7 0.1 0.282978 2
[3 rows x 6 columns]
X A B
id year
0 1970 -1.085631 a 2.5
1 1970 0.997345 b 1.2
2 1970 0.282978 c 0.7
0 1980 -1.085631 d 3.2
1 1980 0.997345 e 1.3
2 1980 0.282978 f 0.1
[6 rows x 3 columns]
• to_csv now takes a date_format keyword argument that specifies how output datetime objects should
be formatted. Datetimes encountered in the index, columns, and values will all have this formatting applied.
(GH4313)
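A minimal sketch (output path and format chosen arbitrarily):

import pandas as pd

df = pd.DataFrame({'when': pd.date_range('2013-01-01', periods=3)})

# Every datetime written to the file uses the given strftime format
df.to_csv('dates.csv', date_format='%Y%m%d')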
• DataFrame.plot will scatter plot x versus y by passing kind='scatter' (GH2215)
• Added support for Google Analytics v3 API segment IDs that also supports v2 IDs. (GH5271)
1.28.9 Experimental
• The new eval() function implements expression evaluation using numexpr behind the scenes. This results
in large speedups for complicated expressions involving large DataFrames/Series. For example,
• query() method has been added that allows you to select elements of a DataFrame using a natural query
syntax nearly identical to Python syntax. For example,
In [113]: n = 20
[2 rows x 3 columns]
selects all the rows of df where a < b < c evaluates to True. For more details see the docs. A consolidated sketch of both methods is shown below.
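A minimal sketch (random frame; numexpr is used behind the scenes when available):

import numpy as np
import pandas as pd

n = 20
df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))

# Expression evaluation
pd.eval('df.a + df.b * df.c')

# Natural query syntax: rows where a < b < c
df.query('a < b < c')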
• pd.read_msgpack() and pd.to_msgpack() are now a supported method of serialization of arbitrary pandas (and Python) objects in a lightweight portable binary format. See the docs
Warning: Since this is an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future
release.
In [116]: df = DataFrame(np.random.rand(5,2),columns=list('AB'))
In [118]: pd.read_msgpack('foo.msg')
Out[118]:
A B
0 0.251082 0.017357
1 0.347915 0.929879
2 0.546233 0.203368
3 0.064942 0.031722
4 0.355309 0.524575
[5 rows x 2 columns]
In [119]: s = Series(np.random.rand(5),index=date_range('20130101',periods=5))
In [121]: pd.read_msgpack('foo.msg')
Out[121]:
[ A B
0 0.251082 0.017357
1 0.347915 0.929879
2 0.546233 0.203368
3 0.064942 0.031722
4 0.355309 0.524575
[5 rows x 2 columns]
2013-01-01 0.022321
2013-01-02 0.227025
2013-01-03 0.383282
2013-01-04 0.193225
2013-01-05 0.110977
Freq: D, Length: 5, dtype: float64
• pandas.io.gbq provides a simple way to extract from, and load data into, Google’s BigQuery Data Sets by
way of pandas DataFrames. BigQuery is a high performance SQL-like database service, useful for performing
ad-hoc queries against extremely large datasets. See the docs
> df3
Min Tem Mean Temp Max Temp
MONTH
1 -53.336667 39.827892 89.770968
2 -49.837500 43.685219 93.437932
3 -77.926087 48.708355 96.099998
4 -82.892858 55.070087 97.317240
5 -92.378261 61.428117 102.042856
6 -77.703334 65.858888 102.900000
7 -87.821428 68.169663 106.510714
8 -89.431999 68.614215 105.500000
9 -86.611112 63.436935 107.142856
10 -78.209677 56.880838 92.103333
11 -50.125000 48.861228 94.996428
12 -50.332258 42.286879 94.396774
Warning: To use this module, you will need a BigQuery account. See <https://cloud.google.com/products/
big-query> for details.
As of 10/10/13, there is a bug in Google’s API preventing result sets from being larger than 100,000 rows.
A patch is scheduled for the week of 10/14/13.
In 0.13.0 there is a major refactor primarily to subclass Series from NDFrame, which is the base class currently
for DataFrame and Panel, to unify methods and behaviors. Series formerly subclassed directly from ndarray.
(GH4080, GH3862, GH816)
Numpy Usage
In [124]: np.ones_like(s)
Out[124]: array([1, 1, 1, 1])
In [125]: np.diff(s)
Out[125]: array([1, 1, 1])

In [126]: np.where(s>1,s,np.nan)
Out[126]: array([ nan, 2., 3., 4.])
Pandonic Usage
In [127]: Series(1,index=s.index)
Out[127]:
0 1
1 1
2 1
3 1
Length: 4, dtype: int64
In [128]: s.diff()
Out[128]:
0 NaN
1 1.0
2 1.0
3 1.0
Length: 4, dtype: float64
In [129]: s.where(s>1)
Out[129]:
0 NaN
1 2.0
2 3.0
3 4.0
Length: 4, dtype: float64
• Passing a Series directly to a cython function expecting an ndarray type will no longer work directly; you must pass Series.values. See Enhancing Performance
• Series(0.5) would previously return the scalar 0.5, instead this will return a 1-element Series
• This change breaks rpy2<=2.3.8. An issue has been opened against rpy2 and a workaround is detailed in GH5698. Thanks @JanSchulz.
• Pickle compatibility is preserved for pickles created prior to 0.13. These must be unpickled with pd.
read_pickle, see Pickling.
• Refactor of series.py/frame.py/panel.py to move common code to generic.py
– added _setup_axes to created generic NDFrame structures
– moved methods
* from_axes,_wrap_array,axes,ix,loc,iloc,shape,empty,swapaxes,
transpose,pop
* __iter__,keys,__contains__,__len__,__neg__,__invert__
* convert_objects,as_blocks,as_matrix,values
* __getstate__,__setstate__ (compat remains in frame/panel)
* __getattr__,__setattr__
* _indexed_same,reindex_like,align,where,mask
* fillna,replace (Series replace is now consistent with DataFrame)
* filter (also added axis argument to selectively filter on a different axis)
* reindex,reindex_axis,take
* truncate (moved to become part of NDFrame)
• These are API changes which make Panel more consistent with DataFrame
– swapaxes on a Panel with the same axes specified now return a copy
– support attribute access for setting
– filter supports the same API as the original DataFrame filter
• Reindex called with no arguments will now return a copy of the input object
• TimeSeries is now an alias for Series. The property is_time_series can be used to distinguish (if desired)
• Refactor of Sparse objects to use BlockManager
– Created a new block type in internals, SparseBlock, which can hold multi-dtypes and is non-consolidatable. SparseSeries and SparseDataFrame now inherit more methods from their hierarchy (Series/DataFrame), and no longer inherit from SparseArray (which instead is the object of the SparseBlock)
– Sparse suite now supports integration with non-sparse data. Non-float sparse data is supportable (partially
implemented)
– Operations on sparse structures within DataFrames should preserve sparseness, merging type operations
will convert to dense (and back to sparse), so might be somewhat inefficient
– enable setitem on SparseSeries for boolean/integer/slices
– SparsePanels implementation is unchanged (e.g. not using BlockManager, needs work)
• added ftypes method to Series/DataFrame, similar to dtypes, but indicates if the underlying is sparse/dense
(as well as the dtype)
• All NDFrame objects can now use __finalize__() to specify various values to propagate to new objects
from an existing one (e.g. name in Series will follow more automatically now)
• Internal type checking is now done via a suite of generated classes, allowing isinstance(value, klass)
without having to directly import the klass, courtesy of @jtratner
• Bug in Series update where the parent frame is not updating its cache based on changes (GH4080) or types
(GH3217), fillna (GH3386)
• Indexing with dtype conversions fixed (GH4463, GH4204)
• Refactor Series.reindex to core/generic.py (GH4604, GH4618), allow method= in reindexing on a Se-
ries to work
• Series.copy no longer accepts the order parameter and is now consistent with NDFrame copy
• Refactor rename methods to core/generic.py; fixes Series.rename for (GH4605), and adds rename with
the same signature for Panel
• Refactor clip methods to core/generic.py (GH4798)
• Refactor of _get_numeric_data/_get_bool_data to core/generic.py, allowing Series/Panel function-
ality
• Series (for index) / Panel (for items) now allow attribute access to its elements (GH1903)
In [130]: s = Series([1,2,3],index=list('abc'))
In [131]: s.b
Out[131]: 2
In [132]: s.a = 5
In [133]: s
Out[133]:
a 5
b 2
c 3
Length: 3, dtype: int64
See V0.13.0 Bug Fixes for an extensive list of bugs that have been fixed in 0.13.0.
See the full release notes or issue tracker on GitHub for a complete list of all API changes, Enhancements and Bug
Fixes.
This is a major release from 0.11.0 and includes several new features and enhancements along with a large number of
bug fixes.
Highlights include a consistent I/O API naming scheme, routines to read html, write multi-indexes to csv files, read
& write STATA data files, read & write JSON format files, Python 3 support for HDFStore, filtering of groupby
expressions via filter, and a revamped replace routine that accepts regular expressions.
• The I/O API is now much more consistent with a set of top level reader functions accessed like pd.
read_csv() that generally return a pandas object.
– read_csv
– read_excel
– read_hdf
– read_sql
– read_json
– read_html
– read_stata
– read_clipboard
The corresponding writer functions are object methods that are accessed like df.to_csv()
– to_csv
– to_excel
– to_hdf
– to_sql
– to_json
– to_html
– to_stata
– to_clipboard
• Fix modulo and integer division on Series/DataFrames to act similarly to float dtypes and return np.nan or np.inf as appropriate (GH3590). This corrects a numpy bug that treats integer and float dtypes differently.
In [1]: p = DataFrame({ 'first' : [4,5,8], 'second' : [0,0,3] })
In [2]: p % 0
Out[2]:
first second
0 NaN NaN
1 NaN NaN
2 NaN NaN
[3 rows x 2 columns]
In [3]: p % p
Out[3]:
first second
0 0.0 NaN
1 0.0 NaN
2 0.0 0.0
[3 rows x 2 columns]
In [4]: p / p
Out[4]:
first second
0 1.0 NaN
1 1.0 NaN
2 1.0 1.0
[3 rows x 2 columns]
In [5]: p / 0
Out[5]:
first second
0 inf NaN
1 inf NaN
2 inf inf
[3 rows x 2 columns]
• Add squeeze keyword to groupby to allow reduction from DataFrame -> Series if groups are unique. This addresses a regression from 0.10.1; we are reverting to the prior behavior, meaning groupby will return the same shaped objects whether the groups are unique or not. Reverts (GH2893) with (GH3596).
In [6]: df2 = DataFrame([{"val1": 1, "val2" : 20}, {"val1":1, "val2": 19},
...: {"val1":1, "val2": 27}, {"val1":1, "val2": 12}])
...:
val2 0 1 2 3
val1
1 0.5 -0.5 7.5 -7.5
[1 rows x 4 columns]
• Raise on iloc when boolean indexing with a label-based indexer mask, e.g. a boolean Series: even with integer labels, this will raise. Since iloc is purely position based, the labels on the Series are not alignable (GH3631). This case is rarely used, and there are plenty of alternatives. This preserves the iloc API as purely position based.
In [10]: df = DataFrame(lrange(5), list('ABCDE'), columns=['a'])
In [12]: mask
Out[12]:
A True
a
A 0
C 2
E 4
[3 rows x 1 columns]
a
A 0
C 2
E 4
[3 rows x 1 columns]
With
import pandas as pd
pd.read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
• DataFrame.to_html and DataFrame.to_latex now accept a path for their first argument (GH3702)
• Do not allow astypes on datetime64[ns] except to object, and timedelta64[ns] to object/int
(GH3425)
• The behavior of datetime64 dtypes has changed with respect to certain so-called reduction operations
(GH3726). The following operations now raise a TypeError when performed on a Series and return
an empty Series when performed on a DataFrame similar to performing these operations on, for example,
a DataFrame of slice objects:
– sum, prod, mean, std, var, skew, kurt, corr, and cov
• read_html now defaults to None when reading, and falls back on bs4 + html5lib when lxml fails to parse. A list of parsers to try until success is also valid.
• The internal pandas class hierarchy has changed (slightly). The previous PandasObject now is called
PandasContainer and a new PandasObject has become the baseclass for PandasContainer as well
as Index, Categorical, GroupBy, SparseList, and SparseArray (+ their base classes). Currently,
PandasObject provides string methods (from StringMixin). (GH4090, GH4092)
• New StringMixin that, given a __unicode__ method, gets python 2 and python 3 compatible string
methods (__str__, __bytes__, and __repr__). Plus string safety throughout. Now employed in many
places throughout the pandas library. (GH4090, GH4092)
• pd.read_html() can now parse HTML strings, files or urls and return DataFrames, courtesy of @cpcloud. (GH3477, GH3605, GH3606, GH3616). It works with a single parser backend: BeautifulSoup4 + html5lib. See the docs
You can use pd.read_html() to read the output from DataFrame.to_html() like so
In [16]: print(df)
a b
0 0 a
1 1 b
2 2 c
[3 rows x 2 columns]
[3 rows x 2 columns]
Note that alist here is a Python list so pd.read_html() and DataFrame.to_html() are not in-
verses.
– pd.read_html() no longer performs hard conversion of date strings (GH3656).
Warning: You may have to install an older version of BeautifulSoup4, See the installation docs
• Added module for reading and writing Stata files: pandas.io.stata (GH1512) accessible via
read_stata top-level function for reading, and to_stata DataFrame method for writing, See the docs
• Added module for reading and writing json format files: pandas.io.json accessible via read_json top-
level function for reading, and to_json DataFrame method for writing, See the docs various issues (GH1226,
GH3804, GH3876, GH3867, GH1305)
• MultiIndex column support for reading and writing csv format files
– The header option in read_csv now accepts a list of the rows from which to read the index.
– The option, tupleize_cols can now be specified in both to_csv and read_csv, to provide com-
patibility for the pre 0.12 behavior of writing and reading MultIndex columns via a list of tuples. The
default in 0.12 is to write lists of tuples and not interpret list of tuples as a MultiIndex column.
Note: The default behavior in 0.12 remains unchanged from prior versions, but starting with 0.13, the
default to write and read MultiIndex columns will be in the new format. (GH3571, GH1651, GH3141)
– If an index_col is not specified (e.g. you don’t have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.
In [20]: from pandas.util.testing import makeCustomDataframe as mkdf
In [22]: df.to_csv('mi.csv')
In [23]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
[5 rows x 3 columns]
In [26]: DataFrame(randn(10,2)).to_hdf(path,'df',table=True)
• read_csv will now throw a more informative error message when a file contains no columns, e.g., all newline
characters
• DataFrame.replace() now allows regular expressions on contained Series with object dtype. See the
examples section in the regular docs Replacing via String Expression
For example you can do
In [25]: df = DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})
[4 rows x 2 columns]
to replace all occurrences of the string '.' (with zero or more instances of surrounding whitespace) with NaN; a sketch of the call is shown below.
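A minimal sketch of such a regex-based replacement (using the frame constructed above):

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})

# '.' with optional surrounding whitespace becomes NaN
df.replace(r'\s*\.\s*', np.nan, regex=True)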
Regular string replacement still works as expected. For example, you can do
[4 rows x 2 columns]
In [28]: pd.get_option('a.b')
Out[28]: 2
In [29]: pd.get_option('b.c')
Out[29]: 3
In [31]: pd.get_option('a.b')
Out[31]: 1
In [32]: pd.get_option('b.c')
Out[32]: 4
• The filter method for group objects returns a subset of the original object. Suppose we want to take only elements that belong to groups with a group sum greater than 2.
The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple of members, as sketched below.
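A minimal sketch (frame chosen to mirror the dropna=False example that follows):

import numpy as np
import pandas as pd

dff = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})

# Keep only rows whose group has more than two members
dff.groupby('B').filter(lambda x: len(x) > 2)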
Alternatively, instead of dropping the offending groups, we can return a like-indexed object where the groups that do not pass the filter are filled with NaNs.
In [37]: dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
Out[37]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
[8 rows x 2 columns]
• Series and DataFrame hist methods now take a figsize argument (GH3834)
• DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877)
• Timestamp.min and Timestamp.max now represent valid Timestamp instances instead of the default date-
time.min and datetime.max (respectively), thanks @SleepingPills
• read_html now raises when no tables are found and BeautifulSoup==4.2.0 is detected (GH4214)
• Added experimental CustomBusinessDay class to support DateOffsets with custom holiday calendars
and custom weekmasks. (GH2301)
Note: This uses the numpy.busdaycalendar API introduced in Numpy 1.7 and therefore requires Numpy
1.7.0 or newer.
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, Length: 5, dtype: object
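The construction behind such a calendar might look like the following sketch (a Friday/Saturday weekend and a single holiday are assumed; the listing above shows the resulting dates together with their weekday names):

import pandas as pd
from pandas.tseries.offsets import CustomBusinessDay

weekmask = 'Sun Mon Tue Wed Thu'   # hypothetical work week
holidays = ['2013-05-01']          # hypothetical holiday
bday = CustomBusinessDay(holidays=holidays, weekmask=weekmask)

pd.date_range('2013-04-30', periods=5, freq=bday)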
• Plotting functions now raise a TypeError before trying to plot anything if the associated objects have a dtype of object (GH1818, GH3572, GH3911, GH3912), but they will try to convert object arrays to numeric arrays if possible so that you can still plot, for example, an object array with floats. This happens before any drawing takes place which eliminates any spurious plots from showing up.
• fillna methods now raise a TypeError if the value parameter is a list or tuple.
• Series.str now supports iteration (GH3638). You can iterate over the individual elements of each string in the Series. Each iteration yields a Series with either a single character at each index of the original Series or NaN. For example,
In [48]: ds = Series(strs)
In [50]: s
Out[50]:
0 NaN
1 NaN
2 NaN
3 w
Length: 4, dtype: object
The last element yielded by the iterator will be a Series containing the last element of the longest string in
the Series with all other elements being NaN. Here since 'slow' is the longest string and there are no other
strings with the same length 'w' is the only non-null string in the yielded Series.
• HDFStore
– will retain index attributes (freq,tz,name) on recreation (GH3499)
– will warn with an AttributeConflictWarning if you are attempting to append an index with a different frequency than the existing, or attempting to append an index with a different name than the existing
– support datelike columns with a timezone as data_columns (GH2852)
• Non-unique index support clarified (GH3468).
– Fix: assigning a new index to a DataFrame with a duplicate index would fail (GH3468)
– Fix construction of a DataFrame with a duplicate index
– ref_locs support to allow duplicative indices across dtypes, allows iget support to always find the index
(even across dtypes) (GH2194)
– applymap on a DataFrame with a non-unique index now works (removed warning) (GH2786), and fix
(GH3230)
– Fix to_csv to handle non-unique columns (GH3495)
– Duplicate indexes with getitem will return items in the correct order (GH3455, GH3457) and handle miss-
ing elements like unique indices (GH3561)
– Duplicate indexes with an empty DataFrame.from_records will return a correct frame (GH3562)
– Concat to produce a non-unique columns when duplicates are across dtypes is fixed (GH3602)
– Allow insert/delete to non-unique columns (GH3679)
– Non-unique indexing with a slice via loc and friends fixed (GH3659)
This is a major release from 0.10.1 and includes many new features and enhancements along with a large number of
bug fixes. The methods of Selecting Data have had quite a number of additions, and Dtype support is now full-fledged.
There are also a number of important API changes that long-time pandas users should pay close attention to.
There is a new section in the documentation, 10 Minutes to Pandas, primarily geared to new users.
There is a new section in the documentation, Cookbook, a collection of useful recipes in pandas (and that we want
contributions!).
There are several libraries that are now Recommended Dependencies
Starting in 0.11.0, object selection has had a number of user-requested additions in order to support more explicit
location based indexing. Pandas now supports three types of multi-axis indexing.
• .loc is strictly label based, will raise KeyError when the items are not found, allowed inputs are:
– A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index. This use is not an integer
position along the index)
– A list or array of labels ['a', 'b', 'c']
– A slice object with labels 'a':'f', (note that contrary to usual python slices, both the start and the stop
are included!)
– A boolean array
See more at Selection by Label
• .iloc is strictly integer position based (from 0 to length-1 of the axis), will raise IndexError when the requested indices are out of bounds. Allowed inputs are:
– An integer e.g. 5
– A list or array of integers [4, 3, 0]
– A slice object with ints 1:7
– A boolean array
See more at Selection by Position
• .ix supports mixed integer and label based access. It is primarily label based, but will fall back to integer positional access. .ix is the most general and will support any of the inputs to .loc and .iloc, as well as support for floating point label schemes. .ix is especially useful when dealing with mixed positional and label based hierarchical indexes.
As using integer slices with .ix has different behavior depending on whether the slice is interpreted as position based or label based, it’s usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing and Advanced Hierarchical.
1.30.3 Dtypes
Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the dtype keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will NOT be combined. The following example will give you a taste.
In [1]: df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
In [2]: df1
Out[2]:
A
0 1.392665
1 -0.123497
2 -0.402761
3 -0.246604
4 -0.288433
5 -0.763434
6 2.069526
7 -1.203569
[8 rows x 1 columns]
In [3]: df1.dtypes
Out[3]:
A float32
Length: 1, dtype: object
In [5]: df2
Out[5]:
A B C
0 0.591797 -0.038605 0
1 0.841309 -0.460478 1
2 -0.500977 -0.310458 0
3 -0.816406 0.866493 254
4 -0.207031 0.245972 0
5 -0.664062 0.319442 1
6 0.580566 1.378512 1
7 -0.965820 0.292502 255
[8 rows x 3 columns]
In [6]: df2.dtypes
Out[6]:
A float16
B float64
C uint8
Length: 3, dtype: object
[8 rows x 3 columns]
In [9]: df3.dtypes
Out[9]:
A float32
B float64
C float64
Length: 3, dtype: object
This is lower-common-denominator upcasting, meaning you get the dtype which can accommodate all of the types
In [10]: df3.values.dtype
Out[10]: dtype('float64')
Conversion
In [11]: df3.astype('float32').dtypes
Out[11]:
A float32
B float32
C float32
Length: 3, dtype: object
Mixed Conversion
In [12]: df3['D'] = '1.'
In [14]: df3.convert_objects(convert_numeric=True).dtypes
Out[14]:
A float32
B float64
C float64
D float64
E int64
Length: 5, dtype: object
In [17]: df3.dtypes
Out[17]:
A float32
B float64
C float64
D float16
E int32
Length: 5, dtype: object
In [20]: s.convert_objects(convert_dates='coerce')
Out[20]:
0 2001-01-01
1 NaT
2 NaT
3 NaT
4 2001-01-04
5 2001-01-05
Length: 6, dtype: datetime64[ns]
Platform Gotchas
Starting in 0.11.0, construction of DataFrame/Series will use default dtypes of int64 and float64, regardless
of platform. This is not an apparent change from earlier versions of pandas. If you specify dtypes, they WILL be
respected, however (GH2837)
The following will all result in int64 dtypes
In [21]: DataFrame([1,2],columns=['a']).dtypes
Out[21]:
a int64
Length: 1, dtype: object
a int64
Length: 1, dtype: object
In [26]: dfi
Out[26]:
A B C D E
0 1 0 0 1 1
1 0 0 1 1 1
2 0 0 0 1 1
3 -1 0 254 1 1
4 0 0 0 1 1
5 -1 0 1 1 1
6 2 1 1 1 1
7 -2 0 255 1 1
[8 rows x 5 columns]
In [27]: dfi.dtypes
Out[27]:
A int32
B int32
C int32
D int64
E int32
Length: 5, dtype: object
In [29]: casted
Out[29]:
A B C D E
0 1.0 NaN NaN 1 1
1 NaN NaN 1.0 1 1
2 NaN NaN NaN 1 1
3 NaN NaN 254.0 1 1
4 NaN NaN NaN 1 1
5 NaN NaN 1.0 1 1
6 2.0 1.0 1.0 1 1
7 NaN NaN 255.0 1 1
[8 rows x 5 columns]
In [30]: casted.dtypes
Out[30]:
A float64
B float64
C float64
D int64
E int32
In [33]: df4.dtypes
Out[33]:
A float32
B float64
C float64
D float16
E int32
Length: 5, dtype: object
In [35]: casted
Out[35]:
A B C D E
0 1.984462 NaN NaN 1.0 1
1 0.717812 NaN 1.0 1.0 1
2 NaN NaN NaN 1.0 1
3 NaN 0.866493 254.0 1.0 1
4 NaN 0.245972 NaN 1.0 1
5 NaN 0.319442 1.0 1.0 1
6 2.650092 1.378512 1.0 1.0 1
7 NaN 0.292502 255.0 1.0 1
[8 rows x 5 columns]
In [36]: casted.dtypes
Out[36]:
A float32
B float64
C float64
D float16
E int32
Length: 5, dtype: object
Datetime64[ns] columns in a DataFrame (or a Series) allow the use of np.nan to indicate a nan value, in ad-
dition to the traditional NaT, or not-a-time. This allows convenient nan setting in a generic way. Furthermore
datetime64[ns] columns are created by default, when passed datetimelike objects (this change was introduced in
0.10.1) (GH2809, GH2810)
In [37]: df = DataFrame(randn(6,2),date_range('20010102',periods=6),columns=['A','B'])
In [39]: df
[6 rows x 3 columns]
float64 2
datetime64[ns] 1
Length: 2, dtype: int64
In [42]: df
Out[42]:
A B timestamp
2001-01-02 1.023958 0.660103 2001-01-03
2001-01-03 1.236475 -2.170629 2001-01-03
2001-01-04 NaN -1.685677 NaT
2001-01-05 NaN -0.115070 NaT
2001-01-06 -0.632102 -0.585977 2001-01-03
2001-01-07 -1.444787 -0.201135 2001-01-03
[6 rows x 3 columns]
In [45]: s.dtype
Out[45]: dtype('<M8[ns]')
In [47]: s
Out[47]:
0 2001-01-02
1 NaT
2 2001-01-02
Length: 3, dtype: datetime64[ns]
In [48]: s.dtype
Out[48]: dtype('<M8[ns]')
In [49]: s = s.astype('O')
In [50]: s
Out[50]:
0 2001-01-02 00:00:00
1 NaT
2 2001-01-02 00:00:00
Length: 3, dtype: object
In [51]: s.dtype
Out[51]: dtype('O')
1.30.8 Enhancements
In [53]: df.to_hdf('store.h5','table',append=True)
[2 rows x 2 columns]
– provide dotted attribute access to get from stores, e.g. store.df == store['df']
– new keywords iterator=boolean, and chunksize=number_in_a_chunk are provided to sup-
port iteration on select and select_as_multiple (GH3076)
• You can now select timestamps from an unordered timeseries similarly to an ordered timeseries (GH2437)
• You can now select with a string from a DataFrame with a datelike index, in a similar way to a Series (GH3070)
In [56]: ts = Series(np.random.rand(len(idx)),index=idx)
In [57]: ts['2001']
Out[57]:
2001-10-31 0.663256
2001-11-30 0.079126
2001-12-31 0.587699
Freq: M, Length: 3, dtype: float64
In [59]: df['2001']
Out[59]:
A
2001-10-31 0.663256
2001-11-30 0.079126
2001-12-31 0.587699
[3 rows x 1 columns]
In [60]: p = Panel(randn(3,4,4),items=['ItemA','ItemB','ItemC'],
....: major_axis=date_range('20010102',periods=4),
....: minor_axis=['A','B','C','D'])
....:
In [61]: p
Out[61]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 4 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2001-01-02 00:00:00 to 2001-01-05 00:00:00
Minor_axis axis: A to D
In [62]: p.reindex(items=['ItemA']).squeeze()
Out[62]:
A B C D
2001-01-02 -1.203403 0.425882 -0.436045 -0.982462
2001-01-03 0.348090 -0.969649 0.121731 0.202798
2001-01-04 1.215695 -0.218549 -0.631381 -0.337116
2001-01-05 0.404238 0.907213 -0.865657 0.483186
[4 rows x 4 columns]
In [63]: p.reindex(items=['ItemA'],minor=['B']).squeeze()
Out[63]:
2001-01-02 0.425882
2001-01-03 -0.969649
2001-01-04 -0.218549
2001-01-05 0.907213
Freq: D, Name: B, Length: 4, dtype: float64
• In pd.io.data.Options,
– Fix bug when trying to fetch data for the current month when already past expiry.
– Now using lxml to scrape html instead of BeautifulSoup (lxml was faster).
– New instance variables for calls and puts are automatically created when a method that creates them is
called. This works for the current month, where the instance variables are simply calls and puts. It also
works for future expiry months, saving the instance variables as callsMMYY or putsMMYY, where MM and YY
are, respectively, the month and year of the option’s expiry.
– Options.get_near_stock_price now allows the user to specify the month for which to get rele-
vant options data.
– Options.get_forward_data now has optional kwargs near and above_below. This allows the
user to specify whether to return only forward-looking data for options near the current stock
price. This just obtains the data from Options.get_near_stock_price instead of Options.get_xxx_data()
(GH2758).
• Cursor coordinate information is now displayed in time-series plots.
• added option display.max_seq_items to control the number of elements printed per sequence when pretty-printing it.
(GH2979)
• added option display.chop_threshold to control display of small numerical values. (GH2739)
• added option display.max_info_rows to prevent verbose_info from being calculated for frames above 1M rows
(configurable). (GH2807, GH2918)
• value_counts() now accepts a “normalize” argument, for normalized histograms. (GH2710).
• DataFrame.from_records now accepts not only dicts but any instance of the collections.Mapping ABC.
• added option display.mpl_style providing a sleeker visual style for plots. Based on https://gist.github.com/
huyng/816622 (GH3075).
• Treat boolean values as integers (values 1 and 0) for numeric operations. (GH2641)
• to_html() now accepts an optional “escape” argument to control reserved HTML character escaping (enabled
by default) and escapes &, in addition to < and >. (GH2919)
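A short sketch of a few of the display and output options above (the option values chosen are illustrative):

import pandas as pd

pd.set_option('display.max_seq_items', 100)     # cap the number of elements printed per sequence
pd.set_option('display.chop_threshold', 1e-6)   # display very small floats as zero

df = pd.DataFrame({'a': ['x & y', '<b>bold</b>']})
escaped_html = df.to_html()               # &, <, > escaped (the default)
raw_html = df.to_html(escape=False)       # reserved HTML characters left as-is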
See the full release notes or issue tracker on GitHub for a complete list.
This is a minor release from 0.10.0 and includes new features, enhancements, and bug fixes. In particular, there is
substantial new HDFStore functionality contributed by Jeff Reback.
An undesired API breakage with functions taking the inplace option has been reverted and deprecation warnings
added.
• Functions taking an inplace option return the calling object as before. A deprecation message has been added
• Groupby aggregations Max/Min no longer exclude non-numeric data (GH2700)
• Resampling an empty DataFrame now returns an empty DataFrame instead of raising an exception (GH2640)
• The file reader will now raise an exception when NA values are found in an explicitly specified integer column
instead of converting the column to float (GH2631)
• DatetimeIndex.unique now returns a DatetimeIndex with the same name and timezone instead of an array
1.31.3 HDFStore
You may need to upgrade your existing data files. Please visit the compatibility section in the main docs.
You can designate (and index) certain columns of a table that you want to be able to perform queries on, by passing a list
to data_columns
In [1]: store = HDFStore('store.h5')
In [7]: df
Out[7]:
A B C string string2
2000-01-01 1.885136 -0.183873 2.550850 foo cool
2000-01-02 0.180759 -1.117089 0.061462 foo cool
2000-01-03 -0.294467 -0.591411 -0.876691 foo cool
2000-01-04 3.127110 1.451130 0.045152 foo cool
2000-01-05 -0.242846 1.195819 1.533294 NaN cool
2000-01-06 0.820521 -0.281201 1.651561 NaN cool
2000-01-07 -0.034086 0.252394 -0.498772 foo cool
2000-01-08 -2.290958 -1.601262 -0.256718 bar cool
[8 rows x 5 columns]
# on-disk operations
In [8]: store.append('df', df, data_columns = ['B','C','string','string2'])
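A minimal sketch of querying on data columns (assumes PyTables is installed; the file name and data are illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': np.random.randn(8),
                   'B': np.random.randn(8),
                   'string': ['foo'] * 6 + ['bar'] * 2},
                  index=pd.date_range('2000-01-01', periods=8))

store = pd.HDFStore('query_example.h5')
store.append('df', df, data_columns=['B', 'string'])

# only rows satisfying the on-disk query are read back
subset = store.select('df', where=["B>0", "string='foo'"])
store.close()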
In [16]: df_mixed1
Out[16]:
A B C string string2 datetime64
2000-01-01 1.885136 -0.183873 2.550850 foo cool 2001-01-02
2000-01-02 0.180759 -1.117089 0.061462 foo cool 2001-01-02
2000-01-03 -0.294467 -0.591411 -0.876691 foo cool 2001-01-02
2000-01-04 NaN NaN 0.045152 foo cool 2001-01-02
2000-01-05 -0.242846 1.195819 1.533294 NaN cool 2001-01-02
2000-01-06 0.820521 -0.281201 1.651561 NaN cool 2001-01-02
2000-01-07 -0.034086 0.252394 -0.498772 foo cool 2001-01-02
2000-01-08 -2.290958 -1.601262 -0.256718 bar cool 2001-01-02
[8 rows x 6 columns]
In [17]: df_mixed1.get_dtype_counts()
Out[17]:
float64 3
object 2
datetime64[ns] 1
Length: 3, dtype: int64
You can pass the columns keyword to select to filter the list of returned columns; this is equivalent to passing a
Term('columns', list_of_columns_to_filter)
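For example (reusing the store and the 'df' table appended above):

store.select('df', columns=['A', 'B'])   # read back only columns A and B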
In [21]: df
Out[21]:
A B C
foo bar
foo one 0.239369 0.174122 -1.131794
two -1.948006 0.980347 -0.674429
three -0.361633 -0.761218 1.768215
bar one 0.152288 -0.862613 -0.210968
two -0.859278 1.498195 0.462413
baz two -0.647604 1.511487 -0.727189
three -0.342928 -0.007364 1.427674
qux one 0.104020 2.052171 -1.230963
two -0.019240 -1.713238 0.838912
three -0.637855 0.215109 -1.515362
In [22]: store.append('mi',df)
In [23]: store.select('mi')
Out[23]:
A B C
foo bar
foo one 0.239369 0.174122 -1.131794
two -1.948006 0.980347 -0.674429
three -0.361633 -0.761218 1.768215
bar one 0.152288 -0.862613 -0.210968
two -0.859278 1.498195 0.462413
baz two -0.647604 1.511487 -0.727189
three -0.342928 -0.007364 1.427674
qux one 0.104020 2.052171 -1.230963
two -0.019240 -1.713238 0.838912
three -0.637855 0.215109 -1.515362
A B C
foo bar
bar one 0.152288 -0.862613 -0.210968
two -0.859278 1.498195 0.462413
[2 rows x 3 columns]
Multi-table creation via append_to_multiple and selection via select_as_multiple can create/select from
multiple tables and return a combined result, by using where on a selector table.
In [28]: store
Out[28]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [30]: store.select('df2_mt')
Out[30]:
C D E F foo
2000-01-01 -1.573998 0.630925 -0.071659 -1.277640 bar
2000-01-02 1.275280 -1.199212 1.060780 1.673018 bar
2000-01-03 -0.710542 0.825392 1.557329 1.993441 bar
2000-01-04 0.132104 0.580923 -0.128750 1.445964 bar
2000-01-05 0.904578 -1.645852 -0.688741 0.228006 bar
2000-01-06 0.831767 0.228760 0.932498 -2.200069 bar
2000-01-07 -0.540770 -0.370038 1.298390 1.662964 bar
2000-01-08 -0.096145 1.717830 -0.462446 -0.112019 bar
[8 rows x 5 columns]
# as a multiple
In [31]: store.select_as_multiple(['df1_mt','df2_mt'], where = [ 'A>0','B>0' ], selector = 'df1_mt')
Out[31]:
A B C D E F foo
2000-01-03 1.249874 1.458210 -0.710542 0.825392 1.557329 1.993441 bar
2000-01-07 1.239198 0.185437 -0.540770 -0.370038 1.298390 1.662964 bar
[2 rows x 7 columns]
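A minimal sketch of the create/select round trip (the column split and file name are illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(8, 6),
                  index=pd.date_range('2000-01-01', periods=8),
                  columns=list('ABCDEF'))
df['foo'] = 'bar'

store = pd.HDFStore('multi_example.h5')

# A and B go to the selector table df1_mt, the remaining columns to df2_mt
store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
                         df, selector='df1_mt')

# query on the selector table, then return the combined columns from both tables
combined = store.select_as_multiple(['df1_mt', 'df2_mt'],
                                    where=['A>0', 'B>0'],
                                    selector='df1_mt')
store.close()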
Enhancements
• HDFStore now can read native PyTables table format tables
• You can pass nan_rep = 'my_nan_rep' to append, to change the default nan representation on disk
(which converts to/from np.nan); this defaults to nan.
• You can pass index to append. This defaults to True. This will automagically create indices on the
indexables and data columns of the table
• You can pass chunksize=an integer to append, to change the writing chunksize (default is 50000).
This will significantly lower your memory usage on writing.
• You can pass expectedrows=an integer to the first append, to set the TOTAL number of rows that
PyTables will expect. This will optimize read/write performance (a sketch combining these append keywords follows this list).
• Select now supports passing start and stop to limit the selection space.
• Greatly improved ISO8601 (e.g., yyyy-mm-dd) date parsing for file parsers (GH2698)
• Allow DataFrame.merge to handle combinatorial sizes too large for 64-bit integer (GH2690)
• Series now has unary negation (-series) and inversion (~series) operators (GH2686)
• DataFrame.plot now includes a logx parameter to change the x-axis to log scale (GH2327)
• Series arithmetic operators can now handle constant and ndarray input (GH2574)
• ExcelFile now takes a kind argument to specify the file type (GH2613)
• A faster implementation for Series.str methods (GH2602)
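A sketch combining the HDFStore append keywords above (the values are illustrative; store and df are assumed to exist as in the earlier examples):

store.append('df_tuned', df,
             nan_rep='my_nan_rep',    # on-disk representation of np.nan
             index=True,              # build PyTables indexes on indexables / data columns
             chunksize=100000,        # rows written per chunk
             expectedrows=1000000)    # row-count hint given to PyTables on the first append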
Bug Fixes
• HDFStore tables can now store float32 types correctly (cannot be mixed with float64 however)
• Fixed Google Analytics prefix when specifying request segment (GH2713).
• Function to reset Google Analytics token store so users can recover from improperly setup client secrets
(GH2687).
• Fixed groupby bug resulting in segfault when passing in MultiIndex (GH2706)
• Fixed bug where passing a Series with datetime64 values into to_datetime results in bogus output values
(GH2699)
• Fixed bug in pattern in HDFStore expressions when pattern is not a valid regex (GH2694)
• Fixed performance issues while aggregating boolean data (GH2692)
• When given a boolean mask key and a Series of new values, Series __setitem__ will now align the incoming
values with the original Series (GH2686)
• Fixed MemoryError caused by performing counting sort on sorting MultiIndex levels with a very large number
of combinatorial values (GH2684)
• Fixed bug that causes plotting to fail when the index is a DatetimeIndex with a fixed-offset timezone (GH2683)
• Corrected businessday subtraction logic when the offset is more than 5 bdays and the starting date is on a
weekend (GH2680)
• Fixed C file parser behavior when the file has more columns than data (GH2668)
• Fixed file reader bug that misaligned columns with data in the presence of an implicit column and a specified
usecols value
• DataFrames with numerical or datetime indices are now sorted prior to plotting (GH2609)
• Fixed DataFrame.from_records error when passed columns, index, but empty records (GH2633)
• Several bug fixed for Series operations when dtype is datetime64 (GH2689, GH2629, GH2626)
See the full release notes or issue tracker on GitHub for a complete list.
This is a major release from 0.9.1 and includes many new features and enhancements along with a large number of
bug fixes. There are also a number of important API changes that long-time pandas users should pay close attention
to.
The delimited file parsing engine (the guts of read_csv and read_table) has been rewritten from the ground up
and now uses a fraction of the amount of memory while parsing, while being 40% or more faster in most use cases (in
some cases much faster).
There are also many new features:
• Much-improved Unicode handling via the encoding option.
• Column filtering (usecols)
• Dtype specification (dtype argument)
• Ability to specify strings to be recognized as True/False
• Ability to yield NumPy record arrays (as_recarray)
• High performance delim_whitespace option
• Decimal format (e.g. European format) specification
• Easier CSV dialect options: escapechar, lineterminator, quotechar, etc.
• More robust handling of many exceptional kinds of files observed in the wild
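A minimal sketch exercising several of the new parser options above (the data is illustrative):

import pandas as pd
from io import StringIO

data = "a;b;c\n1;Yes;2,5\n3;No;4,0\n"

df = pd.read_csv(StringIO(data),
                 sep=';',
                 usecols=['a', 'b', 'c'],   # column filtering
                 dtype={'a': 'int64'},      # per-column dtype specification
                 true_values=['Yes'],       # strings recognized as True / False
                 false_values=['No'],
                 decimal=',')               # European-style decimal format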
In [3]: df
Out[3]:
0 1 2 3
2000-01-01 -0.134024 -0.205969 1.348944 -1.198246
2000-01-02 -1.626124 0.982041 0.059493 -0.460111
2000-01-03 -1.565401 -0.025706 0.942864 2.502156
2000-01-04 -0.302741 0.261551 -0.066342 0.897097
2000-01-05 0.268766 -1.225092 0.582752 -1.490764
2000-01-06 -0.639757 -0.952750 -0.892402 0.505987
[6 rows x 4 columns]
# deprecated now
In [4]: df - df[0]
Out[4]:
0 1 2 3
2000-01-01 0.0 -0.071946 1.482967 -1.064223
2000-01-02 0.0 2.608165 1.685618 1.166013
2000-01-03 0.0 1.539695 2.508265 4.067556
2000-01-04 0.0 0.564293 0.236399 1.199839
[6 rows x 4 columns]
You will get a deprecation warning in the 0.10.x series, and the deprecated functionality will be removed in 0.11 or
later.
Altered resample default behavior
The default time series resample binning behavior of daily D and higher frequencies has been changed to
closed='left', label='left'. Lower frequencies are unaffected. The prior defaults were causing a great
deal of confusion for users, especially resampling data to daily frequency (which labeled the aggregated group with
the end of the interval: the next day).
In [3]: series
Out[3]:
2000-01-01 00:00:00 0
2000-01-01 04:00:00 1
2000-01-01 08:00:00 2
2000-01-01 12:00:00 3
2000-01-01 16:00:00 4
2000-01-01 20:00:00 5
2000-01-02 00:00:00 6
2000-01-02 04:00:00 7
2000-01-02 08:00:00 8
2000-01-02 12:00:00 9
2000-01-02 16:00:00 10
2000-01-02 20:00:00 11
2000-01-03 00:00:00 12
2000-01-03 04:00:00 13
2000-01-03 08:00:00 14
2000-01-03 12:00:00 15
2000-01-03 16:00:00 16
2000-01-03 20:00:00 17
2000-01-04 00:00:00 18
2000-01-04 04:00:00 19
2000-01-04 08:00:00 20
2000-01-04 12:00:00 21
2000-01-04 16:00:00 22
2000-01-04 20:00:00 23
2000-01-05 00:00:00 24
Freq: 4H, dtype: int64
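Written out explicitly, the new daily defaults correspond to something like the following sketch (.sum() is just an arbitrary aggregation):

import numpy as np
import pandas as pd

series = pd.Series(np.arange(25),
                   index=pd.date_range('2000-01-01', periods=25, freq='4H'))

# the value at 2000-01-02 00:00 now falls into the 2000-01-02 bin, not the 2000-01-01 bin
daily = series.resample('D', closed='left', label='left').sum()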
• Infinity and negative infinity are no longer treated as NA by isnull and notnull. That they ever were was
a relic of early pandas. This behavior can be re-enabled globally by the mode.use_inf_as_null option:
In [7]: pd.isnull(s)
Out[7]:
0 False
1 False
2 False
3 False
Length: 4, dtype: bool
In [8]: s.fillna(0)
Out[8]:
0 1.500000
1 inf
2 3.400000
3 -inf
Length: 4, dtype: float64
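The outputs below assume the option has been switched on, e.g. with:

pd.set_option('use_inf_as_null', True)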
In [10]: pd.isnull(s)
Out[10]:
0 False
1 True
2 False
3 True
Length: 4, dtype: bool
In [11]: s.fillna(0)
Out[11]:
0 1.5
1 0.0
2 3.4
3 0.0
Length: 4, dtype: float64
In [12]: pd.reset_option('use_inf_as_null')
• Methods with the inplace option now all return None instead of the calling object. E.g. code written like
df = df.fillna(0, inplace=True) may stop working. To fix, simply delete the unnecessary variable
assignment.
• pandas.merge no longer sorts the group keys (sort=False) by default. This was done for performance
reasons: the group-key sorting is often one of the more expensive parts of the computation and is often unnec-
essary.
• The default column names for a file with no header have been changed to the integers 0 through N - 1. This
is to create consistency with the DataFrame constructor with no columns specified. The v0.9.0 behavior (names
X0, X1, . . . ) can be reproduced by specifying prefix='X':
In [7]: print(data)
a,b,c
1,Yes,2
3,No,4
X0 X1 X2
0 a b c
1 1 Yes 2
2 3 No 4
[3 rows x 3 columns]
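Both namings can be reproduced with read_csv, for example:

import pandas as pd
from io import StringIO

data = "a,b,c\n1,Yes,2\n3,No,4\n"

pd.read_csv(StringIO(data), header=None)              # columns named 0, 1, 2 (new default)
pd.read_csv(StringIO(data), header=None, prefix='X')  # columns named X0, X1, X2 (old behavior)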
• Values like 'Yes' and 'No' are not interpreted as boolean by default, though this can be controlled by new
true_values and false_values arguments:
In [10]: print(data)
a,b,c
1,Yes,2
3,No,4
In [11]: pd.read_csv(StringIO(data))
Out[11]:
a b c
0 1 Yes 2
1 3 No 4
[2 rows x 3 columns]
a b c
0 1 True 2
1 3 False 4
[2 rows x 3 columns]
• The file parsers will not recognize non-string values arising from a converter function as NA if passed in the
na_values argument. It’s better to do post-processing using the replace function instead.
• Calling fillna on Series or DataFrame with no arguments is no longer valid code. You must either specify a
fill value or an interpolation method:
In [13]: s = Series([np.nan, 1., 2., np.nan, 4])
In [14]: s
Out[14]:
0 NaN
1 1.0
2 2.0
3 NaN
4 4.0
Length: 5, dtype: float64
In [15]: s.fillna(0)
Out[15]:
0 0.0
1 1.0
2 2.0
3 0.0
4 4.0
Length: 5, dtype: float64
In [16]: s.fillna(method='pad')
Out[16]:
0 NaN
1 1.0
2 2.0
3 2.0
4 4.0
Length: 5, dtype: float64
• Series.apply will now operate on a returned value from the applied function that is itself a Series, and
possibly upcast the result to a DataFrame
In [18]: def f(x):
....: return Series([ x, x**2 ], index = ['x', 'x^2'])
....:
In [19]: s = Series(np.random.rand(5))
In [20]: s
Out[20]:
0 0.717478
1 0.815199
In [21]: s.apply(f)
Out[21]:
x x^2
0 0.717478 0.514775
1 0.815199 0.664550
2 0.452478 0.204737
3 0.848385 0.719757
4 0.235477 0.055449
[5 rows x 2 columns]
In [22]: get_option("display.max_rows")
Out[22]: 15
Instead of printing the summary information, pandas now splits the string representation across multiple rows by
default:
In [24]: wide_frame
Out[24]:
0 1 2 3 4 5 6 ... 9 10 11 12 13 14 15
0 -0.681624 0.191356 1.180274 -0.834179 0.703043 0.166568 -0.583599 ... -0.882554 1.209871 -0.941235 0.863067 -0.336232 -0.976847 0.033862
1 0.441522 -0.316864 -0.017062 1.570114 -0.360875 -0.880096 0.235532 ... -1.702547 -1.621234 -0.906840 1.014601 -0.475108 -0.358944 1.262942
2 -0.412451 -0.462580 0.422194 0.288403 -0.487393 -0.777639 0.055865 ... 0.246392 0.965887 0.246354 -0.727728 -0.094414 -0.276854 0.158399
3 -0.277255 1.331263 0.585174 -0.568825 -0.719412 1.191340 -0.456362 ... 0.752889 -1.195795 -1.425911 -0.548829 0.774225 0.740501 1.510263
4 -1.642511 0.432560 1.218080 -0.564705 -0.581790 0.286071 0.048725 ... 0.054399 0.241963 -0.471786 0.314510 -0.059986 -2.069319 -1.115104
[5 rows x 16 columns]
The old behavior of printing out summary information can be achieved via the ‘expand_frame_repr’ print option:
In [25]: pd.set_option('expand_frame_repr', False)
In [26]: wide_frame
Out[26]:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 -0.681624 0.191356 1.180274 -0.834179 0.703043 0.166568 -0.583599 -1.201796 -1.422811 -0.882554 1.209871 -0.941235 0.863067 -0.336232 -0.976847 0.033862
1 0.441522 -0.316864 -0.017062 1.570114 -0.360875 -0.880096 0.235532 0.207232 -1.983857 -1.702547 -1.621234 -0.906840 1.014601 -0.475108 -0.358944 1.262942
2 -0.412451 -0.462580 0.422194 0.288403 -0.487393 -0.777639 0.055865 1.383381 0.085638 0.246392 0.965887 0.246354 -0.727728 -0.094414 -0.276854 0.158399
3 -0.277255 1.331263 0.585174 -0.568825 -0.719412 1.191340 -0.456362 0.089931 0.776079 0.752889 -1.195795 -1.425911 -0.548829 0.774225 0.740501 1.510263
4 -1.642511 0.432560 1.218080 -0.564705 -0.581790 0.286071 0.048725 1.002440 1.276582 0.054399 0.241963 -0.471786 0.314510 -0.059986 -2.069319 -1.115104
[5 rows x 16 columns]
The width of each line can be changed via ‘line_width’ (80 by default):
In [27]: pd.set_option('line_width', 40)
---------------------------------------------------------------------------
OptionError Traceback (most recent call last)
<ipython-input-27-b8740c4a0a1b> in <module>()
----> 1 pd.set_option('line_width', 40)
In [28]: wide_frame
Out[28]:
[5 rows x 16 columns]
Docs for PyTables Table format & several enhancements to the API. Here is a taste of what to expect.
In [31]: df
Out[31]:
A B C
2000-01-01 -0.369325 -1.502617 -0.376280
2000-01-02 0.511936 -0.116412 -0.625256
2000-01-03 -0.550627 1.261433 -0.552429
2000-01-04 1.695803 -1.025917 -0.910942
2000-01-05 0.426805 -0.131749 0.432600
2000-01-06 0.044671 -0.341265 1.844536
2000-01-07 -2.036047 0.000830 -0.955697
2000-01-08 -0.898872 -0.725411 0.059904
[8 rows x 3 columns]
In [36]: store
Out[36]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
[8 rows x 3 columns]
In [39]: wp
Out[39]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
# storing a panel
In [40]: store.append('wp',wp)
In [43]: store.select('wp')
Out[43]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-03 00:00:00
Minor_axis axis: A to D
# deleting a store
In [44]: del store['df']
In [45]: store
Enhancements
• added ability to use hierarchical keys
In [46]: store.put('foo/bar/bah', df)
In [49]: store
Out[49]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [51]: store
Out[51]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [53]: df['int'] = 1
In [54]: store.append('df',df)
In [56]: df1
Out[56]:
A B C string int
2000-01-01 -0.369325 -1.502617 -0.376280 string 1
2000-01-02 0.511936 -0.116412 -0.625256 string 1
2000-01-03 -0.550627 1.261433 -0.552429 string 1
2000-01-04 1.695803 -1.025917 -0.910942 string 1
2000-01-05 0.426805 -0.131749 0.432600 string 1
2000-01-06 0.044671 -0.341265 1.844536 string 1
2000-01-07 -2.036047 0.000830 -0.955697 string 1
2000-01-08 -0.898872 -0.725411 0.059904 string 1
[8 rows x 5 columns]
In [57]: df1.get_dtype_counts()
Out[57]:
float64 3
object 1
int64 1
Adding experimental support for Panel4D and factory functions to create n-dimensional named panels. Here is a taste
of what to expect.
In [59]: p4d
Out[59]:
<class 'pandas.core.panelnd.Panel4D'>
Dimensions: 2 (labels) x 2 (items) x 5 (major_axis) x 4 (minor_axis)
Labels axis: Label1 to Label2
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
See the full release notes or issue tracker on GitHub for a complete list.
This is a bugfix release from 0.9.0 and includes several new features and enhancements along with a large number of
bug fixes. The new features include by-column sort order for DataFrame and Series, improved NA handling for the
rank method, masking functions for DataFrame, and intraday time-series filtering for DataFrame.
• Series.sort, DataFrame.sort, and DataFrame.sort_index can now be specified in a per-column manner to support
multiple sort orders (GH928)
Out[3]:
A B C
3 0 1 1
4 0 1 1
2 0 0 1
0 1 0 0
1 1 0 0
5 1 0 0
• DataFrame.rank now supports additional argument values for the na_option parameter so missing values can
be assigned either the largest or the smallest rank (GH1508, GH2159)
In [3]: df.rank()
Out[3]:
A B C
0 3.0 1.0 3.0
1 1.0 3.0 2.0
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 2.0 2.0 1.0
[6 rows x 3 columns]
In [4]: df.rank(na_option='top')
Out[4]:
A B C
0 6.0 4.0 6.0
1 4.0 6.0 5.0
2 2.0 2.0 2.0
3 2.0 2.0 2.0
4 2.0 2.0 2.0
5 5.0 5.0 4.0
[6 rows x 3 columns]
In [5]: df.rank(na_option='bottom')
Out[5]:
A B C
0 3.0 1.0 3.0
1 1.0 3.0 2.0
2 5.0 5.0 5.0
3 5.0 5.0 5.0
4 5.0 5.0 5.0
5 2.0 2.0 1.0
[6 rows x 3 columns]
• DataFrame has new where and mask methods to select values according to a given boolean mask (GH2109,
GH2151)
DataFrame currently supports slicing via a boolean vector the same length as the DataFrame (inside
the []). The returned DataFrame has the same number of columns as the original, but is sliced on its
index.
In [7]: df
Out[7]:
A B C
0 -1.101581 -1.187831 0.630693
1 2.369983 0.333769 -0.870464
2 1.118760 -0.224382 0.642489
3 0.961751 -1.848369 0.440883
4 1.235390 1.615529 -0.303272
[5 rows x 3 columns]
A B C
1 2.369983 0.333769 -0.870464
2 1.118760 -0.224382 0.642489
3 0.961751 -1.848369 0.440883
4 1.235390 1.615529 -0.303272
[4 rows x 3 columns]
If a DataFrame is sliced with a DataFrame based boolean condition (with the same size as the original
DataFrame), then a DataFrame the same size (index and columns) as the original is returned, with
elements that do not meet the boolean condition as NaN. This is accomplished via the new method
DataFrame.where. In addition, where takes an optional other argument for replacement.
In [9]: df[df>0]
Out[9]:
A B C
0 NaN NaN 0.630693
1 2.369983 0.333769 NaN
2 1.118760 NaN 0.642489
[5 rows x 3 columns]
In [10]: df.where(df>0)
Out[10]:
A B C
0 NaN NaN 0.630693
1 2.369983 0.333769 NaN
2 1.118760 NaN 0.642489
3 0.961751 NaN 0.440883
4 1.235390 1.615529 NaN
[5 rows x 3 columns]
In [11]: df.where(df>0,-df)
Out[11]:
A B C
0 1.101581 1.187831 0.630693
1 2.369983 0.333769 0.870464
2 1.118760 0.224382 0.642489
3 0.961751 1.848369 0.440883
4 1.235390 1.615529 0.303272
[5 rows x 3 columns]
Furthermore, where now aligns the input boolean condition (ndarray or DataFrame), such that partial
selection with setting is possible. This is analogous to partial setting via .ix (but on the contents rather
than the axis labels)
In [14]: df2
Out[14]:
A B C
0 -1.101581 -1.187831 0.630693
1 3.000000 3.000000 -0.870464
2 3.000000 -0.224382 3.000000
3 3.000000 -1.848369 3.000000
4 1.235390 1.615529 -0.303272
[5 rows x 3 columns]
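A minimal sketch of the partial-setting pattern that yields a frame like df2 above (assuming df has the default integer index shown earlier):

df2 = df.copy()
df2[df2[1:4] > 0] = 3   # only rows 1-3 participate; their positive cells are set to 3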
In [15]: df.mask(df<=0)
Out[15]:
A B C
0 NaN NaN 0.630693
1 2.369983 0.333769 NaN
2 1.118760 NaN 0.642489
[5 rows x 3 columns]
In [16]: xl = ExcelFile('data/test.xls')
[7 rows x 4 columns]
• Added option to disable pandas-style tick locators and formatters using series.plot(x_compat=True) or
pandas.plot_params['x_compat'] = True (GH2205)
• Existing TimeSeries methods at_time and between_time were added to DataFrame (GH2149)
• DataFrame.dot can now accept ndarrays (GH2042)
• DataFrame.drop now supports non-unique indexes (GH2101)
• Panel.shift now supports negative periods (GH2164)
• DataFrame now support unary ~ operator (GH2110)
• Upsampling data with a PeriodIndex will result in a higher frequency TimeSeries that spans the original time
window
In [4]: s.resample('M')
Out[4]:
2012-01 -1.471992
2012-02 NaN
2012-03 NaN
2012-04 -0.493593
2012-05 NaN
2012-06 NaN
Freq: M, dtype: float64
• Period.end_time now returns the last nanosecond in the time interval (GH2124, GH2125, GH1764)
In [18]: p = Period('2012')
In [19]: p.end_time
Out[19]: Timestamp('2012-12-31 23:59:59.999999999')
• File parsers no longer coerce to float or bool for columns that have custom converters specified (GH2184)
See the full release notes or issue tracker on GitHub for a complete list.
This is a major release from 0.8.1 and includes several new features and enhancements along with a large number of
bug fixes. New features include vectorized unicode encoding/decoding for Series.str, to_latex method to DataFrame,
more flexible parsing of boolean values, and enabling the download of options data from Yahoo! Finance.
• Add encode and decode for unicode handling to vectorized string processing methods in Series.str (GH1706)
• Add DataFrame.to_latex method (GH1735)
• Add convenient expanding window equivalents of all rolling_* ops (GH1785)
• Add Options class to pandas.io.data for fetching options data from Yahoo! Finance (GH1748, GH1739)
• More flexible parsing of boolean values (Yes, No, TRUE, FALSE, etc) (GH1691, GH1295)
• Add level parameter to Series.reset_index
• TimeSeries.between_time can now select times across midnight (GH1871)
• Series constructor can now handle generator as input (GH1679)
• DataFrame.dropna can now take multiple axes (tuple/list) as input (GH924)
• Enable skip_footer parameter in ExcelFile.parse (GH1843)
• The default column names when header=None and no columns names passed to functions like read_csv
has changed to be more Pythonic and amenable to attribute access:
In [3]: df
Out[3]:
0 1 2
0 0 0 1
1 1 1 0
2 0 1 0
[3 rows x 3 columns]
• Creating a Series from another Series, passing an index, will cause reindexing to happen inside rather than treat-
ing the Series like an ndarray. Technically improper usages like Series(df[col1], index=df[col2])
that worked before “by accident” (this was never intended) will lead to all NA Series in some cases. To be per-
fectly clear:
In [5]: s1
Out[5]:
0 1
1 2
2 3
Length: 3, dtype: int64
In [7]: s2
Out[7]:
foo NaN
bar NaN
baz NaN
Length: 3, dtype: float64
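A minimal sketch of the construction described above:

import pandas as pd

s1 = pd.Series([1, 2, 3])                         # labels 0, 1, 2
s2 = pd.Series(s1, index=['foo', 'bar', 'baz'])   # reindexes s1; unmatched labels become NaN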
This release includes a few new features, performance enhancements, and over 30 bug fixes from 0.8.0. New features
include notably NA friendly string processing functionality and a series of new plot types and options.
This is a major release from 0.7.3 and includes extensive work on the time series handling and processing infrastructure
as well as a great deal of new functionality throughout the library. It includes over 700 commits from more than 20
distinct authors. Most pandas 0.7.3 and earlier users should not experience any issues upgrading, but due to the
migration to the NumPy datetime64 dtype, there may be a number of bugs and incompatibilities lurking. Lingering
incompatibilities will be fixed ASAP in a 0.8.1 release if necessary. See the full release notes or issue tracker on
GitHub for a complete list.
All objects can now work with non-unique indexes. Data alignment / join operations work according to SQL join
semantics (including, if applicable, index duplication in many-to-many joins)
Time series data are now represented using NumPy’s datetime64 dtype; thus, pandas 0.8.0 now requires at least NumPy
1.6. It has been tested and verified to work with the development version (1.7+) of NumPy as well which includes some
significant user-facing API changes. NumPy 1.6 also has a number of bugs having to do with nanosecond resolution
data, so I recommend that you steer clear of NumPy 1.6’s datetime64 API functions (though limited as they are) and
only interact with this data using the interface that pandas provides.
See the end of the 0.8.0 section for a “porting” guide listing potential issues for users migrating legacy codebases from
pandas 0.7 or earlier to 0.8.0.
Bug fixes to the 0.7.x series for legacy NumPy < 1.6 users will be provided as they arise. There will be no more further
development in 0.7.x beyond bug fixes.
Note: With this release, legacy scikits.timeseries users should be able to port their code to use pandas.
• New datetime64 representation speeds up join operations and data alignment, reduces memory usage, and
improves serialization / deserialization performance significantly over datetime.datetime
• High performance and flexible resample method for converting from high-to-low and low-to-high frequency.
Supports interpolation, user-defined aggregation functions, and control over how the intervals and result labeling
are defined. A suite of high performance Cython/C-based resampling functions (including Open-High-Low-
Close) have also been implemented.
• Revamp of frequency aliases and support for frequency shortcuts like ‘15min’, or ‘1h30min’
• New DatetimeIndex class supports both fixed frequency and irregular time series. Replaces now deprecated
DateRange class
• New PeriodIndex and Period classes for representing time spans and performing calendar logic, in-
cluding the 12 fiscal quarterly frequencies. This is a partial port of, and a substantial
enhancement to, elements of the scikits.timeseries codebase. Support for conversion between PeriodIndex and
DatetimeIndex
• New Timestamp data type subclasses datetime.datetime, providing the same interface while enabling working
with nanosecond-resolution data. Also provides easy time zone conversions.
• Enhanced support for time zones. Add tz_convert and tz_localize methods to TimeSeries and DataFrame.
All timestamps are stored as UTC; Timestamps from DatetimeIndex objects with time zone set will be localized
to localtime. Time zone conversions are therefore essentially free. User needs to know very little about pytz
library now; only time zone names as strings are required. Time zone-aware timestamps are equal if and only
if their UTC timestamps match. Operations between time zone-aware time series with different time zones will
result in a UTC-indexed time series.
• Time series string indexing conveniences / shortcuts: slice years, year and month, and index values with strings
• Enhanced time series plotting; adaptation of scikits.timeseries matplotlib-based plotting code
• New date_range, bdate_range, and period_range factory functions
• Robust frequency inference function infer_freq and inferred_freq property of DatetimeIndex, with option
to infer frequency on construction of DatetimeIndex
• to_datetime function efficiently parses array of strings to DatetimeIndex. DatetimeIndex will parse array or
list of strings to datetime64
• Optimized support for datetime64-dtype data in Series and DataFrame columns
• New NaT (Not-a-Time) type to represent NA in timestamp arrays
• Optimize Series.asof for looking up “as of” values for arrays of timestamps
• Milli, Micro, Nano date offset objects
• Can index time series with datetime.time objects to select all data at particular time of day (TimeSeries.
at_time) or between two times (TimeSeries.between_time)
• Add tshift method for leading/lagging using the frequency (if any) of the index, as opposed to a naive lead/lag
using shift
• New cut and qcut functions (like R’s cut function) for computing a categorical variable from a continuous
variable by binning values either into value-based (cut) or quantile-based (qcut) bins
• Rename Factor to Categorical and add a number of usability features
• Add limit argument to fillna/reindex
• More flexible multiple function application in GroupBy, and can pass list (name, function) tuples to get result in
particular order with given names
• Add flexible replace method for efficiently substituting values
• Enhanced read_csv/read_table for reading time series data and converting multiple columns to dates
• Add comments option to parser functions: read_csv, etc.
• Add dayfirst option to parser functions for parsing international DD/MM/YYYY dates
• Allow the user to specify the CSV reader dialect to control quoting etc.
• Handling thousands separators in read_csv to improve integer parsing.
• Enable unstacking of multiple levels in one shot. Alleviate pivot_table bugs (empty columns being intro-
duced)
• Move to klib-based hash tables for indexing; better performance and less memory usage than Python’s dict
• Add first, last, min, max, and prod optimized GroupBy functions
• New ordered_merge function
• Add flexible comparison instance methods eq, ne, lt, gt, etc. to DataFrame, Series
• Improve scatter_matrix plotting function and add histogram or kernel density estimates to diagonal
• Add ‘kde’ plot option for density plots
• Support for converting DataFrame to R data.frame through rpy2
• Improved support for complex numbers in Series and DataFrame
• Add pct_change method to all data structures
• Add max_colwidth configuration option for DataFrame console output
• Interpolate Series values using index values
• Can select multiple columns from GroupBy
In [1]: plt.figure()
Out[1]: <Figure size 640x480 with 0 Axes>
In [2]: fx['FR'].plot(style='g')
Out[2]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20c12d9cc0>
Vytautas Jancauskas, the 2012 GSOC participant, has added many new plot types. For example, 'kde' is a new
option:
In [4]: s = Series(np.concatenate((np.random.randn(1000),
...: np.random.randn(1000) * 0.5 + 3)))
...:
In [5]: plt.figure()
Out[5]: <Figure size 640x480 with 0 Axes>
In [7]: s.plot(kind='kde')
Out[7]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d55b69b0>
• Deprecation of offset, time_rule, and timeRule arguments names in time series functions. Warnings
will be printed until pandas 0.9 or 1.0.
The major change that may affect you in pandas 0.8.0 is that time series indexes use NumPy’s datetime64 data
type instead of dtype=object arrays of Python’s built-in datetime.datetime objects. DateRange has been
replaced by DatetimeIndex but otherwise behaved identically. But, if you have code that converts DateRange
or Index objects that used to contain datetime.datetime values to plain NumPy arrays, you may have bugs
lurking with code using scalar values because you are handing control over to NumPy:
In [10]: rng[5]
Out[10]: Timestamp('2000-01-06 00:00:00', freq='D')
In [14]: type(scalar_val)
Out[14]: numpy.datetime64
pandas’s Timestamp object is a subclass of datetime.datetime that has nanosecond support (the
nanosecond field stores the nanosecond value between 0 and 999). It should substitute directly into any code that
used datetime.datetime values before. Thus, I recommend not casting DatetimeIndex to regular NumPy
arrays.
If you have code that requires an array of datetime.datetime objects, you have a couple of options. First, the
astype(object) method of DatetimeIndex produces an array of Timestamp objects:
In [16]: stamp_array
Out[16]:
Index([2000-01-01 00:00:00, 2000-01-02 00:00:00, 2000-01-03 00:00:00,
2000-01-04 00:00:00, 2000-01-05 00:00:00, 2000-01-06 00:00:00,
2000-01-07 00:00:00, 2000-01-08 00:00:00, 2000-01-09 00:00:00,
2000-01-10 00:00:00],
dtype='object')
In [17]: stamp_array[5]
Out[17]: Timestamp('2000-01-06 00:00:00', freq='D')
In [19]: dt_array
Out[19]:
array([datetime.datetime(2000, 1, 1, 0, 0),
datetime.datetime(2000, 1, 2, 0, 0),
datetime.datetime(2000, 1, 3, 0, 0),
datetime.datetime(2000, 1, 4, 0, 0),
datetime.datetime(2000, 1, 5, 0, 0),
datetime.datetime(2000, 1, 6, 0, 0),
datetime.datetime(2000, 1, 7, 0, 0),
datetime.datetime(2000, 1, 8, 0, 0),
datetime.datetime(2000, 1, 9, 0, 0),
datetime.datetime(2000, 1, 10, 0, 0)], dtype=object)
In [20]: dt_array[5]
matplotlib knows how to handle datetime.datetime but not Timestamp objects. While I recommend that you
plot time series using TimeSeries.plot, you can either use to_pydatetime or register a converter for the
Timestamp type. See matplotlib documentation for more on this.
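A minimal sketch of both approaches (matplotlib and the data here are illustrative):

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = pd.date_range('2000-01-01', periods=10)
ts = pd.Series(np.random.randn(10), index=rng)

ts.plot()                                  # let pandas drive the date handling
plt.plot(rng.to_pydatetime(), ts.values)   # or hand matplotlib datetime.datetime objects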
Warning: There are bugs in the user-facing API with the nanosecond datetime64 unit in NumPy 1.6. In particular,
the string version of the array shows garbage values, and conversion to dtype=object is similarly broken.
In [21]: rng = date_range('1/1/2000', periods=10)
In [22]: rng
Out[22]:
DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04',
'2000-01-05', '2000-01-06', '2000-01-07', '2000-01-08',
'2000-01-09', '2000-01-10'],
dtype='datetime64[ns]', freq='D')
In [23]: np.asarray(rng)
Out[23]:
array(['2000-01-01T00:00:00.000000000', '2000-01-02T00:00:00.000000000',
'2000-01-03T00:00:00.000000000', '2000-01-04T00:00:00.000000000',
'2000-01-05T00:00:00.000000000', '2000-01-06T00:00:00.000000000',
'2000-01-07T00:00:00.000000000', '2000-01-08T00:00:00.000000000',
'2000-01-09T00:00:00.000000000', '2000-01-10T00:00:00.000000000'], dtype='datetime64[ns]')
In [25]: converted[5]
Out[25]: 947116800000000000
Trust me: don’t panic. If you are using NumPy 1.6 and restrict your interaction with datetime64 values to
pandas’s API you will be just fine. There is nothing wrong with the data-type (a 64-bit integer internally); all of the
important data processing happens in pandas and is heavily tested. I strongly recommend that you do not work
directly with datetime64 arrays in NumPy 1.6 and only use the pandas API.
Support for non-unique indexes: In the latter case, you may have code inside a try: ... except: block that
failed due to the index not being unique. In many cases it will no longer fail (some methods like append still check for
uniqueness unless disabled). However, all is not lost: you can inspect index.is_unique and raise an exception
explicitly if it is False or go to a different code branch.
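For example, a guard along these lines (df stands for any object whose index may contain duplicates):

if not df.index.is_unique:
    # fall back to a slower path, or fail loudly instead of silently mis-aligning
    raise ValueError("index contains duplicate labels")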
This is a minor release from 0.7.2 and fixes many minor bugs and adds a number of nice new features. There are
also a couple of API changes to note; these should not affect very many users, and we are inclined to call them “bug
fixes” even though they do constitute a change in behavior. See the full release notes or issue tracker on GitHub for a
complete list.
• Add stacked argument to Series and DataFrame’s plot method for stacked bar plots.
df.plot(kind='bar', stacked=True)
df.plot(kind='barh', stacked=True)
Reverted some changes to how NA values (represented typically as NaN or None) are handled in non-numeric Series:
In [1]: series = Series(['Steve', np.nan, 'Joe'])
In comparisons, NA / NaN will always come through as False except with != which is True. Be very careful with
boolean arithmetic, especially negation, in the presence of NA data. You may wish to add an explicit NA filter into
boolean array operations if you are worried about this:
In [4]: mask = series == 'Steve'
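A minimal sketch of such a filter:

series[mask & series.notnull()]   # exclude NA entries explicitly when masking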
While propagating NA in comparisons may seem like the right behavior to some users (and you could argue on purely
technical grounds that this is the right thing to do), the evaluation was made that propagating NA everywhere, including
in numerical arrays, would cause a large amount of problems for users. Thus, a “practicality beats purity” approach
was taken. This issue may be revisited at some point in the future.
When calling apply on a grouped Series, the return value will also be a Series, to be more consistent with the
groupby behavior with DataFrame:
In [6]: df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
...: 'foo', 'bar', 'foo', 'foo'],
...: 'B' : ['one', 'one', 'two', 'three',
...: 'two', 'two', 'one', 'three'],
...: 'C' : np.random.randn(8), 'D' : np.random.randn(8)})
...:
In [7]: df
Out[7]:
A B C D
0 foo one -0.841015 0.459840
1 bar one 0.114219 -0.253040
2 foo two -0.405617 -0.261128
3 bar three 1.240678 0.406604
4 foo two -0.122828 -1.022256
5 bar two 1.525196 -0.882785
6 foo one 0.520047 1.793331
7 foo three 0.163834 -0.429688
[8 rows x 4 columns]
In [9]: grouped.describe()
[2 rows x 8 columns]
A
bar 3 1.240678
5 1.525196
foo 7 0.163834
6 0.520047
Name: C, Length: 4, dtype: float64
This release targets bugs in 0.7.1, and adds a few minor features.
This release includes a few new features and addresses over a dozen bugs in 0.7.0.
• Add to_clipboard function to pandas namespace for writing objects to the system clipboard (GH774)
• Add itertuples method to DataFrame for iterating through the rows of a dataframe as tuples (GH818)
• Add ability to pass fill_value and method to DataFrame and Series align method (GH806, GH807)
• Add fill_value option to reindex, align methods (GH784)
• Enable concat to produce DataFrame from Series (GH787)
• Add between method to Series (GH802)
• Add HTML representation hook to DataFrame for the IPython HTML notebook (GH773)
• Support for reading Excel 2007 XML documents using openpyxl
• New unified merge function for efficiently performing full gamut of database / relational-algebra operations.
Refactored existing join methods to use the new infrastructure, resulting in substantial performance gains
(GH220, GH249, GH267)
• New unified concatenation function for concatenating Series, DataFrame or Panel objects along an axis. Can
form union or intersection of the other axes. Improves performance of Series.append and DataFrame.
append (GH468, GH479, GH273)
• Can pass multiple DataFrames to DataFrame.append to concatenate (stack) and multiple Series to Series.
append too
• Can pass list of dicts (e.g., a list of JSON objects) to DataFrame constructor (GH526)
• You can now set multiple columns in a DataFrame via __setitem__ (e.g. df[['A', 'B']] = value), useful for transformation (GH342)
• Handle differently-indexed output values in DataFrame.apply (GH498)
One of the potentially riskiest API changes in 0.7.0, but also one of the most important, was a complete review of how
integer indexes are handled with regard to label-based indexing. Here is an example:
In [4]: s
Out[4]:
0 0.446246
2 -0.500268
4 0.814725
6 -0.312744
8 1.098892
10 1.306330
12 -0.366970
14 -0.030890
16 1.608095
18 -0.023287
Length: 10, dtype: float64
In [5]: s[0]
Out[5]: 0.44624598505731339
In [6]: s[2]
Out[6]: -0.500268093241102
In [7]: s[4]
Out[7]: 0.8147247587659604
This is all exactly identical to the behavior before. However, if you ask for a key not contained in the Series, in
versions 0.6.1 and prior, Series would fall back on a location-based lookup. This now raises a KeyError:
In [2]: s[1]
KeyError: 1
In [4]: df
0 1 2 3
0 0.88427 0.3363 -0.1787 0.03162
2 0.14451 -0.1415 0.2504 0.58374
4 -1.44779 -0.9186 -1.4996 0.27163
6 -0.26598 -2.4184 -0.2658 0.11503
8 -0.58776 0.3144 -0.8566 0.61941
10 0.10940 -0.7175 -1.0108 0.47990
12 -1.16919 -0.3087 -0.6049 -0.43544
14 -0.07337 0.3410 0.0424 -0.16037
In [5]: df.ix[3]
KeyError: 3
In order to support purely integer-based indexing, the following methods have been added:
Method Description
Series.iget_value(i) Retrieve value stored at location i
Series.iget(i) Alias for iget_value
DataFrame.irow(i) Retrieve the i-th row
DataFrame.icol(j) Retrieve the j-th column
DataFrame.iget_value(i, j) Retrieve the value at row i and column j
Label-based slicing using ix now requires that the index be sorted (monotonic) unless both the start and endpoint are
contained in the index:
In [1]: s = Series(randn(6), index=list('gmkaec'))
In [2]: s
Out[2]:
g -1.182230
m -0.276183
k -0.243550
a 1.628992
e 0.073308
c -0.539890
dtype: float64
If the index had been sorted, the “range selection” would have been possible:
In [4]: s2 = s.sort_index()
In [5]: s2
Out[5]:
a 1.628992
c -0.539890
e 0.073308
g -1.182230
k -0.243550
m -0.276183
dtype: float64
In [6]: s2.ix['b':'h']
Out[6]:
c -0.539890
As a notational convenience, you can pass a sequence of labels or a label slice to a Series when getting and setting
values via [] (i.e. the __getitem__ and __setitem__ methods). The behavior will be the same as passing
similar input to ix except in the case of integer indexing:
In [8]: s = Series(randn(6), index=list('acegkm'))
In [9]: s
Out[9]:
a -0.800734
c -0.229737
e -0.781940
g 0.756053
k 2.613373
m -0.159310
Length: 6, dtype: float64
m -0.159310
a -0.800734
c -0.229737
e -0.781940
Length: 4, dtype: float64
In [11]: s['b':'l']
Out[11]:
c -0.229737
e -0.781940
g 0.756053
k 2.613373
Length: 4, dtype: float64
In [12]: s['c':'k']
Out[12]:
c -0.229737
e -0.781940
g 0.756053
k 2.613373
Length: 4, dtype: float64
In the case of integer indexes, the behavior will be exactly as before (shadowing ndarray):
In [13]: s = Series(randn(6), index=range(0, 12, 2))
In [15]: s[1:5]
Out[15]:
2 -0.707337
4 0.022862
6 0.306713
8 -0.162222
Length: 4, dtype: float64
If you wish to do indexing with sequences and slicing on an integer index with label semantics, use ix.
• Cythonized GroupBy aggregations no longer presort the data, thus achieving a significant speedup (GH93).
GroupBy aggregations with Python functions significantly sped up by clever manipulation of the ndarray data
type in Cython (GH496).
• Better error message in DataFrame constructor when passed column labels don’t match data (GH497)
• Substantially improve performance of multi-GroupBy aggregation when a Python function is passed, reuse
ndarray object in Cython (GH496)
• Can store objects indexed by tuples and floats in HDFStore (GH492)
• Don’t print length by default in Series.to_string, add length option (GH489)
• Improve Cython code for multi-groupby to aggregate without having to sort the data (GH93)
• Improve MultiIndex reindexing speed by storing tuples in the MultiIndex, test for backwards unpickling com-
patibility
• Improve column reindexing performance by using specialized Cython take function
• Further performance tweaking of Series.__getitem__ for standard use cases
• Avoid Index dict creation in some cases (i.e. when getting slices, etc.), regression from prior versions
• Friendlier error message in setup.py if NumPy not installed
• Use common set of NA-handling operations (sum, mean, etc.) in Panel class also (GH536)
• Default name assignment when calling reset_index on DataFrame with a regular (non-hierarchical) index
(GH476)
• Use Cythonized groupers when possible in Series/DataFrame stat ops with level parameter passed (GH545)
• Ported skiplist data structure to C to speed up rolling_median by about 5-10x in most typical use cases
(GH374)
• Improve memory usage of DataFrame.describe (do not copy data unnecessarily) (PR #425)
• Optimize scalar value lookups in the general case by 25% or more in Series and DataFrame
• Fix performance regression in cross-sectional count in DataFrame, affecting DataFrame.dropna speed
• Column deletion in DataFrame copies no data (computes views on blocks) (GH #158)
• Added convenience set_index function for creating a DataFrame index from its existing columns
• Implemented groupby hierarchical index level name (GH223)
• Added support for different delimiters in DataFrame.to_csv (GH244)
• TODO: DOCS ABOUT TAKE METHODS
• VBENCH Major performance improvements in file parsing functions read_csv and read_table
• VBENCH Added Cython function for converting tuples to ndarray very fast. Speeds up many MultiIndex-related
operations
• VBENCH Refactored merging / joining code into a tidy class and disabled unnecessary computations in the
float/object case, thus getting about 10% better performance (GH211)
• VBENCH Improved speed of DataFrame.xs on mixed-type DataFrame objects by about 5x, regression from
0.3.0 (GH215)
• VBENCH With new DataFrame.align method, speeding up binary operations between differently-indexed
DataFrame objects by 10-25%.
• VBENCH Significantly sped up conversion of nested dict into DataFrame (GH212)
• VBENCH Significantly speed up DataFrame __repr__ and count on large mixed-type DataFrame objects
• Altered binary operations on differently-indexed SparseSeries objects to use the integer-based (dense) alignment
logic which is faster with a larger number of blocks (GH205)
• Wrote faster Cython data alignment / merging routines resulting in substantial speed increases
• Improved performance of isnull and notnull, a regression from v0.3.0 (GH187)
• Refactored code related to DataFrame.join so that intermediate aligned copies of the data in each
DataFrame argument do not need to be created. Substantial performance increases result (GH176)
• Substantially improved performance of generic Index.intersection and Index.union
• Implemented BlockManager.take resulting in significantly faster take performance on mixed-type
DataFrame objects (GH104)
• Improved performance of Series.sort_index
• Significant groupby performance enhancement: removed unnecessary integrity checks in DataFrame internals
that were slowing down slicing operations to retrieve groups
• Optimized _ensure_index function resulting in performance savings in type-checking Index objects
• Wrote fast time series merging / joining methods in Cython. Will be integrated later into DataFrame.join and
related functions
TWO
INSTALLATION
The easiest way to install pandas is to install it as part of the Anaconda distribution, a cross platform distribution for
data analysis and scientific computing. This is the recommended installation method for most users.
Instructions for installing from source, PyPI, ActivePython, various Linux distributions, or a development version are
also provided.
The Python core team plans to stop supporting Python 2.7 on January 1st, 2020. In line with NumPy’s plans, all
pandas releases through December 31, 2018 will support Python 2.
The final release before December 31, 2018 will be the last release to support Python 2. The released package will
continue to be available on PyPI and through conda.
Starting January 1, 2019, all releases will be Python 3 only.
If there are people interested in continued support for Python 2.7 past December 31, 2018 (either backporting bugfixes
or funding) please reach out to the maintainers on the issue tracker.
For more information, see the Python 3 statement and the Porting to Python 3 guide.
Installing pandas and the rest of the NumPy and SciPy stack can be a little difficult for inexperienced users.
The simplest way to install not only pandas, but Python and the most popular packages that make up the SciPy
stack (IPython, NumPy, Matplotlib, . . . ) is with Anaconda, a cross-platform (Linux, Mac OS X, Windows) Python
distribution for data analytics and scientific computing.
After running the installer, the user will have access to pandas and the rest of the SciPy stack without needing to install
anything else, and without needing to wait for any software to be compiled.
Installation instructions for Anaconda can be found here.
A full list of the packages available as part of the Anaconda distribution can be found here.
Another advantage to installing Anaconda is that you don’t need admin rights to install it. Anaconda can install in the
user’s home directory, which makes it trivial to delete Anaconda if you decide to later (just delete that folder).
The previous section outlined how to get pandas installed as part of the Anaconda distribution. However this approach
means you will install well over one hundred packages and involves downloading the installer which is a few hundred
megabytes in size.
If you want more control over which packages are installed, or have limited internet bandwidth, then installing pandas with
Miniconda may be a better solution.
Conda is the package manager that the Anaconda distribution is built upon. It is a package manager that is both
cross-platform and language agnostic (it can play a similar role to a pip and virtualenv combination).
Miniconda allows you to create a minimal self-contained Python installation, and then use the Conda command to
install additional packages.
First you will need Conda to be installed; downloading and running the Miniconda installer will do this for you. The
installer can be found here
The next step is to create a new conda environment. A conda environment is like a virtualenv that allows you to specify
a specific version of Python and set of libraries. Run the following commands from a terminal window:
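For example (the environment name is illustrative):

conda create -n name_of_my_env python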
This will create a minimal environment with only Python installed in it. To put yourself inside this environment run:
activate name_of_my_env
The final step required is to install pandas. This can be done with the following command:
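For example:

conda install pandas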
If you need packages that are available to pip but not conda, then install pip, and then use pip to install those packages:
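For example (the package name is a placeholder):

conda install pip
pip install some-package-only-on-pypi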
Installation instructions for ActivePython can be found here. Versions 2.7 and 3.5 include pandas.
The commands in this table will install pandas for Python 3 from your distribution. To install pandas for Python 2,
you may need to use the python-pandas package.
However, the packages in the linux package managers are often a few versions behind, so to get the newest version of
pandas, it’s recommended to install using the pip or conda methods described above.
See the contributing documentation for complete instructions on building from the git source tree. Further, see creating
a development environment if you wish to create a pandas development environment.
pandas is equipped with an exhaustive set of unit tests, covering about 97% of the codebase as of this writing. To
run it on your machine to verify that everything is working (and that you have all of the dependencies, soft and hard,
installed), make sure you have pytest and run:
>>> import pandas as pd
>>> pd.test()
running: pytest --skip-slow --skip-network C:\Users\TP\Anaconda3\envs\py36\lib\site-packages\pandas
..................................................................S......
........S................................................................
.........................................................................
2.5 Dependencies
• numexpr: for accelerating certain numerical operations. numexpr uses multiple cores as well as smart chunk-
ing and caching to achieve large speedups. If installed, must be Version 2.4.6 or higher.
• bottleneck: for accelerating certain types of nan evaluations. bottleneck uses specialized cython routines
to achieve large speedups. If installed, must be Version 1.0.0 or higher.
Note: You are highly encouraged to install these libraries, as they provide speed improvements, especially when
working with large data sets.
Warning:
– if you install BeautifulSoup4 you must install either lxml or html5lib or both. read_html() will
not work with only BeautifulSoup4 installed.
– You are highly encouraged to read HTML Table Parsing gotchas. It explains issues surrounding the
installation and usage of the above three libraries.
Note:
– If you're on a system with apt-get, you can run the command below to get the necessary dependencies for
installation of lxml. This will prevent further headaches down the line.
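For example (the package name can vary by distribution):
sudo apt-get build-dep python-lxml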
Note: Without the optional dependencies, many useful features will not work. Hence, it is highly recommended that
you install these. A packaged distribution like Anaconda, ActivePython (version 2.7 or 3.5), or Enthought Canopy
may be worth considering.
THREE
CONTRIBUTING TO PANDAS
Table of contents:
• Where to start?
• Bug reports and enhancement requests
• Working with the code
– Version control, Git, and GitHub
– Getting started with Git
– Forking
– Creating a development environment
* Installing a C Compiler
* Creating a Python Environment
* Creating a Python Environment (pip)
– Creating a branch
• Contributing to the documentation
– About the pandas documentation
– How to build the pandas documentation
* Requirements
* Building the documentation
* Building master branch documentation
• Contributing to the code base
– Code standards
* C (cpplint)
* Python (PEP8)
* Backwards Compatibility
– Testing With Continuous Integration
– Test-driven development/code writing
* Writing tests
* Transitioning to pytest
* Using pytest
– Running the test suite
– Running the performance test suite
– Documenting your code
• Contributing your changes to pandas
– Committing your code
– Pushing your changes
– Review your code
– Finally, make the pull request
– Updating your pull request
– Delete your merged branch (optional)
All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
If you are brand new to pandas or open-source development, we recommend going through the GitHub “issues” tab to
find issues that interest you. There are a number of issues listed under Docs and good first issue where you could start
out. Once you’ve found an interesting issue, you can return here to get your development environment setup.
Feel free to ask questions on the mailing list or on Gitter.
Bug reports are an important part of making pandas more stable. Having a complete bug report will allow others to
reproduce the bug and provide insight into fixing. See this stackoverflow article and this blogpost for tips on writing a
good bug report.
Trying the bug-producing code out on the master branch is often a worthwhile exercise to confirm the bug still exists.
It is also worth searching existing bug reports and pull requests to see if the issue has already been reported and/or
fixed.
Bug reports must:
1. Include a short, self-contained Python snippet reproducing the problem. You can format the code nicely by
using GitHub Flavored Markdown:
```python
>>> from pandas import DataFrame
>>> df = DataFrame(...)
...
```
2. Include the full version string of pandas and its dependencies. You can use the built-in function:
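>>> pd.show_versions()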
3. Explain why the current behavior is wrong/not desired and what you expect instead.
The issue will then show up to the pandas community and be open to comments/ideas from others.
Now that you have an issue you want to fix, enhancement to add, or documentation to improve, you need to learn how
to work with GitHub and the pandas code base.
To the new user, working with Git is one of the more daunting aspects of contributing to pandas. It can very quickly
become overwhelming, but sticking to the guidelines below will help keep the process straightforward and mostly
trouble free. As always, if you are having difficulties please feel free to ask for help.
The code is hosted on GitHub. To contribute you will need to sign up for a free GitHub account. We use Git for
version control to allow many people to work together on the project.
Some great resources for learning Git:
• the GitHub help pages.
• the NumPy documentation.
• Matthew Brett’s Pydagogue.
GitHub has instructions for installing git, setting up your SSH key, and configuring git. All these steps need to be
completed before you can work seamlessly between your local repository and GitHub.
3.3.3 Forking
You will need your own fork to work on the code. Go to the pandas project page and hit the Fork button. You will
want to clone your fork to your machine:
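For example (replace your-user-name with your GitHub username):
git clone https://github.com/your-user-name/pandas.git pandas-yourname
cd pandas-yourname
git remote add upstream https://github.com/pandas-dev/pandas.git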
This creates the directory pandas-yourname and connects your repository to the upstream (main project) pandas
repository.
To test out code changes, you’ll need to build pandas from source, which requires a C compiler and Python environ-
ment. If you’re making documentation changes, you can skip to Contributing to the documentation but you won’t be
able to build the documentation locally before pushing your changes.
Pandas uses C extensions (mostly written using Cython) to speed up certain operations. To install pandas from source,
you need to compile these C extensions, which means you need a C compiler. This process depends on which platform
you’re using. Follow the CPython contributing guidelines for getting a compiler installed. You don’t need to do any
of the ./configure or make steps; you only need to install the compiler.
For Windows developers, the following links may be helpful.
• https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/
• https://github.com/conda/conda-recipes/wiki/Building-from-Source-on-Windows-32-bit-and-64-bit
• https://cowboyprogrammer.org/building-python-wheels-for-windows/
• https://blog.ionelmc.ro/2014/12/21/compiling-python-extensions-on-windows/
• https://support.enthought.com/hc/en-us/articles/204469260-Building-Python-extensions-with-Canopy
Let us know if you have any difficulties by opening an issue or reaching out on Gitter.
Now that you have a C compiler, create an isolated pandas development environment:
• Install either Anaconda or miniconda
• Make sure your conda is up to date (conda update conda)
• Make sure that you have cloned the repository
• cd to the pandas source directory
We’ll now kick off a three-step process:
1. Install the build dependencies
2. Build and install pandas
3. Install the optional dependencies
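A sketch of those three steps with conda (the exact environment and requirements file names under ci/ may differ between pandas versions):
# 1. install the build dependencies and create the environment
conda env create -f ci/environment-dev.yaml
conda activate pandas-dev

# 2. build and install pandas
python setup.py build_ext --inplace -j 4
python -m pip install -e .

# 3. install the optional dependencies
conda install -c defaults -c conda-forge --file=ci/requirements-optional-conda.txt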
At this point you should be able to import pandas from your locally built version:
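For example:
$ python  # start an interpreter
>>> import pandas
>>> print(pandas.__version__)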
This will create the new environment, and not touch any of your existing environments, nor any existing Python
installation.
To view your environments:
conda info -e
To return to your root environment:
conda deactivate
If you aren't using conda for your development environment, follow these instructions. You'll need to have at least
Python 3.5 installed on your system.
You want your master branch to reflect only production-ready code, so create a feature branch for making your changes.
For example:
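git branch shiny-new-feature
git checkout shiny-new-feature
The above can also be combined into a single command: git checkout -b shiny-new-feature.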
This changes your working directory to the shiny-new-feature branch. Keep any changes in this branch specific to one
bug or feature so it is clear what the branch brings to pandas. You can have many shiny-new-features and switch in
between them using the git checkout command.
When creating this branch, make sure your master branch is up to date with the latest upstream master version. To
update your local master branch, you can do:
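Assuming the upstream remote was added as shown earlier:
git checkout master
git pull upstream master --ff-only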
When you want to update the feature branch with changes in master after you created the branch, check the section on
updating a PR.
Contributing to the documentation benefits everyone who uses pandas. We encourage you to help us improve the
documentation, and you don’t have to be an expert on pandas to do so! In fact, there are sections of the docs that are
worse off after being written by experts. If something in the docs doesn’t make sense to you, updating the relevant
section after you figure it out is a great way to ensure it will help the next person.
Documentation:
The documentation is written in reStructuredText, which is almost like writing in plain English, and built using
Sphinx. The Sphinx Documentation has an excellent introduction to reST. Review the Sphinx docs to perform more
complex changes to the documentation as well.
Some other important things to know about the docs:
• The pandas documentation consists of two parts: the docstrings in the code itself and the docs in this folder
pandas/doc/.
The docstrings provide a clear explanation of the usage of the individual functions, while the documentation
in this folder consists of tutorial-like overviews per topic together with some other information (what’s new,
installation, etc).
• The docstrings follow a pandas convention, based on the Numpy Docstring Standard. Follow the pandas
docstring guide for detailed instructions on how to write a correct docstring.
A Python docstring is a string used to document a Python module, class, function or method, so programmers
can understand what it does without having to read the details of the implementation.
Also, it is a common practice to generate online (html) documentation automatically from docstrings. Sphinx
serves this purpose.
def add(num1, num2):
"""
Add up two integer numbers.
This function simply wraps the `+` operator, and does not
do anything interesting, except for illustrating what is
the docstring of a very simple function.
do anything interesting, except for illustrating what is
the docstring of a very simple function.
Parameters
----------
num1 : int
First number to add
num2 : int
Second number to add
Returns
-------
int
The sum of `num1` and `num2`
See Also
--------
subtract : Subtract one integer from another
Examples
--------
>>> add(2, 2)
4
>>> add(25, 0)
25
>>> add(10, -10)
0
"""
return num1 + num2
Some standards exist about docstrings, so they are easier to read, and they can be exported to other formats such
as html or pdf.
The first conventions every Python docstring should follow are defined in PEP-257.
As PEP-257 is quite open, some other standards exist on top of it. In the case of pandas, the numpy docstring
convention is followed. The convention is explained in this document:
– numpydoc docstring guide (which is based on the original Guide to NumPy/SciPy documentation)
numpydoc is a Sphinx extension to support the numpy docstring convention.
The standard uses reStructuredText (reST). reStructuredText is a markup language that allows encoding styles
in plain text files. Documentation about reStructuredText can be found in:
– Sphinx reStructuredText primer
– Quick reStructuredText reference
– Full reStructuredText specification
Pandas has some helpers for sharing docstrings between related classes, see Sharing Docstrings.
The rest of this document will summarize all the above guides, and will provide additional conventions specific
to the pandas project.
Writing a docstring
General rules
Docstrings must be defined with three double-quotes. No blank lines should be left before or after the docstring.
The text starts on the line after the opening quotes. The closing quotes have their own line (meaning that
they are not at the end of the last sentence).
On rare occasions reST styles like bold text or italics will be used in docstrings, but it is common to have inline
code, which is presented between backticks. The following are considered inline code:
– The name of a parameter
– Python code, a module, function, built-in, type, literal... (e.g. os, list, numpy.abs, datetime.date, True)
– A pandas class (in the form :class:`pandas.Series`)
– A pandas method (in the form :meth:`pandas.Series.sum`)
– A pandas function (in the form :func:`pandas.to_datetime`)
Note: To display only the last component of the linked class, method or function, prefix it with ~. For example,
:class:`~pandas.Series` will link to pandas.Series but only display the last part, Series as the
link text. See Sphinx cross-referencing syntax for details.
Good:
def add_values(arr):
"""
Add the values in `arr`.
Bad:
def func():
"""Some function.
There is a blank line between the docstring and the first line
of code `foo = 1`.
The closing quotes should be in the next line, not in this one."""
foo = 1
Section 1: Short summary
The short summary is a single sentence that expresses what the function does in a concise way.
The short summary must start with a capital letter, end with a dot, and fit in a single line. It needs to express
what the object does without providing details. For functions and methods, the short summary must start with
an infinitive verb.
Good:
def astype(dtype):
"""
Cast Series type.
Bad:
def astype(dtype):
"""
Casts Series type.
def astype(dtype):
"""
Method to cast Series type.
def astype(dtype):
"""
Cast Series type
def astype(dtype):
"""
Cast Series type from its current type to the new type defined in
the parameter dtype.
Section 2: Extended summary
The extended summary provides details on what the function does. It should not go into the details of the
parameters, or discuss implementation notes, which go in other sections.
A blank line is left between the short summary and the extended summary, and every paragraph in the extended
summary ends with a dot.
The extended summary should provide details on why the function is useful and its use cases, if it is not too
generic.
def unstack():
"""
Pivot a row index to columns.
The index level will be automatically removed from the index when added
as columns.
"""
pass
Section 3: Parameters
The details of the parameters will be added in this section. This section has the title “Parameters”, followed by
a line with a hyphen under each letter of the word “Parameters”. A blank line is left before the section title, but
not after, and not between the line with the word “Parameters” and the one with the hyphens.
After the title, each parameter in the signature must be documented, including *args and **kwargs, but not self.
The parameters are defined by their name, followed by a space, a colon, another space, and the type (or types).
Note that the space between the name and the colon is important. Types are not defined for *args and **kwargs,
but must be defined for all other parameters. After the parameter definition, it is required to have a line with the
parameter description, which is indented, and can have multiple lines. The description must start with a capital
letter, and finish with a dot.
For keyword arguments with a default value, the default will be listed after a comma at the end of the type. The
exact form of the type in this case will be “int, default 0”. In some cases it may be useful to explain what the
default argument means, which can be added after a comma “int, default -1, meaning all cpus”.
In cases where the default value is None, it means that the value will not be used; instead of "str, default
None", it is preferred to write "str, optional". When None is a value being used, we will keep the form "str,
default None". For example, in df.to_csv(compression=None), None is not a value being used, but means that
compression is optional, and no compression is being used if not provided. In this case we will use "str, optional".
Only in cases like func(value=None), where None is being used in the same way as 0 or foo would be used, do
we specify "str, int or None, default None".
Good:
class Series:
def plot(self, kind, color='blue', **kwargs):
"""
Generate a plot.
Parameters
----------
kind : str
Kind of matplotlib plot.
color : str, default 'blue'
Color name or rgb code.
**kwargs
These parameters will be passed to the matplotlib plotting
function.
"""
pass
Bad:
class Series:
def plot(self, kind, **kwargs):
"""
Generate a plot.
Note the blank line between the parameters title and the first
parameter. Also, note that after the name of the parameter `kind`
and before the colon, a space is missing.
Parameters
----------
kind: str
kind of matplotlib plot
"""
pass
Parameter types
When specifying the parameter types, Python built-in data types can be used directly (the Python type is pre-
ferred to the more verbose string, integer, boolean, etc):
– int
– float
– str
– bool
For complex types, define the subtypes. For dict and tuple, as more than one type is present, we use the brackets
to help read the type (curly brackets for dict and normal brackets for tuple):
– list of int
– dict of {str : int}
– tuple of (str, int, int)
– tuple of (str,)
– set of str
In cases where only a set of values is allowed, list them in curly brackets, separated by commas
(followed by a space). If the values are ordinal, list them in that order. Otherwise, list the
default value first, if there is one:
– {0, 10, 25}
– {‘simple’, ‘advanced’}
– {‘low’, ‘medium’, ‘high’}
– {‘cat’, ‘dog’, ‘bird’}
If the type is defined in a Python module, the module must be specified:
– datetime.date
– datetime.datetime
– decimal.Decimal
If the type is in a package, the module must be also specified:
– numpy.ndarray
– scipy.sparse.coo_matrix
If the type is a pandas type, also specify pandas except for Series and DataFrame:
– Series
– DataFrame
– pandas.Index
– pandas.Categorical
– pandas.SparseArray
If the exact type is not relevant, but must be compatible with a numpy array, array-like can be specified. If any
type that can be iterated is accepted, iterable can be used:
– array-like
– iterable
If more than one type is accepted, separate them by commas, except the last two types, which need to be separated
by the word 'or':
– int or float
– float, decimal.Decimal or None
– str or list of str
If None is one of the accepted values, it always needs to be the last in the list.
For axis, the convention is to use something like:
– axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
Section 4: Returns or Yields
If the method returns a value, it will be documented in this section; the same applies if the method yields its output.
The title of the section is defined in the same way as "Parameters": the name "Returns" or
"Yields" followed by a line with as many hyphens as letters in the preceding word.
The documentation of the return value is similar to that of the parameters, but no name is provided,
unless the method returns or yields more than one value (a tuple of values).
The types for "Returns" and "Yields" are the same as the ones for "Parameters". Also, the description must
finish with a dot.
For example, with a single value:
def sample():
"""
Generate and return a random number.
Returns
-------
float
Random number generated.
"""
return random.random()
With more than one value:
def random_letters():
"""
Generate and return a random string and its length.
Returns
-------
length : int
Length of the returned string.
letters : str
String of random letters.
"""
length = random.randint(1, 10)
letters = ''.join(random.choice(string.ascii_lowercase)
for i in range(length))
return length, letters
Section 5: See Also
This section is used to let users know about pandas functionality related to the one being documented. In rare
cases, if no related methods or functions can be found at all, this section can be skipped.
An obvious example would be the head() and tail() methods. As tail() does the equivalent of head() but at the
end of the Series or DataFrame instead of at the beginning, it is good to let the users know about it.
To give an intuition on what can be considered related, here there are some examples:
– loc and iloc, as they do the same, but in one case providing indices and in the other positions
– max and min, as they do the opposite
– iterrows, itertuples and iteritems, as a user looking for the method to iterate
over columns can easily end up at the method to iterate over rows, and vice versa
– fillna and dropna, as both methods are used to handle missing values
– read_csv and to_csv, as they are complementary
– merge and join, as one is a generalization of the other
– astype and pandas.to_datetime, as users may be reading the documentation of astype to know
how to cast as a date, and the way to do it is with pandas.to_datetime
– where is related to numpy.where, as its functionality is based on it
When deciding what is related, you should mainly use your common sense and think about what can be useful
for the users reading the documentation, especially the less experienced ones.
When relating to other libraries (mainly numpy), use the name of the module first (not an alias like np). If the
function is in a module which is not the main one, like scipy.sparse, list the full module (e.g. scipy.sparse.coo_matrix).
This section, like the previous ones, has a header, "See Also" (note the capital S and A), followed by the line
with hyphens and preceded by a blank line.
After the header, we will add a line for each related method or function, followed by a space, a colon, another
space, and a short description that illustrates what this method or function does, why it is relevant in this context,
and what the key differences are between the documented function and the one being referenced. The description must
also finish with a dot.
Note that in "Returns" and "Yields", the description is located on the line following the type, but in this
section it is located on the same line, with a colon in between. If the description does not fit on the same line, it
can continue on the next ones, but it has to be indented in them.
For example:
class Series:
def head(self):
"""
Return the first 5 elements of the Series.
Returns
-------
Series
Subset of the original series with the 5 first values.
See Also
--------
Series.tail : Return the last 5 elements of the Series.
Series.iloc : Return a slice of the elements in the Series,
which can also be used to return the first or last n.
"""
return self.iloc[:5]
Section 6: Notes
This is an optional section used for notes about the implementation of the algorithm, or to document technical
aspects of the function behavior.
Feel free to skip it, unless you are familiar with the implementation of the algorithm, or you discover some
counter-intuitive behavior while writing the examples for the function.
This section follows the same format as the extended summary section.
Section 7: Examples
This is one of the most important sections of a docstring, despite being placed in the last position, as often people
understand concepts better with examples than with accurate explanations.
Examples in docstrings, besides illustrating the usage of the function or method, must be valid Python code that
returns the presented output in a deterministic way, and that can be copied and run by users.
Examples are presented as a session in the Python terminal. >>> is used to present code. ... is used for code
continuing from the previous line. Output is presented immediately after the last line of code generating the
output (no blank lines in between). Comments describing the examples can be added with blank lines before
and after them.
The way to present examples is as follows:
1. Import required libraries (except numpy and pandas)
2. Create the data required for the example
3. Show a very basic example that gives an idea of the most common use case
4. Add examples with explanations that illustrate how the parameters can be used for extended functionality
A simple example could be:
class Series:
def head(self, n=5):
"""
Return the first elements of the Series.
Parameters
----------
n : int
Number of values to return.
Returns
-------
pandas.Series
Subset of the original series with the n first values.
See Also
--------
tail : Return the last n elements of the Series.
Examples
--------
>>> s = pd.Series(['Ant', 'Bear', 'Cow', 'Dog', 'Falcon',
... 'Lion', 'Monkey', 'Rabbit', 'Zebra'])
>>> s.head()
0 Ant
1 Bear
2 Cow
3 Dog
4 Falcon
dtype: object
With the `n` parameter, we can change the number of returned rows:
>>> s.head(n=3)
0 Ant
1 Bear
2 Cow
dtype: object
"""
return self.iloc[:n]
The examples should be as concise as possible. In cases where the complexity of the function requires long
examples, it is recommended to use blocks with bold headers. Use double stars ** to make text bold, like in
**this example**.
Code in examples is assumed to always start with these two lines which are not shown:
import numpy as np
import pandas as pd
Any other module used in the examples must be explicitly imported, one per line (as recommended in PEP-8)
and avoiding aliases. Avoid excessive imports, but if needed, imports from the standard library go first, followed
by third-party libraries (like matplotlib).
When illustrating examples with a single Series use the name s, and if illustrating with a single DataFrame
use the name df. For indices, idx is the preferred name. If a set of homogeneous Series or DataFrame
is used, name them s1, s2, s3, ... or df1, df2, df3, ... If the data is not homogeneous, and more than one
structure is needed, name them with something meaningful, for example df_main and df_to_join.
Data used in the example should be as compact as possible. The number of rows is recommended to be around
4, but make it a number that makes sense for the specific example. For example, in the head method, it needs
to be higher than 5, to show the example with the default values. If doing the mean, we could use something
like [1, 2, 3], so it is easy to see that the value returned is the mean.
For more complex examples (grouping, for example), avoid using data without interpretation, like a matrix of
random numbers with columns A, B, C, D, ... Instead use a meaningful example, which makes it easier to
understand the concept. Unless required by the example, use names of animals, and numerical properties of
them, to keep examples consistent.
When calling the method, keyword arguments head(n=3) are preferred to positional arguments head(3).
Good:
class Series:
def mean(self):
"""
Compute the mean of the input.
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s.mean()
2
"""
pass
def fillna(self, value):
"""
Replace missing values by `value`.
Examples
--------
>>> s = pd.Series([1, np.nan, 3])
>>> s.fillna(0)
[1, 0, 3]
"""
pass
def groupby_mean(self):
"""
Group by index and return mean.
Examples
--------
>>> s = pd.Series([380., 370., 24., 26],
... name='max_speed',
... index=['falcon', 'falcon', 'parrot', 'parrot'])
>>> s.groupby_mean()
index
falcon    375.0
parrot     25.0
Name: max_speed, dtype: float64
"""
pass
def contains(self, pattern, case_sensitive=True, na=np.nan):
"""
Return whether each value contains `pattern`.
Examples
--------
>>> s = pd.Series(['Antelope', 'Lion', 'Zebra', np.nan])
>>> s.contains(pattern='a')
0 False
1 False
2 True
3 NaN
dtype: bool
**Case sensitivity**
**Missing values**
We can fill missing values in the output using the `na` parameter:
Bad:
def method(foo=None, bar=None):
"""
A sample DataFrame method.
Examples
--------
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(numpy.random.randn(3, 3),
... columns=('a', 'b', 'c'))
>>> df.method(1)
21
>>> df.method(bar=14)
123
"""
pass
Getting the examples to pass the doctests in the validation script can sometimes be tricky. Here are some attention
points:
– Import all needed libraries (except for pandas and numpy, those are already imported as import
pandas as pd and import numpy as np) and define all variables you use in the example.
– Try to avoid using random data. However random data might be OK in some cases, like if the function you
are documenting deals with probability distributions, or if the amount of data needed to make the function
result meaningful is too much, such that creating it manually is very cumbersome. In those cases, always
use a fixed random seed to make the generated examples predictable. Example:
>>> np.random.seed(42)
>>> df = pd.DataFrame({'normal': np.random.normal(100, 5, 20)})
– If you have a code snippet that wraps multiple lines, you need to use '...' on the continued lines:
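For example (a made-up DataFrame construction, just to show the continuation markers):
>>> df = pd.DataFrame({'a': [1, 2, 3],
...                    'b': [4, 5, 6]})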
– If you want to show a case where an exception is raised, you can do:
>>> pd.to_datetime(["712-01-01"])
Traceback (most recent call last):
OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 712-01-01 00:00:00
It is essential to include the “Traceback (most recent call last):”, but for the actual error only the error name
is sufficient.
– If there is a small part of the result that can vary (e.g. a hash in an object representation), you can use ...
to represent this part.
If you want to show that s.plot() returns a matplotlib AxesSubplot object, this will fail the doctest
>>> s.plot()
<matplotlib.axes._subplots.AxesSubplot at 0x7efd0c0b0690>
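One way to keep such an example while avoiding the failure is to mark the line so the doctest is skipped:
>>> s.plot()  # doctest: +SKIP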
Plots in examples
There are some methods in pandas returning plots. To render the plots generated by the examples in the docu-
mentation, the .. plot:: directive exists.
To use it, place the next code after the “Examples” header as shown below. The plot will be generated automat-
ically when building the documentation.
class Series:
def plot(self):
"""
Generate a plot with the `Series` data.
Examples
--------
.. plot::
:context: close-figs
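The body of the directive then contains the plotting code itself; a minimal sketch of how the example could continue:
>>> s = pd.Series([1, 2, 3])
>>> s.plot()
"""
pass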
Sharing Docstrings
Pandas has a system for sharing docstrings, with slight variations, between classes. This helps us keep docstrings
consistent, while keeping things clear for the user reading. It comes at the cost of some complexity when writing.
Each shared docstring will have a base template with variables, like %(klass)s. The variables are filled in later
on using the Substitution decorator. Finally, docstrings can be appended to with the Appender decorator.
In this example, we'll create a parent docstring normally (this is like pandas.core.generic.NDFrame).
Then we'll have two children (like pandas.core.series.Series and pandas.core.frame.DataFrame).
We'll substitute the children's class names in this docstring.
class Parent:
def my_function(self):
"""Apply my function to %(klass)s."""
...
class ChildA(Parent):
@Substitution(klass="ChildA")
@Appender(Parent.my_function.__doc__)
def my_function(self):
...
class ChildB(Parent):
@Substitution(klass="ChildB")
@Appender(Parent.my_function.__doc__)
def my_function(self):
...
>>> print(Parent.my_function.__doc__)
Apply my function to %(klass)s.
>>> print(ChildA.my_function.__doc__)
Apply my function to ChildA.
>>> print(ChildB.my_function.__doc__)
Apply my function to ChildB.
Our files will often contain a module-level _shared_doc_kwargs with some common substitution values (things like
klass, axes, etc). You can substitute and append in one shot with something like:
@Appender(template % _shared_doc_kwargs)
def my_function(self):
...
where template may come from a module-level _shared_docs dictionary mapping function names to
docstrings. Wherever possible, we prefer using Appender and Substitution, since the docstring-writing
process is slightly closer to normal.
See pandas.core.generic.NDFrame.fillna for an example template, and pandas.core.series.Series.fillna and
pandas.core.frame.DataFrame.fillna for the filled versions.
• The tutorials make heavy use of the ipython directive sphinx extension. This directive lets you put code in the
documentation which will be run during the doc build. For example:
.. ipython:: python
x = 2
x**3
will be rendered as:
In [1]: x = 2
In [2]: x**3
Out[2]: 8
Almost all code examples in the docs are run (and the output saved) during the doc build. This approach means
that code examples will always be up to date, but it does make the doc building a bit more complex.
• Our API documentation in doc/source/api.rst houses the auto-generated documentation from the doc-
strings. For classes, there are a few subtleties around controlling which methods and attributes have pages
auto-generated.
We have two autosummary templates for classes.
1. _templates/autosummary/class.rst. Use this when you want to automatically generate a page
for every public method and attribute on the class. The Attributes and Methods sections will be
automatically added to the class’ rendered documentation by numpydoc. See DataFrame for an example.
2. _templates/autosummary/class_without_autosummary. Use this when you want to pick
a subset of methods / attributes to auto-generate pages for. When using this template, you should include an
Attributes and Methods section in the class docstring. See CategoricalIndex for an example.
Every method should be included in a toctree in api.rst, else Sphinx will emit a warning.
Note: The .rst files are used to automatically generate Markdown and HTML versions of the docs. For this reason,
please do not edit CONTRIBUTING.md directly, but instead make any changes to doc/source/contributing.
rst. Then, to generate CONTRIBUTING.md, use pandoc with the following command:
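The command is along these lines (run from the repository root; the exact output-format flag may differ):
pandoc doc/source/contributing.rst -t markdown_github > CONTRIBUTING.md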
The utility script scripts/validate_docstrings.py can be used to get a csv summary of the API documen-
tation, and also to validate common errors in the docstring of a specific class, function or method. The summary also
compares the list of methods documented in doc/source/api.rst (which is used to generate the API Reference
page) and the actual public methods. This will identify methods documented in doc/source/api.rst that are
not actually class methods, and existing methods that are not documented in doc/source/api.rst.
3.4.2.1 Requirements
First, you need to have a development environment to be able to build pandas (see the docs on creating a development
environment above).
So how do you build the docs? Navigate to your local pandas/doc/ directory in the console and run:
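python make.py html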
Then you can find the HTML output in the folder pandas/doc/build/html/.
The first time you build the docs, it will take quite a while because it has to run all the code examples and build all the
generated docstring pages. In subsequent invocations, sphinx will try to only build the pages that have been modified.
If you want to do a full clean build, do:
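python make.py clean
python make.py html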
You can tell make.py to compile only a single section of the docs, greatly reducing the turn-around time for checking
your changes.
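A sketch, assuming make.py's --single option (the section name here is just an example):
python make.py clean
python make.py --single indexing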
For comparison, a full documentation build may take 15 minutes, but a single section may take 15 seconds. Subsequent
builds, which only process portions you have changed, will be faster.
You can also specify to use multiple cores to speed up the documentation build:
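For example, assuming the --num-jobs option of make.py:
python make.py html --num-jobs 4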
Open the following file in a web browser to see the full documentation you just built:
pandas/doc/build/html/index.html
And you’ll have the satisfaction of seeing your new and improved documentation!
When pull requests are merged into the pandas master branch, the main parts of the documentation are also built by
Travis-CI. These docs are then hosted here, see also the Continuous Integration section.
Code Base:
• Code standards
– C (cpplint)
– Python (PEP8)
– Backwards Compatibility
• Testing With Continuous Integration
• Test-driven development/code writing
– Writing tests
– Transitioning to pytest
– Using pytest
• Running the test suite
• Running the performance test suite
Writing good code is not just about what you write. It is also about how you write it. During Continuous Integration
testing, several tools will be run to check your code for stylistic errors. Generating any warnings will cause the test to
fail. Thus, good style is a requirement for submitting code to pandas.
In addition, because a lot of people use our library, it is important that we do not make sudden changes to the code
that could have the potential to break a lot of user code; that is, we need it to be as backwards compatible as
possible to avoid mass breakages.
Additional standards are outlined on the code style wiki page.
3.5.1.1 C (cpplint)
pandas uses the Google standard. Google provides an open source style checker called cpplint, but we use a fork
of it that can be found here. Here are some of the more common cpplint issues:
• we restrict line-length to 80 characters to promote readability
• every header file must include a header guard to avoid name collisions if re-included
Continuous Integration will run the cpplint tool and report any stylistic errors in your code. Therefore, it is helpful
before submitting code to run the check yourself:
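A sketch of such a check (the project may pass additional --filter options):
cpplint --extensions=c,h --headers=h modified-c-file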
To make your commits compliant with this standard, you can install the ClangFormat tool, which can be downloaded
here. To configure, in your home directory, run the following command:
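For example:
clang-format -style=google -dump-config > .clang-format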
Then modify the file to ensure that any indentation width parameters are at least four. Once configured, you can run
the tool as follows:
clang-format modified-c-file
This will output what your file will look like if the changes are made, and to apply them, run the following command:
clang-format -i modified-c-file
To run the tool on an entire directory, you can run the following analogous commands:
Do note that this tool is best-effort, meaning that it will try to correct as many errors as possible, but it may not correct
all of them. Thus, it is recommended that you run cpplint to double check and make any other style fixes manually.
pandas uses the PEP8 standard. There are several tools to ensure you abide by this standard. Here are some of the
more common PEP8 issues:
• we restrict line-length to 79 characters to promote readability
• passing arguments should have spaces after commas, e.g. foo(arg1, arg2, kw1='bar')
Continuous Integration will run the flake8 tool and report any stylistic errors in your code. Therefore, it is helpful
before submitting code to run the check yourself on the diff:
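git diff master -u -- "*.py" | flake8 --diff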
This command will catch any stylistic errors in your changes specifically, but beware that it may not catch all of them.
For example, if you delete the only usage of an imported function, it is stylistically incorrect to import an unused
function. However, style-checking the diff will not catch this because the actual import is not part of the diff. Thus,
for completeness, you should run this command, though it will take longer:
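git diff master --name-only -- "*.py" | grep "pandas/" | xargs -r flake8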
Note that on OSX, the -r flag is not available, so you have to omit it and run the same command without -r.
Note that on Windows, these commands are unfortunately not possible because commands like grep and xargs are
not available natively. To imitate the behavior with the commands above, you should run:
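git diff master --name-only -- "*.py"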
This will list all of the Python files that have been modified. The only ones that matter during linting are any whose
directory filepath begins with “pandas.” For each filepath, copy and paste it after the flake8 command as shown
below:
flake8 <python-filepath>
Alternatively, you can install the grep and xargs commands via the MinGW toolchain, and it will allow you to run
the commands above.
Please try to maintain backward compatibility. pandas has lots of users with lots of existing code, so don't break it if
at all possible. If you think breakage is required, clearly state why as part of the pull request. Be careful when
changing method signatures, add deprecation warnings where needed, and add the deprecated sphinx directive
to the deprecated functions or methods.
If a function with the same arguments as the one being deprecated exists, you can use pandas.util.
_decorators.deprecate:
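A sketch of that usage (new_func here stands for the replacement function object):
from pandas.util._decorators import deprecate

old_func = deprecate('old_func', new_func, '0.21.0')
Otherwise, you need to do it manually: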
def old_func():
"""Summary of the function.
.. deprecated:: 0.21.0
Use new_func instead.
"""
warnings.warn('Use new_func instead.', FutureWarning, stacklevel=2)
new_func()
The pandas test suite will run automatically on Travis-CI, Appveyor, and Circle CI continuous integration services,
once your pull request is submitted. However, if you wish to run the test suite on a branch prior to submitting the pull
request, then the continuous integration services need to be hooked to your GitHub repository. Instructions are here
for Travis-CI, Appveyor , and CircleCI.
A pull-request will be considered for merging when you have an all ‘green’ build. If any tests are failing, then you
will get a red ‘X’, where you can click through to see the individual failed tests. This is an example of a green build.
Note: Each time you push to your fork, a new run of the tests will be triggered on the CI. Appveyor will auto-cancel
any non-currently-running tests for that same pull-request. You can enable the auto-cancel feature for Travis-CI here
and for CircleCI here.
pandas is serious about testing and strongly encourages contributors to embrace test-driven development (TDD). This
development process “relies on the repetition of a very short development cycle: first the developer writes an (initially
failing) automated test case that defines a desired improvement or new function, then produces the minimum amount
of code to pass that test.” So, before actually writing any code, you should write your tests. Often the test can be taken
from the original GitHub issue. However, it is always worth considering additional use cases and writing corresponding
tests.
Adding tests is one of the most common requests after code is pushed to pandas. Therefore, it is worth getting in the
habit of writing tests ahead of time so this is never an issue.
Like many packages, pandas uses pytest and the convenient extensions in numpy.testing.
All tests should go into the tests subdirectory of the specific package. This folder contains many current examples of
tests, and we suggest looking to these for inspiration. If your test requires working with files or network connectivity,
there is more information on the testing page of the wiki.
The pandas.util.testing module has many special assert functions that make it easier to make statements
about whether Series or DataFrame objects are equivalent. The easiest way to verify that your code is correct is to
explicitly construct the result you expect, then compare the actual result to the expected correct result:
def test_pivot(self):
data = {
'index' : ['A', 'B', 'C', 'C', 'B', 'A'],
'columns' : ['One', 'One', 'One', 'Two', 'Two', 'Two'],
'values' : [1., 2., 3., 3., 2., 1.]
}
frame = DataFrame(data)
pivoted = frame.pivot(index='index', columns='columns', values='values')
expected = DataFrame({
'One' : {'A' : 1., 'B' : 2., 'C' : 3.},
'Two' : {'A' : 1., 'B' : 2., 'C' : 3.}
})
assert_frame_equal(pivoted, expected)
The existing pandas test structure is mostly class-based, meaning that you will typically find tests wrapped in a class.
class TestReallyCoolFeature(object):
....
Going forward, we are moving to a more functional style using the pytest framework, which offers a richer testing
framework that will facilitate testing and developing. Thus, instead of writing test classes, we will write test functions
like this:
def test_really_cool_feature():
....
Here is an example of a self-contained set of tests that illustrate multiple features that we like to use.
• functional style: tests are like test_* and only take arguments that are either fixtures or parameters
• pytest.mark can be used to set metadata on test functions, e.g. skip or xfail.
• using parametrize: allow testing of multiple cases
• to set a mark on a parameter, pytest.param(..., marks=...) syntax should be used
• fixture, code for object construction, on a per-test basis
• using bare assert for scalars and truth-testing
• tm.assert_series_equal (and its counter part tm.assert_frame_equal), for pandas object com-
parisons.
• the typical pattern of constructing an expected and comparing versus the result
We would name this file test_cool_feature.py and put in an appropriate place in the pandas/tests/
structure.
import pytest
import numpy as np
import pandas as pd
from pandas.util import testing as tm
@pytest.mark.parametrize('dtype', ['float32',
pytest.param('int16', marks=pytest.mark.skip),
pytest.param('int32',
marks=pytest.mark.xfail(reason='to show how it works'))])
def test_mark(dtype):
assert str(np.dtype(dtype)) == 'float32'
@pytest.fixture
def series():
return pd.Series([1, 2, 3])
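The run output below also refers to test_dtypes and test_series, whose definitions were not captured above; a minimal sketch of what they could look like (the fixture parameters and assertions are assumptions) is:
@pytest.fixture(params=['int8', 'int16', 'int32', 'int64'])
def dtype(request):
    # parametrized fixture: each test using it runs once per dtype
    return request.param

def test_dtypes(dtype):
    assert str(np.dtype(dtype)) == dtype

def test_series(series, dtype):
    # construct the expected result explicitly and compare with the actual one
    result = series.astype(dtype)
    assert result.dtype == dtype

    expected = pd.Series([1, 2, 3], dtype=dtype)
    tm.assert_series_equal(result, expected)
A verbose test run then produces output like the following: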
tester.py::test_dtypes[int8] PASSED
tester.py::test_dtypes[int16] PASSED
tester.py::test_dtypes[int32] PASSED
tester.py::test_dtypes[int64] PASSED
tester.py::test_mark[float32] PASSED
tester.py::test_mark[int16] SKIPPED
tester.py::test_mark[int32] xfail
tester.py::test_series[int8] PASSED
tester.py::test_series[int16] PASSED
tester.py::test_series[int32] PASSED
tester.py::test_series[int64] PASSED
Tests that we have parametrized are now accessible via the test name, for example we could run these with -k
int8 to sub-select only those tests which match int8.
test_cool_feature.py::test_dtypes[int8] PASSED
test_cool_feature.py::test_series[int8] PASSED
The tests can then be run directly inside your Git clone (without having to install pandas) by typing:
pytest pandas
The test suite is exhaustive and takes around 20 minutes to run. Often it is worth running only a subset of tests first
around your changes before running the entire suite.
The easiest way to do this is with:
pytest pandas/tests/[test-module].py
pytest pandas/tests/[test-module].py::[TestClass]
pytest pandas/tests/[test-module].py::[TestClass]::[test_method]
Using pytest-xdist, one can speed up local testing on multicore machines. To use this feature, you will need to install
pytest-xdist via:
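pip install pytest-xdist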
Two scripts are provided to assist with this. These scripts distribute testing across 4 threads.
On Unix variants, one can type:
test_fast.sh
On Windows, one can type:
test_fast.bat
This can significantly reduce the time it takes to locally run tests before submitting a pull request.
For more, see the pytest documentation.
New in version 0.20.0.
Furthermore one can run
pd.test()
with an imported pandas to run tests similarly.
Performance matters and it is worth considering whether your code has introduced performance regressions. pandas
is in the process of migrating to asv benchmarks to enable easy monitoring of the performance of critical pandas
operations. These benchmarks are all found in the pandas/asv_bench directory. asv supports both python2 and
python3.
To use all features of asv, you will need either conda or virtualenv. For more details please check the asv
installation webpage.
To install asv:
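One way to do this:
pip install asv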
If you need to run a benchmark, change your directory to asv_bench/ and run:
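asv continuous -f 1.1 upstream/master HEAD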
You can replace HEAD with the name of the branch you are working on, and report benchmarks that changed by
more than 10%. The command uses conda by default for creating the benchmark environments. If you want to use
virtualenv instead, write:
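asv continuous -f 1.1 -E virtualenv upstream/master HEAD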
The -E virtualenv option should be added to all asv commands that run benchmarks. The default value is
defined in asv.conf.json.
Running the full benchmark suite can take up to one hour and use up to 3GB of RAM. Usually it is sufficient to paste only
a subset of the results into the pull request to show that the committed changes do not cause unexpected performance
regressions. You can run specific benchmarks using the -b flag, which takes a regular expression. For example, this
will only run tests from a pandas/asv_bench/benchmarks/groupby.py file:
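asv continuous -f 1.1 upstream/master HEAD -b ^groupby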
If you want to only run a specific group of tests from a file, you can do it using . as a separator. For example:
This will display stderr from the benchmarks, and use your local python that comes from your $PATH.
Information on how to write a benchmark and how to use asv can be found in the asv documentation.
Changes should be reflected in the release notes located in doc/source/whatsnew/vx.y.z.txt. This file
contains an ongoing change log for each release. Add an entry to this file to document your fix, enhancement
or (unavoidable) breaking change. Make sure to include the GitHub issue number when adding your entry (using
:issue:`1234` where 1234 is the issue/pull request number).
If your code is an enhancement, it is most likely necessary to add usage examples to the existing documentation. This
can be done following the section regarding documentation above. Further, to let users know when this feature was
added, the versionadded directive is used. The sphinx syntax for that is:
.. versionadded:: 0.21.0
This will put the text New in version 0.21.0 wherever you put the sphinx directive. This should also be put in the
docstring when adding a new function or method (example) or a new keyword argument (example).
Keep style fixes to a separate commit to make your pull request more readable.
Once you’ve made changes, you can see them by typing:
git status
If you have created a new file, it is not being tracked by git. Add it by typing:
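For example (the path is a placeholder):
git add path/to/file-to-be-added.py
Doing git status again should give something like: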
# On branch shiny-new-feature
#
# modified: /relative/path/to/file-you-added.py
#
Finally, commit your changes to your local repository with an explanatory message. Pandas uses a convention for
commit message prefixes and layout. Here are some common prefixes along with general guidelines for when to use
them:
• ENH: Enhancement, new functionality
• BUG: Bug fix
• DOC: Additions/updates to documentation
• TST: Additions/updates to tests
• BLD: Updates to the build process/scripts
• PERF: Performance improvement
• CLN: Code cleanup
Now you can commit your changes in your local repository:
git commit -m "your commit message goes here"
When you want your changes to appear publicly on your GitHub page, push your forked feature branch’s commits:
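Assuming your feature branch is named shiny-new-feature:
git push origin shiny-new-feature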
Here origin is the default name given to your remote repository on GitHub. You can see the remote repositories:
git remote -v
If you added the upstream repository as described above you will see something like:
Now your code is on GitHub, but it is not yet a part of the pandas project. For that to happen, a pull request needs to
be submitted on GitHub.
When you’re ready to ask for a code review, file a pull request. Before you do, once again make sure that you have
followed all the guidelines outlined in this document regarding code style, tests, performance tests, and documentation.
You should also double check your branch changes against the branch it was based on:
1. Navigate to your repository on GitHub – https://github.com/your-user-name/pandas
2. Click on Branches
3. Click on the Compare button for your feature branch
4. Select the base and compare branches, if necessary. This will be master and shiny-new-feature,
respectively.
If everything looks good, you are ready to make a pull request. A pull request is how code from a local repository
becomes available to the GitHub community and can be looked at and eventually merged into the master version. This
pull request and its associated changes will eventually be committed to the master branch and available in the next
release. To submit a pull request:
1. Navigate to your repository on GitHub
2. Click on the Pull Request button
3. You can then click on Commits and Files Changed to make sure everything looks okay one last time
4. Write a description of your changes in the Preview Discussion tab
5. Click Send Pull Request.
This request then goes to the repository maintainers, and they will review the code.
Based on the review you get on your pull request, you will probably need to make some changes to the code. In that
case, you can make them in your branch, add a new commit to that branch, push it to GitHub, and the pull request will
be automatically updated. Pushing them to GitHub again is done by:
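git push origin shiny-new-feature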
This will automatically update your pull request with the latest code and restart the Continuous Integration tests.
Another reason you might need to update your pull request is to solve conflicts with changes that have been merged
into the master branch since you opened your pull request.
To do this, you need to “merge upstream master” in your branch:
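Assuming the upstream remote points at the main pandas repository:
git checkout shiny-new-feature
git fetch upstream
git merge upstream/master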
If there are no conflicts (or they could be fixed automatically), a file with a default commit message will open, and you
can simply save and quit this file.
If there are merge conflicts, you need to solve those conflicts. See for example at https://help.github.com/articles/
resolving-a-merge-conflict-using-the-command-line/ for an explanation on how to do this. Once the conflicts are
merged and the files where the conflicts were solved are added, you can run git commit to save those fixes.
If you have uncommitted changes at the moment you want to update the branch with master, you will need to stash
them prior to updating (see the stash docs). This will effectively store your changes and they can be reapplied after
updating.
After the feature branch has been updated locally, you can now update your pull request by pushing to the branch on
GitHub:
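git push origin shiny-new-feature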
Once your feature branch is accepted into upstream, you’ll probably want to get rid of the branch. First, merge
upstream master into your branch so git knows it is safe to delete your branch:
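A sketch of that sequence:
git fetch upstream
git checkout master
git merge upstream/master
Then you can delete your feature branch locally:
git branch -d shiny-new-feature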
Make sure you use a lower-case -d, or else git won’t warn you if your feature branch has not actually been merged.
The branch will still exist on GitHub, so to delete it there do:
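git push origin --delete shiny-new-feature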
FOUR
PACKAGE OVERVIEW
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data
analysis tools for the Python programming language.
pandas consists of the following elements:
• A set of labeled array data structures, the primary of which are Series and DataFrame.
• Index objects enabling both simple axis indexing and multi-level / hierarchical axis indexing.
• An integrated group by engine for aggregating and transforming data sets.
• Date range generation (date_range) and custom date offsets enabling the implementation of customized frequen-
cies.
• Input/Output tools: loading tabular data from flat files (CSV, delimited, Excel 2003), and saving and loading
pandas objects from the fast and efficient PyTables/HDF5 format.
• Memory-efficient “sparse” versions of the standard data structures for storing data that is mostly missing or
mostly constant (some fixed value).
• Moving window statistics (rolling mean, rolling standard deviation, etc.).
The best way to think about the pandas data structures is as flexible containers for lower dimensional data. For
example, DataFrame is a container for Series, and Series is a container for scalars. We would like to be able to insert
and remove objects from these containers in a dictionary-like fashion.
Also, we would like sensible default behaviors for the common API functions which take into account the typical
orientation of time series and cross-sectional data sets. When using ndarrays to store 2- and 3-dimensional data, a
burden is placed on the user to consider the orientation of the data set when writing functions; axes are considered
more or less equivalent (except when C- or Fortran-contiguousness matters for performance). In pandas, the axes are
intended to lend more semantic meaning to the data; i.e., for a particular data set there is likely to be a “right” way to
orient the data. The goal, then, is to reduce the amount of mental effort required to code up data transformations in
downstream functions.
For example, with tabular data (DataFrame) it is more semantically helpful to think of the index (the rows) and the
columns rather than axis 0 and axis 1. Iterating through the columns of the DataFrame thus results in more readable
code:
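for col in df.columns:
    series = df[col]
    # do something with series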
All pandas data structures are value-mutable (the values they contain can be altered) but not always size-mutable. The
length of a Series cannot be changed, but, for example, columns can be inserted into a DataFrame. However, the vast
majority of methods produce new objects and leave the input data untouched. In general we like to favor immutability
where sensible.
The first stop for pandas issues and ideas is the GitHub Issue Tracker. If you have a general question, pandas community
experts can answer through Stack Overflow.
4.4 Community
pandas is actively supported today by a community of like-minded individuals around the world who contribute their
valuable time and energy to help make open source pandas possible. Thanks to all of our contributors.
If you’re interested in contributing, please visit Contributing to pandas webpage.
pandas is a NumFOCUS sponsored project. This will help ensure the success of development of pandas as a world-
class open-source project, and makes it possible to donate to the project.
The governance process that the pandas project has used informally since its inception in 2008 is formalized in Project
Governance documents. The documents clarify how decisions are made and how the various elements of our commu-
nity interact, including the relationship between open source collaborative development and work that may be funded
by for-profit or non-profit entities.
Wes McKinney is the Benevolent Dictator for Life (BDFL).
The list of the Core Team members and more detailed information can be found on the people’s page of the governance
repo.
The information about current institutional partners can be found on the pandas website.
4.8 License
Copyright (c) 2008-2012, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
FIVE
10 MINUTES TO PANDAS
This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.
Customarily, we import as follows:
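import numpy as np
import pandas as pd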
In [4]: s = pd.Series([1,3,5,np.nan,6,8])
In [5]: s
Out[5]:
0 1.0
1 3.0
2 5.0
3 NaN
4 6.0
5 8.0
dtype: float64
Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:
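The construction calls are not reproduced here; the index and frame displayed below assume something like:
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))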
In [7]: dates
Out[7]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [9]: df
Out[9]:
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
2013-01-05 -0.424972  0.567020  0.276232 -1.087401
2013-01-06 -0.673690  0.113648 -1.478427  0.524988
In [11]: df2
Out[11]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [12]: df2.dtypes
Out[12]:
A float64
B datetime64[ns]
C float32
D int32
E category
F object
dtype: object
If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled.
Here’s a subset of the attributes that will be completed:
In [13]: df2.<TAB>
df2.A df2.bool
df2.abs df2.boxplot
df2.add df2.C
df2.add_prefix df2.clip
df2.add_suffix df2.clip_lower
df2.align df2.clip_upper
df2.all df2.columns
df2.any df2.combine
df2.append df2.combine_first
df2.apply df2.compound
df2.applymap df2.consolidate
df2.D
As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes
have been truncated for brevity.
In [14]: df.head()
Out[14]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
In [15]: df.tail(3)
Out[15]:
A B C D
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
In [16]: df.index
Out[16]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [17]: df.columns
Out[17]: Index(['A', 'B', 'C', 'D'], dtype='object')
In [18]: df.values
In [19]: df.describe()
Out[19]:
A B C D
count 6.000000 6.000000 6.000000 6.000000
mean 0.073711 -0.431125 -0.687758 -0.233103
std 0.843157 0.922818 0.779887 0.973118
min -0.861849 -2.104569 -1.509059 -1.135632
25% -0.611510 -0.600794 -1.368714 -1.076610
50% 0.022070 -0.228039 -0.767252 -0.386188
75% 0.658444 0.041933 -0.034326 0.461706
max 1.212112 0.567020 0.276232 1.071804
In [20]: df.T
Out[20]:
2013-01-01 2013-01-02 2013-01-03 2013-01-04 2013-01-05 2013-01-06
A 0.469112 1.212112 -0.861849 0.721555 -0.424972 -0.673690
B -0.282863 -0.173215 -2.104569 -0.706771 0.567020 0.113648
C -1.509059 0.119209 -0.494929 -1.039575 0.276232 -1.478427
D -1.135632 -1.044236 1.071804 0.271860 -1.087401 0.524988
Sorting by an axis:
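The call is omitted in this extract; a minimal sketch, which returns the frame with its columns in reverse order:
df.sort_index(axis=1, ascending=False)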
Sorting by values:
In [22]: df.sort_values(by='B')
Out[22]:
A B C D
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
5.3 Selection
Note: While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for
interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc
and .iloc.
See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.
5.3.1 Getting
In [23]: df['A']
Out[23]:
2013-01-01 0.469112
2013-01-02 1.212112
2013-01-03 -0.861849
2013-01-04 0.721555
2013-01-05   -0.424972
2013-01-06   -0.673690
Freq: D, Name: A, dtype: float64
In [24]: df[0:3]
Out[24]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [25]: df['20130102':'20130104']
Out[25]:
A B C D
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
In [26]: df.loc[dates[0]]
Out[26]:
A 0.469112
B -0.282863
C -1.509059
D -1.135632
Name: 2013-01-01 00:00:00, dtype: float64
In [27]: df.loc[:,['A','B']]
Out[27]:
A B
2013-01-01 0.469112 -0.282863
2013-01-02 1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04 0.721555 -0.706771
2013-01-05 -0.424972 0.567020
2013-01-06 -0.673690 0.113648
In [28]: df.loc['20130102':'20130104',['A','B']]
Out[28]:
A B
2013-01-02 1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04 0.721555 -0.706771
In [29]: df.loc['20130102',['A','B']]
Out[29]:
A 1.212112
B -0.173215
Name: 2013-01-02 00:00:00, dtype: float64
In [30]: df.loc[dates[0],'A']
Out[30]: 0.46911229990718628
In [31]: df.at[dates[0],'A']
Out[31]: 0.46911229990718628
In [32]: df.iloc[3]
Out[32]:
A 0.721555
B -0.706771
C -1.039575
D 0.271860
Name: 2013-01-04 00:00:00, dtype: float64
In [33]: df.iloc[3:5,0:2]
Out[33]:
A B
2013-01-04 0.721555 -0.706771
2013-01-05 -0.424972 0.567020
In [34]: df.iloc[[1,2,4],[0,2]]
Out[34]:
A C
2013-01-02 1.212112 0.119209
2013-01-03 -0.861849 -0.494929
2013-01-05 -0.424972 0.276232
In [35]: df.iloc[1:3,:]
Out[35]:
A B C D
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [36]: df.iloc[:,1:3]
Out[36]:
B C
2013-01-01 -0.282863 -1.509059
2013-01-02 -0.173215 0.119209
2013-01-03 -2.104569 -0.494929
2013-01-04 -0.706771 -1.039575
2013-01-05 0.567020 0.276232
2013-01-06 0.113648 -1.478427
In [37]: df.iloc[1,1]
Out[37]: -0.17321464905330858
In [38]: df.iat[1,1]
Out[38]: -0.17321464905330858
In [43]: df2
Out[43]:
A B C D E
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632 one
2013-01-02 1.212112 -0.173215 0.119209 -1.044236 one
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804    two
2013-01-04  0.721555 -0.706771 -1.039575  0.271860  three
2013-01-05 -0.424972  0.567020  0.276232 -1.087401   four
2013-01-06 -0.673690  0.113648 -1.478427  0.524988  three
In [44]: df2[df2['E'].isin(['two','four'])]
Out[44]:
A B C D E
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804 two
2013-01-05 -0.424972 0.567020 0.276232 -1.087401 four
5.3.5 Setting
In [46]: s1
Out[46]:
2013-01-02 1
2013-01-03 2
2013-01-04 3
2013-01-05 4
2013-01-06 5
2013-01-07 6
Freq: D, dtype: int64
In [47]: df['F'] = s1
In [48]: df.at[dates[0],'A'] = 0
In [49]: df.iat[0,1] = 0
In [51]: df
Out[51]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 5 NaN
2013-01-02 1.212112 -0.173215 0.119209 5 1.0
2013-01-03 -0.861849 -2.104569 -0.494929 5 2.0
2013-01-04 0.721555 -0.706771 -1.039575 5 3.0
2013-01-05 -0.424972 0.567020 0.276232 5 4.0
2013-01-06 -0.673690 0.113648 -1.478427 5 5.0
In [54]: df2
Out[54]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 -5 NaN
2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0
5.4 Missing Data
pandas primarily uses the value np.nan to represent missing data. By default it is not included in computations. See
the Missing Data section.
Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
In [56]: df1.loc[dates[0]:dates[1],'E'] = 1
In [57]: df1
Out[57]:
A B C D F E
2013-01-01 0.000000 0.000000 -1.509059 5 NaN 1.0
2013-01-02 1.212112 -0.173215 0.119209 5 1.0 1.0
2013-01-03 -0.861849 -2.104569 -0.494929 5 2.0 NaN
2013-01-04 0.721555 -0.706771 -1.039575 5 3.0 NaN
5.5 Operations
5.5.1 Stats
Operations can also be performed with objects that have different dimensionality and need alignment; in addition,
pandas automatically broadcasts along the specified dimension.
In [63]: s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
In [64]: s
Out[64]:
2013-01-01 NaN
2013-01-02 NaN
2013-01-03 1.0
2013-01-04 3.0
2013-01-05 5.0
2013-01-06 NaN
Freq: D, dtype: float64
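Subtracting this shifted Series from the frame broadcasts along the index; a minimal sketch of the call these outputs assume:
df.sub(s, axis='index')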
5.5.2 Apply
In [66]: df.apply(np.cumsum)
Out[66]:
A B C D F
2013-01-01 0.000000 0.000000 -1.509059 5 NaN
2013-01-02 1.212112 -0.173215 -1.389850 10 1.0
2013-01-03 0.350263 -2.277784 -1.884779 15 3.0
2013-01-04 1.071818 -2.984555 -2.924354 20 6.0
2013-01-05 0.646846 -2.417535 -2.648122 25 10.0
2013-01-06 -0.026844 -2.303886 -4.126549 30 15.0
In [67]: df.apply(lambda x: x.max() - x.min())
Out[67]:
A 2.073961
B 2.671590
C 1.785291
D 0.000000
F 4.000000
dtype: float64
5.5.3 Histogramming
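The Series shown below is built from random integers; a plausible construction (the exact values are seed-dependent):
s = pd.Series(np.random.randint(0, 7, size=10))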
In [69]: s
Out[69]:
0 4
1 2
2 1
3 2
4 6
5 4
6 4
7 6
8 4
9 4
dtype: int64
In [70]: s.value_counts()
Out[70]:
4 5
6 2
2 2
1 1
dtype: int64
5.5.4 String Methods
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each
element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions
by default (and in some cases always uses them). See more at Vectorized String Methods.
In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [72]: s.str.lower()
Out[72]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
5.6 Merge
5.6.1 Concat
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various
kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
See the Merging section.
Concatenating pandas objects together with concat():
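The setup is not shown in this extract; a minimal sketch of the frame and the pieces being concatenated below:
df = pd.DataFrame(np.random.randn(10, 4))
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]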
In [74]: df
Out[74]:
0 1 2 3
0 -0.548702 1.467327 -1.015962 -0.483075
1 1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952 0.991460 -0.919069 0.266046
3 -0.709661 1.669052 1.037882 -1.705775
4 -0.919854 -0.042379 1.247642 -0.009920
5 0.290213 0.495767 0.362949 1.548106
6 -1.131345 -0.089329 0.337863 -0.945867
7 -0.932132  1.956030  0.017587 -0.016692
8 -0.575247  0.254161 -1.143704  0.215897
9  1.193555 -0.077118 -0.408530 -0.862495
In [76]: pd.concat(pieces)
Out[76]:
0 1 2 3
0 -0.548702 1.467327 -1.015962 -0.483075
1 1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952 0.991460 -0.919069 0.266046
3 -0.709661 1.669052 1.037882 -1.705775
4 -0.919854 -0.042379 1.247642 -0.009920
5 0.290213 0.495767 0.362949 1.548106
6 -1.131345 -0.089329 0.337863 -0.945867
7 -0.932132 1.956030 0.017587 -0.016692
8 -0.575247 0.254161 -1.143704 0.215897
9 1.193555 -0.077118 -0.408530 -0.862495
5.6.2 Join
In [79]: left
Out[79]:
key lval
0 foo 1
1 foo 2
In [80]: right
Out[80]:
key rval
0 foo 4
1 foo 5
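Joining the two frames on their shared key column uses merge(); a minimal sketch, whose result follows directly from the values shown above:
pd.merge(left, right, on='key')
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5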
In [84]: left
Out[84]:
key lval
0 foo 1
1 bar 2
In [85]: right
Out[85]:
key rval
0 foo 4
1 bar 5
5.6.3 Append
In [88]: df
Out[88]:
A B C D
0 1.346061 1.511763 1.627081 -0.990582
1 -0.441652 1.211526 0.268520 0.024580
2 -1.577585 0.396823 -0.105381 -0.532532
3 1.453749 1.208843 -0.080952 -0.264610
4 -0.727965 -0.589346 0.339969 -0.693205
5 -0.339355 0.593616 0.884345 1.591431
6 0.141809 0.220390 0.435589 0.192451
7 -0.096701 0.803351 1.715071 -0.708758
In [89]: s = df.iloc[3]
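The extracted row can then be appended back onto the frame; a minimal sketch (ignore_index=True gives the new row a fresh integer label):
df.append(s, ignore_index=True)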
5.7 Grouping
By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria
• Applying a function to each group independently
• Combining the results into a data structure
See the Grouping section.
In [92]: df
Out[92]:
A B C D
0 foo one -1.202872 -0.055224
1 bar one -1.814470 2.395985
2 foo two 1.018601 1.552825
3 bar three -0.595447 0.166599
4 foo two 1.395433 0.047609
5 bar two -0.392670 -0.136473
6 foo one 0.007207 -0.561757
7 foo three 1.928123 -1.623033
Grouping and then applying the sum() function to the resulting groups.
In [93]: df.groupby('A').sum()
Out[93]:
C D
A
bar -2.802588 2.42611
foo 3.146492 -0.63958
Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.
In [94]: df.groupby(['A','B']).sum()
Out[94]:
C D
A B
bar one -1.814470 2.395985
three -0.595447 0.166599
two -0.392670 -0.136473
foo one -1.195665 -0.616981
three 1.928123 -1.623033
two 2.414034 1.600434
5.8 Reshaping
5.8.1 Stack
In [99]: df2
Out[99]:
A B
first second
bar one 0.029399 -0.542108
two 0.282696 -0.087302
baz one -1.575170 1.771208
two 0.816482 1.100230
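The stack() method “compresses” a level in the DataFrame’s columns; the stacked object shown next assumes a call along these lines:
stacked = df2.stack()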
In [101]: stacked
Out[101]:
first second
bar one A 0.029399
B -0.542108
two A 0.282696
B -0.087302
baz one A -1.575170
B 1.771208
two A 0.816482
B 1.100230
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is
unstack(), which by default unstacks the last level:
In [102]: stacked.unstack()
Out[102]:
A B
first second
bar one 0.029399 -0.542108
two 0.282696 -0.087302
baz one -1.575170 1.771208
two 0.816482 1.100230
In [103]: stacked.unstack(1)
Out[103]:
second        one       two
first
bar    A  0.029399  0.282696
       B -0.542108 -0.087302
baz    A -1.575170  0.816482
       B  1.771208  1.100230
In [104]: stacked.unstack(0)
Out[104]:
first          bar       baz
second
one    A  0.029399 -1.575170
       B -0.542108  1.771208
two    A  0.282696  0.816482
       B -0.087302  1.100230
5.8.2 Pivot Tables
In [106]: df
Out[106]:
A B C D E
0 one A foo 1.418757 -0.179666
1 one B foo -1.879024 1.291836
2 two C foo 0.536826 -0.009614
3 three A bar 1.006160 0.392149
4 one B bar -0.029716 0.264599
5 one C bar -1.146178 -0.057409
6 two A foo 0.100900 -1.425638
7 three B foo -1.035018 1.024098
8 one C foo 0.314665 -0.106062
9 one A bar -0.773723 1.824375
10 two B bar -1.170653 0.595974
11 three C bar 0.648740 1.167115
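A pivot table can be produced from this data very easily; a minimal sketch of the call (column and value names follow the frame above):
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])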
5.9 Time Series
pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency
conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial
applications. See the Time Series section.
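The resampled Series below assumes a construction along these lines (one-second frequency, random integer values):
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)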
In [110]: ts.resample('5Min').sum()
Out[110]:
2012-01-01 25083
Freq: 5T, dtype: int64
In [113]: ts
Out[113]:
2012-03-06 0.464000
2012-03-07 0.227371
2012-03-08 -0.496922
2012-03-09 0.306389
2012-03-10 -2.290613
Freq: D, dtype: float64
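Time zone representation: the UTC-localized Series shown next assumes a call such as:
ts_utc = ts.tz_localize('UTC')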
In [115]: ts_utc
Out[115]:
2012-03-06 00:00:00+00:00 0.464000
2012-03-07 00:00:00+00:00 0.227371
2012-03-08 00:00:00+00:00 -0.496922
2012-03-09 00:00:00+00:00 0.306389
2012-03-10 00:00:00+00:00 -2.290613
Freq: D, dtype: float64
In [116]: ts_utc.tz_convert('US/Eastern')
Out[116]:
2012-03-05 19:00:00-05:00 0.464000
2012-03-06 19:00:00-05:00 0.227371
2012-03-07 19:00:00-05:00 -0.496922
2012-03-08 19:00:00-05:00 0.306389
2012-03-09 19:00:00-05:00 -2.290613
Freq: D, dtype: float64
In [119]: ts
Out[119]:
2012-01-31 -1.134623
2012-02-29 -1.561819
2012-03-31 -0.260838
2012-04-30 0.281957
2012-05-31 1.523962
Freq: M, dtype: float64
In [120]: ps = ts.to_period()
In [121]: ps
Out[121]:
2012-01 -1.134623
2012-02 -1.561819
2012-03 -0.260838
2012-04 0.281957
2012-05 1.523962
Freq: M, dtype: float64
In [122]: ps.to_timestamp()
Out[122]:
2012-01-01 -1.134623
2012-02-01 -1.561819
2012-03-01 -0.260838
2012-04-01 0.281957
2012-05-01 1.523962
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:
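A sketch of the construction this example assumes (quarterly periods with year ending in November, then shifted to 9am of the month following each quarter end):
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9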
In [126]: ts.head()
Out[126]:
1990-03-01 09:00 -0.902937
1990-06-01 09:00 0.068159
1990-09-01 09:00 -0.057873
1990-12-01 09:00 -0.368204
1991-03-01 09:00 -1.144073
Freq: H, dtype: float64
5.10 Categoricals
pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API
documentation.
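The categorical column shown below assumes a setup along these lines (converting raw grades to a category dtype):
df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
                   "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
df["grade"] = df["raw_grade"].astype("category")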
In [129]: df["grade"]
Out[129]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
Reorder the categories and simultaneously add the missing categories (methods under Series .cat return a new
Series by default).
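A sketch of the two steps this refers to, first renaming the existing categories and then setting the full ordered set (assignment to Series.cat.categories operates in place):
df["grade"].cat.categories = ["very good", "good", "very bad"]
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad",
                                              "medium", "good", "very good"])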
In [132]: df["grade"]
Out[132]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [133]: df.sort_values(by="grade")
Out[133]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
In [134]: df.groupby("grade").size()
Out[134]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
5.11 Plotting
In [136]: ts = ts.cumsum()
In [137]: ts.plot()
Out[137]: <matplotlib.axes._subplots.AxesSubplot at 0x7faa453cf710>
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
In [139]: df = df.cumsum()
5.12 Getting Data In/Out
5.12.1 CSV
In [141]: df.to_csv('foo.csv')
In [142]: pd.read_csv('foo.csv')
Out[142]:
Unnamed: 0 A B C D
5.12.2 HDF5
In [143]: df.to_hdf('foo.h5','df')
In [144]: pd.read_hdf('foo.h5','df')
Out[144]:
A B C D
2000-01-01 0.266457 -0.399641 -0.219582 1.186860
2000-01-02 -1.170732 -0.345873 1.653061 -0.282953
2000-01-03 -1.734933 0.530468 2.060811 -0.515536
2000-01-04 -1.555121 1.452620 0.239859 -1.156896
2000-01-05 0.578117 0.511371 0.103552 -2.428202
2000-01-06 0.478344 0.449933 -0.741620 -1.962409
2000-01-07 1.235339 -0.091757 -1.543861 -1.084753
... ... ... ... ...
2002-09-20 -10.628548 -9.153563 -7.883146 28.313940
2002-09-21 -10.390377 -8.727491 -6.399645 30.914107
2002-09-22 -8.985362 -8.485624 -4.669462 31.367740
2002-09-23 -9.558560 -8.781216 -4.499815 30.518439
2002-09-24 -9.902058 -9.340490 -4.386639 30.105593
2002-09-25 -10.216020 -9.480682 -3.933802 29.758560
2002-09-26 -11.856774 -10.671012 -3.216025 29.369368
5.12.3 Excel
5.13 Gotchas
If you are attempting to perform an operation you might see an exception like:
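For example, truth-testing a Series raises an error along these lines:
>>> if pd.Series([False, True, False]):
...     print("I was true")
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().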
SIX
TUTORIALS
This is a guide to many pandas tutorials, geared mainly for new users.
The goal of this 2015 cookbook (by Julia Evans) is to give you some concrete examples for getting started with pandas.
These are examples with real-world data, and all the bugs and weirdness that entails.
Here are links to the v0.2 release. For an up-to-date table of contents, see the pandas-cookbook GitHub repository. To
run the examples in this tutorial, you’ll need to clone the GitHub repository and get IPython Notebook running. See
How to use this cookbook.
• A quick tour of the IPython Notebook: Shows off IPython’s awesome tab completion and magic functions.
• Chapter 1: Reading your data into pandas is pretty much the easiest thing. Even when the encoding is wrong!
• Chapter 2: It’s not totally obvious how to select data from a pandas dataframe. Here we explain the basics (how
to take slices and get columns)
• Chapter 3: Here we get into serious slicing and dicing and learn how to filter dataframes in complicated ways,
really fast.
• Chapter 4: Groupby/aggregate is seriously my favorite thing about pandas and I use it all the time. You should
probably read this.
• Chapter 5: Here you get to find out if it’s cold in Montreal in the winter (spoiler: yes). Web scraping with pandas
is fun! Here we combine dataframes.
• Chapter 6: Strings with pandas are great. It has all these vectorized string operations and they’re the best. We
will turn a bunch of strings containing “Snow” into vectors of numbers in a trice.
• Chapter 7: Cleaning up messy data is never a joy, but with pandas it’s easier.
• Chapter 8: Parsing Unix timestamps is confusing at first but it turns out to be really easy.
• Chapter 9: Reading data from SQL databases.
This guide is a comprehensive introduction to the data analysis process using the Python data ecosystem and an
interesting open dataset. There are four sections covering selected topics as follows:
• Munging Data
• Aggregating Data
• Visualizing Data
• Time Series
Practice your skills with real data sets and exercises. For more resources, please visit the main repository.
• 01 - Getting & Knowing Your Data
• 02 - Filtering & Sorting
• 03 - Grouping
• 04 - Apply
• 05 - Merge
• 06 - Stats
• 07 - Visualization
• 08 - Creating Series and DataFrames
• 09 - Time Series
• 10 - Deleting
Tutorial series written in 2016 by Tom Augspurger. The source may be found in the GitHub repository
TomAugspurger/effective-pandas.
• Modern Pandas
• Method Chaining
• Indexes
• Performance
• Tidy Data
• Visualization
• Timeseries
SEVEN
COOKBOOK
This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to
this documentation.
Adding interesting links and/or inline examples to this section is a great First Pull Request.
Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to augment the
StackOverflow and GitHub links. Many of the links contain expanded information, above what the in-line examples offer.
Pandas (pd) and Numpy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for
newer users.
These examples are written for Python 3. Minor tweaks might be necessary for earlier python versions.
7.1 Idioms
In [1]: df = pd.DataFrame(
...: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...:
Out[1]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
7.1.1 if-then...
In [6]: df.where(df_mask,-1000)
Out[6]:
AAA BBB CCC
0 4 -1000 2000
1 5 -1000 -1000
2 6 -1000 555
3 7 -1000 -1000
In [7]: df = pd.DataFrame(
...: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...:
Out[7]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
7.1.2 Splitting
In [9]: df = pd.DataFrame(
...: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
...:
Out[9]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [16]: df = pd.DataFrame(
....: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
....:
Out[16]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [17]: aValue = 43.0
In [18]: df.loc[(df.CCC-aValue).abs().argsort()]
Out[18]:
AAA BBB CCC
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
In [19]: df = pd.DataFrame(
....: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
....:
Out[19]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [26]: df[AllCrit]
Out[26]:
7.2 Selection
7.2.1 DataFrames
In [30]: df = pd.DataFrame(data=data,index=['foo','bar','boo','kar']); df
Out[30]:
AAA BBB CCC
foo 4 10 100
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
In [33]: df.loc['bar':'kar']
Out[33]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
In [37]: df = pd.DataFrame(
....: {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]});
˓→df
....:
Out[37]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
7.2.2 Panels
Extend a panel frame by transposing, adding a new dimension, and transposing back to the original dimensions
In [42]: df1, df2, df3 = pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols)
In [43]: pf = pd.Panel({'df1':df1,'df2':df2,'df3':df3});pf
Out[43]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 4 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to D
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 100 (major_axis) x 5 (minor_axis)
Items axis: df1 to df3
Major_axis axis: 2013-01-01 00:00:00 to 2013-04-10 00:00:00
Minor_axis axis: A to F
Mask a panel by using np.where and then reconstructing the panel with the new masked values
In [45]: df = pd.DataFrame(
....: {'AAA' : [1,2,1,3], 'BBB' : [1,1,2,2], 'CCC' : [2,1,3,1]}); df
....:
Out[45]:
AAA BBB CCC
0 1 1 2
1 2 1 1
2 1 2 3
3 3 2 1
7.3 MultiIndexing
# As Labelled Index
In [54]: df = df.set_index('row');df
Out[54]:
One Two
X Y X Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# Now stack & reset the index
In [56]: df = df.stack(0).reset_index(1); df
Out[56]:
  level_1     X     Y
row
0 One 1.10 1.20
0 Two 1.11 1.22
1 One 1.10 1.20
1 Two 1.11 1.22
2 One 1.10 1.20
2 Two 1.11 1.22
# And fix the labels (Notice the label 'level_1' got added automatically)
In [57]: df.columns = ['Sample','All_X','All_Y'];df
Out[57]:
  Sample  All_X  All_Y
row
0    One   1.10   1.20
0    Two   1.11   1.22
1    One   1.10   1.20
1    Two   1.11   1.22
2    One   1.10   1.20
2    Two   1.11   1.22
7.3.1 Arithmetic
In [59]: df = pd.DataFrame(np.random.randn(2,6),index=['n','m'],columns=cols); df
Out[59]:
A B C
O I O I O I
n 1.920906 -0.388231 -2.314394 0.665508 0.402562 0.399555
m -1.765956 0.850423 0.388054 0.992312 0.744086 -0.739776
In [60]: df = df.div(df['C'],level=1); df
Out[60]:
A B C
O I O I O I
n 4.771702 -0.971660 -5.749162 1.665625 1.0 1.0
m -2.373321 -1.149568 0.521518 -1.341367 1.0 1.0
7.3.2 Slicing
In [63]: df = pd.DataFrame([11,22,33,44,55],index,['MyData']); df
Out[63]:
MyData
AA one 11
six 22
BB one 33
two 44
six 55
To take the cross section of the 1st level on the 1st axis (the index):
In [64]: df.xs('BB',level=0,axis=0)  # Note: level and axis are optional, and default to zero
Out[64]:
MyData
one 33
two 44
six 55
In [65]: df.xs('six',level=1,axis=0)
Out[65]:
MyData
AA 22
BB 55
In [71]: df = pd.DataFrame(data,indx,cols); df
Out[71]:
Exams Labs
I II I II
Student Course
Ada Comp 70 71 72 73
Math 71 73 75 74
Sci 72 75 75 75
Quinn Comp 73 74 75 76
Math 74 76 78 77
Sci 75 78 78 78
Violet Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [73]: df.loc['Violet']
Out[73]:
Exams Labs
I II I II
Course
Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [72]: All = slice(None)
In [74]: df.loc[(All,'Math'),All]
Out[74]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
Violet Math 77 79 81 80
In [75]: df.loc[(slice('Ada','Quinn'),'Math'),All]
Out[75]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
In [76]: df.loc[(All,'Math'),('Exams')]
Out[76]:
I II
Student Course
Ada Math 71 73
Quinn Math 74 76
Violet Math 77 79
In [77]: df.loc[(All,'Math'),(All,'II')]
Out[77]:
Exams Labs
II II
Student Course
Ada Math 73 74
Quinn Math 76 77
Violet Math 79 80
7.3.3 Sorting
7.3.4 Levels
In [81]: df
Out[81]:
A
2013-08-01 -1.054874
2013-08-02 -0.179642
2013-08-05 0.639589
2013-08-06 NaN
2013-08-07 1.906684
2013-08-08 0.104050
In [82]: df.reindex(df.index[::-1]).ffill()
Out[82]:
A
2013-08-08 0.104050
2013-08-07 1.906684
2013-08-06 1.906684
2013-08-05 0.639589
2013-08-02 -0.179642
2013-08-01 -1.054874
7.4.1 Replace
7.5 Grouping
Out[84]:
animal
cat L
dog M
fish M
dtype: object
Using get_group
In [85]: gb = df.groupby(['animal'])
In [86]: gb.get_group('cat')
Out[86]:
animal size weight adult
0 cat S 8 False
2 cat M 11 False
5 cat L 12 True
6 cat L 12 True
In [89]: expected_df
Out[89]:
size weight adult
animal
cat L 12.4375 True
dog L 20.0000 True
fish L 1.2500 True
Expanding Apply
In [90]: S = pd.Series([i / 100.0 for i in range(1,11)])
In [95]: gb = df.groupby('A')
In [97]: gb.transform(replace)
Out[97]:
B
0 1.0
1 1.0
2 1.0
3 2.0
In [102]: sorted_df
Out[102]:
code data flag
1 bar -0.21 True
4 bar -0.59 False
0 foo 0.16 False
3 foo 0.45 True
2 baz 0.33 False
5 baz 0.62 True
In [107]: ts.resample("5min").apply(mhc)
Out[107]:
Custom 2014-10-07 00:00:00 1.234
2014-10-07 00:05:00 NaT
2014-10-07 00:10:00 7.404
2014-10-07 00:15:00 NaT
Max 2014-10-07 00:00:00 2
2014-10-07 00:05:00 4
2014-10-07 00:10:00 7
2014-10-07 00:15:00 9
Mean 2014-10-07 00:00:00 1
2014-10-07 00:05:00 3.5
2014-10-07 00:10:00 6
2014-10-07 00:15:00 8.5
dtype: object
In [108]: ts
Out[108]:
2014-10-07 00:00:00 0
2014-10-07 00:02:00 1
2014-10-07 00:04:00 2
2014-10-07 00:06:00 3
2014-10-07 00:08:00 4
2014-10-07 00:10:00 5
2014-10-07 00:12:00 6
2014-10-07 00:14:00 7
2014-10-07 00:16:00 8
2014-10-07 00:18:00 9
Freq: 2T, dtype: int64
In [112]: df = pd.DataFrame(
.....: {u'line_race': [10, 10, 8, 10, 10, 8],
.....: u'beyer': [99, 102, 103, 103, 88, 100]},
.....: index=[u'Last Gunfighter', u'Last Gunfighter', u'Last Gunfighter',
.....: u'Paynter', u'Paynter', u'Paynter']); df
.....:
Out[112]:
line_race beyer
Last Gunfighter 10 99
Last Gunfighter 10 102
Last Gunfighter 8 103
Paynter 10 103
Paynter 10 88
Paynter 8 100
In [113]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)
In [114]: df
Out[114]:
line_race beyer beyer_shifted
Last Gunfighter 10 99 NaN
Last Gunfighter 10 102 99.0
Last Gunfighter 8 103 102.0
Paynter 10 103 NaN
Paynter 10 88 103.0
Paynter 8 100 88.0
In [115]: df = pd.DataFrame({'host':['other','other','that','this','this'],
.....: 'service':['mail','web','mail','mail','web'],
.....: 'no':[1, 2, 1, 2, 1]}).set_index(['host', 'service'])
.....:
In [118]: df_count
Out[118]:
host service no
0 other web 2
1 that mail 1
2 this mail 2
0 0
1 1
2 0
3 1
4 2
5 3
6 0
7 1
8 2
Name: A, dtype: int64
7.5.2 Splitting
Splitting a frame
Create a list of dataframes, split using a delineation based on logic included in rows.
In [124]: dfs[0]
Out[124]:
Case Data
0 A 0.174068
1 A -0.439461
2 A -0.741343
3 B -0.079673
In [125]: dfs[1]
Out[125]:
Case Data
4 A -0.922875
5 A 0.303638
6 B -0.917368
In [126]: dfs[2]
Out[126]:
Case Data
7 A -1.624062
8 A -0.758514
7.5.3 Pivot
In [129]: table.stack('City')
Out[129]:
Sales
Province City
AL All 12.0
Calgary 8.0
Edmonton 4.0
BC All 16.0
Vancouver 16.0
MN All 3.0
Winnipeg 3.0
... ...
All Calgary 8.0
Edmonton 4.0
Montreal 6.0
Toronto 13.0
Vancouver 16.0
Windsor 1.0
Winnipeg 3.0
7.5.4 Apply
In [138]: df = pd.DataFrame(data=np.random.randn(2000,2)/10000,
.....: index=pd.date_range('2001-01-01',periods=2000),
.....: columns=['A','B']); df
.....:
Out[138]:
A B
2001-01-01 0.000032 -0.000004
2001-01-02 -0.000001 0.000207
2001-01-03 0.000120 -0.000220
2001-01-04 -0.000083 -0.000165
2001-01-05 -0.000047 0.000156
2001-01-06 0.000027 0.000104
2001-01-07 0.000041 -0.000101
... ... ...
2006-06-17 -0.000034 0.000034
2006-06-18 0.000002 0.000166
2006-06-19 0.000023 -0.000081
2006-06-20 -0.000061 0.000012
2006-06-21 -0.000111 0.000027
2006-06-22 -0.000061 -0.000009
2006-06-23 0.000074 -0.000138
Out[140]:
2001-01-01 -0.001373
2001-01-02 -0.001705
2001-01-03 -0.002885
2001-01-04 -0.002987
2001-01-05 -0.002384
2001-01-06 -0.004700
2001-01-07 -0.005500
...
Out[142]:
Open Close Volume
2014-01-01 0.011174 -0.653039 1581
2014-01-02 0.214258 1.314205 1707
2014-01-03 -1.046922 -0.341915 1768
2014-01-04 -0.752902 -1.303586 836
2014-01-05 -0.410793 0.396288 694
2014-01-06 0.648401 -0.548006 796
2014-01-07 0.737320 0.481380 265
... ... ... ...
2014-04-04 0.120378 -2.548128 564
2014-04-05 0.231661 0.223346 1908
2014-04-06 0.952664 1.228841 1090
2014-04-07 -0.176090 0.552784 1813
2014-04-08 1.781318 -0.795389 1103
2014-04-09 -0.753493 -0.018815 1456
2014-04-10 -1.047997 1.138197 1193
In [144]: window = 5
In [146]: s.round(2)
Out[146]:
2014-01-06 -0.03
2014-01-07 0.07
2014-01-08 -0.40
2014-01-09 -0.81
2014-01-10 -0.63
2014-01-11 -0.86
2014-01-12 -0.36
...
7.6 Timeseries
Between times
Using indexer between time
Constructing a datetime range that excludes weekends and includes only certain times
Vectorized Lookup
Aggregation and plotting time series
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series.
How to rearrange a Python pandas DataFrame?
Dealing with duplicates when reindexing a timeseries to a specified frequency
Calculate the first day of the month for each entry in a DatetimeIndex
In [147]: dates = pd.date_range('2000-01-01', periods=5)
In [148]: dates.to_period(freq='M').to_timestamp()
Out[148]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
'2000-01-01'],
dtype='datetime64[ns]', freq=None)
7.6.1 Resampling
7.7 Merge
In [152]: df = df1.append(df2,ignore_index=True); df
Out[152]:
A B C
0 -0.480676 -1.305282 -0.212846
1 1.979901 0.363112 -0.275732
2 -1.433852 0.580237 -0.013672
3 1.776623 -0.803467 0.521517
4 -0.302508 -0.442948 -0.395768
5 -0.249024 -0.031510 2.413751
6 -0.480676 -1.305282 -0.212846
7 1.979901 0.363112 -0.275732
8 -1.433852 0.580237 -0.013672
9 1.776623 -0.803467 0.521517
10 -0.302508 -0.442948 -0.395768
11 -0.249024 -0.031510 2.413751
Out[155]:
Area Bins Test_0_L Data_L Test_1_L Test_0_R Data_R Test_1_R
0 A 110 0 -0.378914 -1 1 -1.032527 0
1 A 160 0 -1.402816 -1 1 0.715333 0
2 A 160 1 0.715333 0 2 -0.091438 1
3 C 40 0 1.608418 -1 1 0.753207 0
7.8 Plotting
In [156]: df = pd.DataFrame(
.....: {u'stratifying_var': np.random.uniform(0, 100, 20),
.....: u'price': np.random.normal(100, 5, 20)})
.....:
7.9 Data In/Out
7.9.1 CSV
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of
the individual frames into a list, and then combine the frames in the list using pd.concat():
You can use the same approach to read all files matching a pattern. Here is an example using glob:
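A minimal sketch (the file pattern here is illustrative):
import glob

files = glob.glob('file_*.csv')
result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)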
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
In [30]: i = pd.date_range('20000101',periods=10000)
In [31]: df = pd.DataFrame(dict(year=i.year, month=i.month, day=i.day))
In [32]: df.head()
Out[32]:
day month year
0 1 1 2000
1 2 1 2000
2 3 1 2000
3 4 1 2000
4 5 1 2000
In [35]: ds.head()
Out[35]:
7.9.2 SQL
7.9.3 Excel
7.9.4 HTML
Reading HTML tables from a server that cannot handle the default request header
7.9.5 HDFStore
In [170]: df = pd.DataFrame(np.random.randn(8,3))
In [172]: store.put('df',df)
In [174]: store.get_storer('df').attrs.my_attribute
Out[174]: {'A': 10}
pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs.
For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit
machine,
#include <stdio.h>
#include <stdint.h>
return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element
of the struct corresponds to a column in the frame:
# note that the offsets are larger than the size of the type because of
# struct padding
names = 'count', 'avg', 'scale'
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
Note: The offsets of the structure elements may be different depending on the architecture of the machine on which
the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not
cross platform. We recommend either HDF5 or msgpack, both of which are supported by pandas’ IO facilities.
7.10 Computation
7.11 Timedeltas
In [176]: s - s.max()
Out[176]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [177]: s.max() - s
Out[177]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [178]: s - datetime.datetime(2011,1,1,3,5)
In [179]: s + datetime.timedelta(minutes=5)
Out[179]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [180]: datetime.datetime(2011,1,1,3,5) - s
In [181]: datetime.timedelta(minutes=5) + s
Out[181]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [186]: df.dtypes
Out[186]:
A datetime64[ns]
Another example
Values can be set to NaT using np.nan, similar to datetime
In [187]: y = s - s.shift(); y
Out[187]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
To globally provide aliases for axis names, one can define these 2 functions:
In [193]: df2.sum(axis='myaxis2')
Out[193]:
i1 0.745167
i2 -0.176251
i3 0.014354
dtype: float64
To create a dataframe from every combination of some given values, like R’s expand.grid() function, we can
create a dict where the keys are column names and the values are lists of the data values:
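A minimal implementation of the helper the call below assumes (itertools.product generates the combinations):
import itertools

def expand_grid(data_dict):
    rows = itertools.product(*data_dict.values())
    return pd.DataFrame.from_records(rows, columns=data_dict.keys())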
In [196]: df = expand_grid(
.....: {'height': [60, 70],
.....: 'weight': [100, 140, 180],
.....: 'sex': ['Male', 'Female']})
.....:
In [197]: df
Out[197]:
height weight sex
0 60 100 Male
1 60 100 Female
2 60 140 Male
3 60 140 Female
4 60 180 Male
5 60 180 Female
6 70 100 Male
7 70 100 Female
8 70 140 Male
9 70 140 Female
10 70 180 Male
11 70 180 Female
EIGHT
INTRO TO DATA STRUCTURES
We’ll start with a quick, non-comprehensive overview of the fundamental data structures in pandas to get you started.
The fundamental behavior about data types, indexing, and axis labeling / alignment applies across all of the objects. To
get started, import NumPy and load pandas into your namespace:
Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken
unless done so explicitly by you.
We’ll give a brief intro to the data structures, then consider all of the broad categories of functionality and methods in
separate sections.
8.1 Series
Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers,
Python objects, etc.). The axis labels are collectively referred to as the index. The basic method to create a Series is
to call:
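s = pd.Series(data, index=index)
Here, data can be a Python dict, an ndarray, or a scalar value, and index is a list of axis labels. The Series displayed
below assumes a construction from an ndarray, e.g. s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']).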
In [4]: s
Out[4]:
a 0.4691
b -0.2829
c   -1.5091
d   -1.1356
e    1.2121
dtype: float64
In [5]: s.index
Out[5]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.1732
1 0.1192
2 -1.0442
3 -0.8618
4 -2.1046
dtype: float64
Note: pandas supports non-unique index values. If an operation that does not support duplicate index values is
attempted, an exception will be raised at that time. The reason for being lazy is nearly all performance-based (there
are many instances in computations, like parts of GroupBy, where the index is not used).
From dict
Series can be instantiated from dicts:
In [7]: d = {'b' : 1, 'a' : 0, 'c' : 2}
In [8]: pd.Series(d)
Out[8]:
b 1
a 0
c 2
dtype: int64
Note: When the data is a dict, and an index is not passed, the Series index will be ordered by the dict’s insertion
order, if you’re using Python version >= 3.6 and Pandas version >= 0.23.
If you’re using Python < 3.6 or Pandas < 0.23, and an index is not passed, the Series index will be the lexically
ordered list of dict keys.
In the example above, if you were on a Python version lower than 3.6 or a Pandas version lower than 0.23, the Series
would be ordered by the lexical order of the dict keys (i.e. ['a', 'b', 'c'] rather than ['b', 'a', 'c']).
If an index is passed, the values in data corresponding to the labels in the index will be pulled out.
In [9]: d = {'a' : 0., 'b' : 1., 'c' : 2.}
In [10]: pd.Series(d)
Out[10]:
a 0.0
b 1.0
c 2.0
dtype: float64
Note: NaN (not a number) is the standard missing data marker used in pandas.
Series acts very similarly to a ndarray, and is a valid argument to most NumPy functions. However, operations
such as slicing will also slice the index.
In [13]: s[0]
Out[13]: 0.46911229990718628
In [14]: s[:3]
Out[14]:
a 0.4691
b -0.2829
c -1.5091
dtype: float64
In [15]: s[s > s.median()]
Out[15]:
a    0.4691
e    1.2121
dtype: float64
In [16]: s[[4, 3, 1]]
Out[16]:
e    1.2121
d   -1.1356
b   -0.2829
dtype: float64
In [17]: np.exp(s)
Out[17]:
a 1.5986
b 0.7536
c 0.2211
d 0.3212
e 3.3606
dtype: float64
A Series is like a fixed-size dict in that you can get and set values by index label:
In [18]: s['a']
Out[18]: 0.46911229990718628
In [19]: s['e'] = 12.
In [20]: s
Out[20]:
a 0.4691
b -0.2829
c -1.5091
d -1.1356
e 12.0000
dtype: float64
In [21]: 'e' in s
Out[21]: True
In [22]: 'f' in s
Out[22]: False
>>> s['f']
KeyError: 'f'
Using the get method, a missing label will return None or specified default:
In [23]: s.get('f')
When working with raw NumPy arrays, looping through value-by-value is usually not necessary. The same is true
when working with Series in pandas. Series can also be passed into most NumPy methods expecting an ndarray.
In [25]: s + s
Out[25]:
a 0.9382
b -0.5657
c -3.0181
d -2.2713
e 24.0000
dtype: float64
In [26]: s * 2
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\Out[26]:
˓→
a 0.9382
b -0.5657
c -3.0181
d -2.2713
e 24.0000
dtype: float64
In [27]: np.exp(s)
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
˓→
a 1.5986
b 0.7536
c 0.2211
d 0.3212
e 162754.7914
dtype: float64
A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same
labels.
The result of an operation between unaligned Series will have the union of the indexes involved. If a label is not found
in one Series or the other, the result will be marked as missing NaN. Being able to write code without doing any explicit
data alignment grants immense freedom and flexibility in interactive data analysis and research. The integrated data
alignment features of the pandas data structures set pandas apart from the majority of related tools for working with
labeled data.
Note: In general, we chose to make the default result of operations between differently indexed objects yield the
union of the indexes in order to avoid loss of information. Having an index label, though the data is missing, is
typically important information as part of a computation. You of course have the option of dropping labels with
missing data via the dropna function.
Series can also have a name attribute:
In [29]: s = pd.Series(np.random.randn(5), name='something')
In [30]: s
Out[30]:
0 -0.4949
1 1.0718
2 0.7216
3 -0.7068
4 -1.0396
Name: something, dtype: float64
In [31]: s.name
Out[31]: 'something'
The Series name will be assigned automatically in many cases, in particular when taking 1D slices of DataFrame as
you will see below.
New in version 0.18.0.
You can rename a Series with the pandas.Series.rename() method.
In [32]: s2 = s.rename("different")
In [33]: s2.name
Out[33]: 'different'
8.2 DataFrame
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it
like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object.
Like Series, DataFrame accepts many different kinds of input:
• Dict of 1D ndarrays, lists, dicts, or Series
• 2-D numpy.ndarray
• Structured or record ndarray
• A Series
• Another DataFrame
Along with the data, you can optionally pass index (row labels) and columns (column labels) arguments. If you pass
an index and / or columns, you are guaranteeing the index and / or columns of the resulting DataFrame. Thus, a dict
of Series plus a specific index will discard all data not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data based on common sense rules.
Note: When the data is a dict, and columns is not specified, the DataFrame columns will be ordered by the dict’s
insertion order, if you are using Python version >= 3.6 and Pandas >= 0.23.
If you are using Python < 3.6 or Pandas < 0.23, and columns is not specified, the DataFrame columns will be the
lexically ordered list of dict keys.
The resulting index will be the union of the indexes of the various Series. If there are any nested dicts, these will first
be converted to Series. If no columns are passed, the columns will be the ordered list of dict keys.
In [35]: df = pd.DataFrame(d)
In [36]: df
Out[36]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
In [38]: pd.DataFrame(d, index=['d', 'b', 'a'], columns=['two', 'three'])
Out[38]:
   two three
d 4.0 NaN
b 2.0 NaN
a 1.0 NaN
The row and column labels can be accessed respectively by accessing the index and columns attributes:
Note: When a particular set of columns is passed along with a dict of data, the passed columns override the keys in
the dict.
In [39]: df.index
Out[39]: Index(['a', 'b', 'c', 'd'], dtype='object')
In [40]: df.columns
Out[40]: Index(['one', 'two'], dtype='object')
The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.
In [41]: d = {'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]}
In [42]: pd.DataFrame(d)
Out[42]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0
In [46]: pd.DataFrame(data)
Out[46]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'
In [48]: pd.DataFrame(data, columns=['C', 'A', 'B'])
Out[48]:
         C  A    B
0 b'Hello' 1 2.0
1 b'World' 2 3.0
Note: DataFrame is not intended to work exactly like a 2-dimensional NumPy ndarray.
In [49]: data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
In [50]: pd.DataFrame(data2)
Out[50]:
a b c
0 1 2 NaN
1 5 10 20.0
In [52]: pd.DataFrame(data2, columns=['a', 'b'])
Out[52]:
   a   b
0 1 2
1 5 10
The result will be a DataFrame with the same index as the input Series, and with one column whose name is the
original name of the Series (only if no other column name provided).
Missing Data
Much more will be said on this topic in the Missing data section. To construct a DataFrame with missing data, we use
np.nan to represent missing values. Alternatively, you may pass a numpy.MaskedArray as the data argument to
the DataFrame constructor, and its masked entries will be considered missing.
DataFrame.from_dict
DataFrame.from_dict takes a dict of dicts or a dict of array-like sequences and returns a DataFrame. It operates
like the DataFrame constructor except for the orient parameter which is 'columns' by default, but which can
be set to 'index' in order to use the dict keys as row labels.
If you pass orient='index', the keys will be the row labels. In this case, you can also pass the desired column
names:
DataFrame.from_records
DataFrame.from_records takes a list of tuples or an ndarray with structured dtype. It works analogously to the
normal DataFrame constructor, except that the resulting DataFrame index may be a specific field of the structured
dtype. For example:
In [56]: data
Out[56]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])
In [57]: pd.DataFrame.from_records(data, index='C')
Out[57]:
          A    B
C
b'Hello' 1 2.0
b'World' 2 3.0
You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting
columns works with the same syntax as the analogous dict operations:
In [58]: df['one']
Out[58]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
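The frame shown next assumes two new columns have been added in the same dict-like style, for example:
df['three'] = df['one'] * df['two']
df['flag'] = df['one'] > 2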
In [61]: df
Out[61]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False
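Columns can be deleted or popped like with a dict; the narrower frame below assumes, for example:
del df['two']
three = df.pop('three')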
In [64]: df
Out[64]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False
When inserting a scalar value, it will naturally be propagated to fill the column:
In [65]: df['foo'] = 'bar'
In [66]: df
Out[66]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar
When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame’s
index:
In [67]: df['one_trunc'] = df['one'][:2]
In [68]: df
Out[68]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN
You can insert raw ndarrays but their length must match the length of the DataFrame’s index.
By default, columns get inserted at the end. The insert function is available to insert at a particular location in the
columns:
In [69]: df.insert(1, 'bar', df['one'])
In [70]: df
Out[70]:
   one  bar   flag  foo  one_trunc
a  1.0  1.0  False  bar        1.0
b  2.0  2.0  False  bar        2.0
c  3.0  3.0   True  bar        NaN
d  NaN  NaN  False  bar        NaN
Inspired by dplyr’s mutate verb, DataFrame has an assign() method that allows you to easily create new columns
that are potentially derived from existing columns.
In [72]: iris.head()
Out[72]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
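The precomputed-value example referred to next is along these lines (adding a sepal_ratio column derived from two existing columns):
(iris.assign(sepal_ratio=iris['SepalWidth'] / iris['SepalLength'])
     .head())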
In the example above, we inserted a precomputed value. We can also pass in a function of one argument to be evaluated
on the DataFrame being assigned to.
assign always returns a copy of the data, leaving the original DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is useful when you don’t have a reference to the
DataFrame at hand. This is common when using assign in a chain of operations. For example, we can limit the
DataFrame to just those observations with a Sepal Length greater than 5, calculate the ratio, and plot:
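A sketch of such a chain (column names follow the iris frame above; the exact call is assumed):
(iris.query('SepalLength > 5')
     .assign(SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
             PetalRatio=lambda x: x.PetalLength / x.PetalWidth)
     .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))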
Since a function is passed in, the function is computed on the DataFrame being assigned to. Importantly, this is the
DataFrame that’s been filtered to those rows with sepal length greater than 5. The filtering happens first, and then the
ratio calculations. This is an example where we didn’t have a reference to the filtered DataFrame available.
The function signature for assign is simply **kwargs. The keys are the column names for the new fields, and the
values are either a value to be inserted (for example, a Series or NumPy array), or a function of one argument to be
called on the DataFrame. A copy of the original DataFrame is returned, with the new values inserted.
Changed in version 0.23.0.
Starting with Python 3.6 the order of **kwargs is preserved. This allows for dependent assignment, where an
expression later in **kwargs can refer to a column created earlier in the same assign().
In [76]: dfa = pd.DataFrame({"A": [1, 2, 3],
....: "B": [4, 5, 6]})
....:
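With Python 3.6 and later, a later keyword argument can refer to a column created earlier in the same assign; a sketch of the call the next paragraph discusses:
dfa.assign(C=lambda x: x['A'] + x['B'],
           D=lambda x: x['A'] + x['C'])
#    A  B  C   D
# 0  1  4  5   6
# 1  2  5  7   9
# 2  3  6  9  12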
In the second expression, x['C'] will refer to the newly created column, that’s equal to dfa['A'] + dfa['B'].
To write code compatible with all versions of Python, split the assignment in two.
Warning: Dependent assignment may subtly change the behavior of your code between Python 3.6 and older
versions of Python.
If you wish to write code that supports versions of Python before and after 3.6, you’ll need to take care when passing
assign expressions that
• Updating an existing column
• Referring to the newly updated column in the same assign
For example, we’ll update column “A” and then refer to it when creating “B”.
>>> dependent = pd.DataFrame({"A": [1, 1, 1]})
>>> dependent.assign(A=lambda x: x["A"] + 1,
B=lambda x: x["A"] + 2)
For Python 3.5 and earlier the expression creating B refers to the “old” value of A, [1, 1, 1]. The output is
then
A B
0 2 3
1 2 3
2 2 3
For Python 3.6 and later, the expression creating A refers to the “new” value of A, [2, 2, 2], which results in
A B
0 2 4
1 2 4
2 2 4
Row selection, for example, returns a Series whose index is the columns of the DataFrame:
In [80]: df.loc['b']
Out[80]:
one 2
bar 2
flag False
foo bar
one_trunc 2
Name: b, dtype: object
In [81]: df.iloc[2]
Out[81]:
one 3
bar 3
flag True
foo bar
one_trunc NaN
Name: c, dtype: object
For a more exhaustive treatment of sophisticated label-based indexing and slicing, see the section on indexing. We
will address the fundamentals of reindexing / conforming to new sets of labels in the section on reindexing.
Data alignment between DataFrame objects automatically aligns on both the columns and the index (row labels).
Again, the resulting object will have the union of the column and row labels.
In [84]: df + df2
Out[84]:
A B C D
0 0.0457 -0.0141 1.3809 NaN
1 -0.9554 -1.5010 0.0372 NaN
2 -0.6627 1.5348 -0.8597 NaN
3 -2.4529 1.2373 -0.1337 NaN
4 1.4145 1.9517 -2.3204 NaN
5 -0.4949 -1.6497 -1.0846 NaN
6 -1.0476 -0.7486 -0.8055 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
When doing an operation between DataFrame and Series, the default behavior is to align the Series index on the
DataFrame columns, thus broadcasting row-wise. For example:
In [85]: df - df.iloc[0]
Out[85]:
A B C D
0 0.0000 0.0000 0.0000 0.0000
1 -1.3593 -0.2487 -0.4534 -1.7547
2 0.2531 0.8297 0.0100 -1.9912
3 -1.3111 0.0543 -1.7249 -1.6205
4 0.5730 1.5007 -0.6761 1.3673
5 -1.7412 0.7820 -1.2416 -2.0531
6 -1.2408 -0.8696 -0.1533 0.0004
7 -0.7439 0.4110 -0.9296 -0.2824
8 -1.1949 1.3207 0.2382 -1.4826
9 2.2938 1.8562 0.7733 -1.4465
In the special case of working with time series data, and the DataFrame index also contains dates, the broadcasting
will be column-wise:
In [86]: index = pd.date_range('1/1/2000', periods=8)
In [88]: df
Out[88]:
A B C
2000-01-01 -1.2268 0.7698 -1.2812
2000-01-02 -0.7277 -0.1213 -0.0979
2000-01-03 0.6958 0.3417 0.9597
2000-01-04 -1.1103 -0.6200 0.1497
2000-01-05 -0.7323 0.6877 0.1764
2000-01-06 0.4033 -0.1550 0.3016
2000-01-07 -2.1799 -1.3698 -0.9542
2000-01-08 1.4627 -1.7432 -0.8266
In [89]: type(df['A'])
Out[89]: pandas.core.series.Series
In [90]: df - df['A']
Out[90]:
[8 rows x 11 columns]
Warning:
df - df['A']
is now deprecated and will be removed in a future release. The preferred way to replicate this behavior is
df.sub(df['A'], axis=0)
For explicit control over the matching and broadcasting behavior, see the section on flexible binary operations.
Operations with scalars are just as you would expect:
In [91]: df * 5 + 2
Out[91]:
A B C
2000-01-01 -4.1341 5.8490 -4.4062
2000-01-02 -1.6385 1.3935 1.5106
2000-01-03 5.4789 3.7087 6.7986
2000-01-04 -3.5517 -1.0999 2.7487
2000-01-05 -1.6617 5.4387 2.8822
2000-01-06 4.0165 1.2252 3.5081
2000-01-07 -8.8993 -4.8492 -2.7710
2000-01-08 9.3135 -6.7158 -2.1330
In [92]: 1 / df
Out[92]:
A B C
2000-01-01 -0.8151 1.2990 -0.7805
2000-01-02 -1.3742 -8.2436 -10.2163
2000-01-03 1.4372 2.9262 1.0420
2000-01-04 -0.9006 -1.6130 6.6779
2000-01-05 -1.3655 1.4540 5.6675
2000-01-06 2.4795 -6.4537 3.3154
2000-01-07 -0.4587 -0.7300 -1.0480
2000-01-08 0.6837 -0.5737 -1.2098
In [93]: df ** 4
Out[93]:
A B C
2000-01-01 2.2653 0.3512 2.6948e+00
2000-01-02 0.2804 0.0002 9.1796e-05
2000-01-03 0.2344 0.0136 8.4838e-01
2000-01-04 1.5199 0.1477 5.0286e-04
2000-01-05 0.2876 0.2237 9.6924e-04
2000-01-06 0.0265 0.0006 8.2769e-03
2000-01-07 22.5795 3.5212 8.2903e-01
2000-01-08 4.5774 9.2332 4.6683e-01
a b
0 True True
1 True False
2 False True
In [99]: -df1
Out[99]:
a b
0 False True
1 True False
2 False False
8.2.12 Transposing
To transpose, access the T attribute (also the transpose function), similar to an ndarray:
# only show the first 5 rows
In [100]: df[:5].T
Out[100]:
2000-01-01 2000-01-02 2000-01-03 2000-01-04 2000-01-05
A -1.2268 -0.7277 0.6958 -1.1103 -0.7323
B 0.7698 -0.1213 0.3417 -0.6200 0.6877
C -1.2812 -0.0979 0.9597 0.1497 0.1764
Elementwise NumPy ufuncs (log, exp, sqrt, . . . ) and various other NumPy functions can be used with no issues on
DataFrame, assuming the data within are numeric:
In [101]: np.exp(df)
Out[101]:
In [102]: np.asarray(df)
In [103]: df.T.dot(df)
Out[103]:
A B C
A 11.3419 -0.0598 3.0080
B -0.0598 6.5206 2.0833
C 3.0080 2.0833 4.3105
In [104]: s1 = pd.Series(np.arange(5,10))
In [105]: s1.dot(s1)
Out[105]: 255
DataFrame is not intended to be a drop-in replacement for ndarray as its indexing semantics are quite different in
places from a matrix.
Very large DataFrames will be truncated to display them in the console. You can also get a summary using info().
(Here I am reading a CSV version of the baseball dataset from the plyr R package):
In [107]: print(baseball)
id player year stint ... hbp sh sf gidp
0 88641 womacto01 2006 2 ... 0.0 3.0 0.0 0.0
1 88643 schilcu01 2006 1 ... 0.0 0.0 0.0 0.0
.. ... ... ... ... ... ... ... ... ...
98 89533 aloumo01 2007 1 ... 2.0 0.0 3.0 13.0
99 89534 alomasa02 2007 1 ... 0.0 0.0 0.0 0.0
In [108]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
However, using to_string will return a string representation of the DataFrame in tabular form, though it won’t
always fit the console width:
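For example, using the baseball frame loaded above (a minimal sketch):
print(baseball.to_string())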
You can change how much to print on a single row by setting the display.width option:
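For instance (a sketch; the width value here is illustrative):
pd.set_option('display.width', 40)   # default is 80
pd.DataFrame(np.random.randn(3, 12))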
You can adjust the max width of the individual columns by setting display.max_colwidth
In [114]: pd.set_option('display.max_colwidth',30)
In [115]: pd.DataFrame(datafile)
Out[115]:
filename path
0 filename_01 media/user_name/storage/fo...
1 filename_02 media/user_name/storage/fo...
In [116]: pd.set_option('display.max_colwidth',100)
In [117]: pd.DataFrame(datafile)
Out[117]:
filename path
0 filename_01 media/user_name/storage/folder_01/filename_01
1 filename_02 media/user_name/storage/folder_02/filename_02
You can also disable this feature via the expand_frame_repr option. This will print the table in one block.
If a DataFrame column label is a valid Python variable name, the column can be accessed like an attribute:
In [119]: df
Out[119]:
foo1 foo2
0 1.171216 -0.858447
1 0.520260 0.306996
2 -1.197071 -0.028665
3 -1.066969 0.384316
4 -0.303421 1.574159
In [120]: df.foo1
Out[120]:
0 1.171216
1 0.520260
2 -1.197071
3 -1.066969
4 -0.303421
Name: foo1, dtype: float64
The columns are also connected to the IPython completion mechanism so they can be tab-completed:
In [5]: df.fo<TAB>
df.foo1 df.foo2
8.3 Panel
Warning: In 0.20.0, Panel is deprecated and will be removed in a future version. See the section Deprecate
Panel.
Panel is a somewhat less-used, but still important container for 3-dimensional data. The term panel data is derived
from econometrics and is partially responsible for the name pandas: pan(el)-da(ta)-s. The names for the 3 axes are
intended to give some semantic meaning to describing operations involving panel data and, in particular, econometric
analysis of panel data. However, for the strict purposes of slicing and dicing a collection of DataFrame objects, you
may find the axis names slightly arbitrary:
• items: axis 0, each item corresponds to a DataFrame contained inside
• major_axis: axis 1, it is the index (rows) of each of the DataFrames
• minor_axis: axis 2, it is the columns of each of the DataFrames
Construction of Panels works about like you would expect:
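For example, a Panel can be built from a 3-D ndarray with axis labels; a sketch consistent with the display below (the random values are illustrative):
wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
              major_axis=pd.date_range('1/1/2000', periods=5),
              minor_axis=['A', 'B', 'C', 'D'])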
In [122]: wp
Out[122]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
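A Panel can also be constructed from a dict of DataFrame objects; a sketch consistent with the dimensions shown below (the values are illustrative):
data = {'Item1': pd.DataFrame(np.random.randn(4, 3)),
        'Item2': pd.DataFrame(np.random.randn(4, 2))}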
In [124]: pd.Panel(data)
Out[124]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 0 to 3
Minor_axis axis: 0 to 2
Note that the values in the dict need only be convertible to DataFrame. Thus, they can be any of the other valid
inputs to DataFrame as per above.
One helpful factory method is Panel.from_dict, which takes a dictionary of DataFrames as above, together with
named parameters controlling the construction (notably intersect and orient).
orient is especially useful for mixed-type DataFrames. If you pass a dict of DataFrame objects with mixed-type
columns, all of the data will get upcast to dtype=object unless you pass orient='minor':
In [127]: df
Out[127]:
a b
0 foo -0.308853
1 bar -0.681087
2 baz 0.377953
In [130]: panel['a']
Out[130]:
item1 item2
0 foo foo
1 bar bar
2 baz baz
In [131]: panel['b']
Out[131]:
item1 item2
0 -0.308853 -0.308853
1 -0.681087 -0.681087
2 0.377953 0.377953
In [132]: panel['b'].dtypes
Out[132]:
item1 float64
item2 float64
dtype: object
Note: Panel, being less commonly used than Series and DataFrame, has been slightly neglected feature-wise. A
number of methods and options available in DataFrame are not available in Panel.
A DataFrame with a two-level index can be converted to a Panel using the to_panel() method:
In [135]: df.to_panel()
Out[135]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: A to B
Major_axis axis: one to two
Minor_axis axis: x to y
The API for insertion and deletion is the same as for DataFrame. And as with DataFrame, if the item is a valid Python
identifier, you can access it as an attribute and tab-complete it in IPython.
8.3.5 Transposing
A Panel can be rearranged using its transpose method (which does not make a copy by default unless the data are
heterogeneous):
In [138]: wp.transpose(2, 0, 1)
Out[138]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 5 (minor_axis)
Items axis: A to D
Major_axis axis: Item1 to Item3
Minor_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
In [140]: wp.major_xs(wp.major_axis[2])
In [141]: wp.minor_axis
Out[141]: Index(['A', 'B', 'C', 'D'], dtype='object')
In [142]: wp.minor_xs('C')
8.3.7 Squeezing
Another way to change the dimensionality of an object is to squeeze a length-1 object, similar to wp['Item1'].
In [143]: wp.reindex(items=['Item1']).squeeze()
Out[143]:
A B C D
2000-01-01 1.588931 0.476720 0.473424 -0.242861
2000-01-02 -0.014805 -0.284319 0.650776 -1.461665
2000-01-03 -1.137707 -0.891060 -0.693921 1.613616
2000-01-04 0.464000 0.227371 -0.496922 0.306389
2000-01-05 -2.290613 -1.134623 -1.561819 -0.260838
2000-01-01 0.476720
2000-01-02 -0.284319
2000-01-03 -0.891060
2000-01-04 0.227371
2000-01-05 -1.134623
Freq: D, Name: B, dtype: float64
A Panel can be represented in 2D form as a hierarchically indexed DataFrame. See the section hierarchical indexing
for more on this. To convert a Panel to a DataFrame, use the to_frame method:
In [146]: panel.to_frame()
Over the last few years, pandas has increased in both breadth and depth, with new features, datatype support, and
manipulation routines. As a result, supporting efficient indexing and functional routines for Series, DataFrame
and Panel has contributed to an increasingly fragmented and difficult-to-understand codebase.
The 3-D structure of a Panel is much less common for many types of data analysis than the 1-D of the Series or
the 2-D of the DataFrame. Going forward it makes sense for pandas to focus on these areas exclusively.
Oftentimes, one can simply use a MultiIndex DataFrame for easily working with higher dimensional data.
In addition, the xarray package was built from the ground up, specifically in order to support the multi-dimensional
analysis that is one of Panel's main use cases. Here is a link to the xarray panel-transition documentation.
In [147]: p = tm.makePanel()
In [148]: p
Out[148]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 30 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-02-11 00:00:00
Minor_axis axis: A to D
In [150]: p.to_xarray()
Out[150]:
<xarray.DataArray (items: 3, major_axis: 30, minor_axis: 4)>
array([[[-0.390201, 1.562443, -1.085663, 0.136235],
[ 1.207122, 0.763264, -1.114738, 0.886313],
...,
[ 1.592673, -0.571329, 1.998044, 0.303638],
[ 1.559318, -0.026671, -0.244548, -0.917368]],
CHAPTER NINE: ESSENTIAL BASIC FUNCTIONALITY
Here we discuss a lot of the essential functionality common to the pandas data structures. To begin, here's how to create some
of the objects used in the examples from the previous section:
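A sketch of that setup code, consistent with the examples that follow (the random values are illustrative):
index = pd.date_range('1/1/2000', periods=8)
s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=['A', 'B', 'C'])
wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
              major_axis=pd.date_range('1/1/2000', periods=5),
              minor_axis=['A', 'B', 'C', 'D'])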
To view a small sample of a Series or DataFrame object, use the head() and tail() methods. The default number
of elements to display is five, but you may pass a custom number.
In [6]: long_series.head()
Out[6]:
0 0.229453
1 0.304418
2 0.736135
3 -0.859631
4 -0.424100
dtype: float64
In [7]: long_series.tail(3)
Out[7]:
997 -0.351587
998 1.136249
999 -0.448789
dtype: float64
pandas objects have a number of attributes enabling you to access the metadata
• shape: gives the axis dimensions of the object, consistent with ndarray
• Axis labels
– Series: index (only axis)
– DataFrame: index (rows) and columns
– Panel: items, major_axis, and minor_axis
Note, these attributes can be safely assigned to!
In [8]: df[:2]
Out[8]:
A B C
2000-01-01 0.048869 -1.360687 -0.47901
2000-01-02 -0.859661 -0.231595 -0.52775
In [9]: df.columns = [x.lower() for x in df.columns]
In [10]: df
Out[10]:
a b c
2000-01-01 0.048869 -1.360687 -0.479010
2000-01-02 -0.859661 -0.231595 -0.527750
2000-01-03 -1.296337 0.150680 0.123836
2000-01-04 0.571764 1.555563 -0.823761
2000-01-05 0.535420 -1.032853 1.469725
2000-01-06 1.304124 1.449735 0.203109
2000-01-07 -1.032011 0.969818 -0.962723
2000-01-08 1.382083 -0.938794 0.669142
To get the actual data inside a data structure, one need only access the values property:
In [11]: s.values
Out[11]: array([-1.9339, 0.3773, 0.7341, 2.1416, -0.0112])
In [12]: df.values
Out[12]:
array([[ 0.0489, -1.3607, -0.479 ],
[-0.8597, -0.2316, -0.5278],
[-1.2963, 0.1507, 0.1238],
[ 0.5718, 1.5556, -0.8238],
[ 0.5354, -1.0329, 1.4697],
[ 1.3041, 1.4497, 0.2031],
[-1.032 , 0.9698, -0.9627],
[ 1.3821, -0.9388, 0.6691]])
In [13]: wp.values
If a DataFrame or Panel contains homogeneously-typed data, the ndarray can actually be modified in-place, and the
changes will be reflected in the data structure. For heterogeneous data (e.g. some of the DataFrame’s columns are not
all the same dtype), this will not be the case. The values attribute itself, unlike the axis labels, cannot be assigned to.
Note: When working with heterogeneous data, the dtype of the resulting ndarray will be chosen to accommodate all
of the data involved. For example, if strings are involved, the result will be of object dtype. If there are only floats and
integers, the resulting array will be of float dtype.
pandas has support for accelerating certain types of binary numerical and boolean operations using the numexpr
and bottleneck libraries.
These libraries are especially useful when dealing with large data sets, and provide large speedups. numexpr uses
smart chunking, caching, and multiple cores. bottleneck is a set of specialized cython routines that are especially
fast when dealing with arrays that have nans.
Here is a sample (using 100 column x 100,000 row DataFrames):
You are highly encouraged to install both libraries. See the section Recommended Dependencies for more installation
info.
These are both enabled to be used by default; you can control this by setting the options:
New in version 0.20.0.
pd.set_option('compute.use_bottleneck', False)
pd.set_option('compute.use_numexpr', False)
With binary operations between pandas data structures, there are two key points of interest:
• Broadcasting behavior between higher- (e.g. DataFrame) and lower-dimensional (e.g. Series) objects.
• Missing data in computations.
We will demonstrate how to manage these issues independently, though they can be handled simultaneously.
DataFrame has the methods add(), sub(), mul(), div() and related functions radd(), rsub(), ... for
carrying out binary operations. For broadcasting behavior, Series input is of primary interest. Using these functions,
you can either match on the index or columns via the axis keyword:
In [14]: df = pd.DataFrame({'one': pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
   ....:                     'two': pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
   ....:                     'three': pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
   ....:
In [15]: df
Out[15]:
one two three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
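For example, to subtract a row (matching on columns) or a column (matching on the index), a sketch using the df defined above:
row = df.iloc[1]
column = df['two']
df.sub(row, axis='columns')   # or axis=1: match the Series index against df's columns
df.sub(column, axis='index')  # or axis=0: match the Series index against df's index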
With Panel, describing the matching behavior is a bit more difficult, so the arithmetic methods instead (and perhaps
confusingly?) give you the option to specify the broadcast axis. For example, suppose we wished to demean the data
over a particular axis. This can be accomplished by taking the mean over an axis and broadcasting over the same axis:
In [25]: major_mean = wp.mean(axis='major')
In [26]: major_mean
Out[26]:
Item1 Item2
A -0.878036 -0.092218
B -0.060128 0.529811
C 0.099453 -0.715139
D 0.248599 -0.186535
In [27]: wp.sub(major_mean, axis='major')
Out[27]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
Note: I could be convinced to make the axis argument in the DataFrame methods match the broadcasting behavior of
Panel. Though it would require a transition period so users can change their code...
Series and Index also support the divmod() builtin. This function performs floor division and the modulo operation at
the same time, returning a two-tuple of the same type as the left-hand side. For example:
In [28]: s = pd.Series(np.arange(10))
In [29]: s
Out[29]:
In [30]: div, rem = divmod(s, 3)
In [31]: div
Out[31]:
0 0
1 0
2 0
3 1
4 1
5 1
6 2
7 2
8 2
9 3
dtype: int64
In [32]: rem
Out[32]:
0 0
1 1
2 2
3 0
4 1
5 2
6 0
7 1
8 2
9 0
dtype: int64
In [33]: idx = pd.Index(np.arange(10))
In [34]: idx
Out[34]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')
In [35]: div, rem = divmod(idx, 3)
In [36]: div
Out[36]: Int64Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64')
In [37]: rem
Out[37]: Int64Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')
We can also do elementwise divmod():
In [38]: div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])
In [39]: div
Out[39]:
0 0
1 0
2 0
3 1
4 1
5 1
6 1
7 1
8 1
9 1
dtype: int64
In [40]: rem
Out[40]:
0 0
1 1
2 2
3 0
4 0
5 1
6 1
7 2
8 2
9 3
dtype: int64
In Series and DataFrame, the arithmetic functions have the option of inputting a fill_value, namely a value to substitute
when at most one of the values at a location is missing. For example, when adding two DataFrame objects, you may
wish to treat NaN as 0 unless both DataFrames are missing that value, in which case the result will be NaN (you can
later replace NaN with some other value using fillna if you wish).
In [41]: df
Out[41]:
one two three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
In [42]: df2
In [43]: df + df2
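A sketch of the fill_value usage described above, with the df and df2 frames from this section:
df.add(df2, fill_value=0)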
Series and DataFrame have the binary comparison methods eq, ne, lt, gt, le, and ge whose behavior is analogous
to the binary arithmetic operations described above:
In [45]: df.gt(df2)
Out[45]:
one two three
a False False False
b False False False
c False False False
d False False False
In [46]: df2.ne(df)
These operations produce a pandas object of the same type as the left-hand-side input that is of dtype bool. These
boolean objects can be used in indexing operations, see the section on Boolean indexing.
You can apply the reductions: empty, any(), all(), and bool() to provide a way to summarize a boolean result.
You can test if a pandas object is empty, via the empty property.
In [50]: df.empty
Out[50]: False
In [51]: pd.DataFrame(columns=list('ABC')).empty
Out[51]: True
To evaluate single-element pandas objects in a boolean context, use the method bool():
In [52]: pd.Series([True]).bool()
Out[52]: True
In [53]: pd.Series([False]).bool()
Out[53]: False
In [54]: pd.DataFrame([[True]]).bool()
Out[54]: True
In [55]: pd.DataFrame([[False]]).bool()
Out[55]: False
Warning: You might be tempted to do the following:
>>> if df:
...     pass
Or
>>> df and df2
These will both raise errors, as you are trying to compare multiple values:
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
Often you may find that there is more than one way to compute the same result. As a simple example, consider
df+df and df*2. To test that these two computations produce the same result, given the tools shown above, you
might imagine using (df+df == df*2).all(). But in fact, this expression is False:
In [56]: df+df == df*2
Out[56]:
one two three
a True True False
b True True True
c True True True
d False True True
In [57]: (df+df == df*2).all()
Out[57]:
one False
two True
three False
dtype: bool
Notice that the boolean DataFrame df+df == df*2 contains some False values! This is because NaNs do not
compare as equals:
In [58]: np.nan == np.nan
Out[58]: False
So, NDFrames (such as Series, DataFrames, and Panels) have an equals() method for testing equality, with NaNs
in corresponding locations treated as equal.
In [59]: (df+df).equals(df*2)
Out[59]: True
Note that the Series or DataFrame index needs to be in the same order for equality to be True:
In [60]: df1 = pd.DataFrame({'col':['foo', 0, np.nan]})
In [62]: df1.equals(df2)
Out[62]: False
In [63]: df1.equals(df2.sort_index())
Out[63]: True
You can conveniently perform element-wise comparisons when comparing a pandas data structure with a scalar value:
In [64]: pd.Series(['foo', 'bar', 'baz']) == 'foo'
Out[64]:
0 True
1 False
2 False
dtype: bool
Pandas also handles element-wise comparisons between different array-like objects of the same length:
Trying to compare Index or Series objects of different lengths will raise a ValueError:
Note that this is different from the NumPy behavior where a comparison can be broadcast:
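A sketch illustrating these three points (the values are illustrative):
pd.Series(['foo', 'bar', 'baz']) == pd.Index(['foo', 'bar', 'qux'])   # element-wise, same length
pd.Series(['foo', 'bar', 'baz']) == np.array(['foo', 'bar', 'qux'])
# pd.Index(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])        # raises ValueError: lengths must match
np.array([1, 2, 3]) == np.array([2])                                  # NumPy broadcasts: array([False, True, False])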
A problem occasionally arising is the combination of two similar data sets where values in one are preferred over the
other. An example would be two data series representing a particular economic indicator where one is considered to
be of “higher quality”. However, the lower quality series might extend further back in history or have more complete
data coverage. As such, we would like to combine two DataFrame objects where missing values in one DataFrame
are conditionally filled with like-labeled values from the other DataFrame. The function implementing this operation
is combine_first(), which we illustrate:
In [72]: df1
Out[72]:
A B
0 1.0 NaN
1 NaN 2.0
2 3.0 3.0
3 5.0 NaN
4 NaN 6.0
In [73]: df2
Out[73]:
A B
0 5.0 NaN
1 2.0 NaN
2 4.0 3.0
3 NaN 4.0
4 3.0 6.0
5 7.0 8.0
In [74]: df1.combine_first(df2)
Out[74]:
A B
0 1.0 NaN
1 2.0 2.0
2 3.0 3.0
3 5.0 4.0
4 3.0 6.0
5 7.0 8.0
The combine_first() method above calls the more general DataFrame.combine(). This method takes
another DataFrame and a combiner function, aligns the input DataFrames, and then passes the combiner function pairs
of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
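A sketch of such a combiner, assuming the df1 and df2 frames from above:
combiner = lambda x, y: np.where(pd.isna(x), y, x)
df1.combine(df2, combiner)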
There exists a large number of methods for computing descriptive statistics and other related operations on Series,
DataFrame, and Panel. Most of these are aggregations (hence producing a lower-dimensional result) like sum(),
mean(), and quantile(), but some of them, like cumsum() and cumprod(), produce an object of the same
size. Generally speaking, these methods take an axis argument, just like ndarray.{sum, std, ...}, but the axis can be
specified by name or integer:
• Series: no axis argument needed
• DataFrame: “index” (axis=0, default), “columns” (axis=1)
• Panel: “items” (axis=0), “major” (axis=1, default), “minor” (axis=2)
For example:
In [77]: df
Out[77]:
one two three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
In [78]: df.mean(0)
Out[78]:
one -0.272211
two 0.667306
three 0.024661
dtype: float64
In [79]: df.mean(1)
Out[79]:
a 0.011457
b 0.558507
c 0.635781
d -0.839603
dtype: float64
All such methods have a skipna option signaling whether to exclude missing data (True by default):
In [81]: df.sum(axis=1, skipna=True)
Out[81]:
a 0.022914
b 1.675522
c 1.907343
d -1.679206
dtype: float64
Combined with the broadcasting / arithmetic behavior, one can describe various statistical procedures, like standard-
ization (rendering data zero mean and standard deviation 1), very concisely:
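The standardized objects used below can be defined as follows (a sketch; ts_stand standardizes each column, xs_stand each row):
ts_stand = (df - df.mean()) / df.std()
xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0)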
In [83]: ts_stand.std()
Out[83]:
one 1.0
two 1.0
three 1.0
dtype: float64
In [85]: xs_stand.std(1)
Out[85]:
a 1.0
b 1.0
c 1.0
d 1.0
dtype: float64
Note that methods like cumsum() and cumprod() preserve the location of NaN values. This is somewhat different
from expanding() and rolling(). For more details please see this note.
In [86]: df.cumsum()
Out[86]:
one two three
a -1.101558 1.124472 NaN
b -1.278848 3.611576 -0.634293
c -0.816633 3.125511 1.296901
d NaN 2.669223 0.073983
Here is a quick reference summary table of common functions. Each also takes an optional level parameter which
applies only if the object has a hierarchical index.
Function Description
count Number of non-NA observations
sum Sum of values
mean Mean of values
mad Mean absolute deviation
median Arithmetic median of values
min Minimum
max Maximum
mode Mode
abs Absolute Value
prod Product of values
std Bessel-corrected sample standard deviation
var Unbiased variance
sem Standard error of the mean
skew Sample skewness (3rd moment)
kurt Sample kurtosis (4th moment)
quantile Sample quantile (value at %)
cumsum Cumulative sum
cumprod Cumulative product
cummax Cumulative maximum
cummin Cumulative minimum
Note that by chance some NumPy methods, like mean, std, and sum, will exclude NAs on Series input by default:
In [87]: np.mean(df['one'])
Out[87]: -0.27221094480450114
In [88]: np.mean(df['one'].values)
Out[88]: nan
Series.nunique() will return the number of unique non-NA values in a Series. A sketch of setup code consistent with the result below:
In [89]: series = pd.Series(np.random.randn(500))
In [90]: series[20:500] = np.nan
In [91]: series[10:20] = 5
In [92]: series.nunique()
Out[92]: 11
There is a convenient describe() function which computes a variety of summary statistics about a Series or the
columns of a DataFrame (excluding NAs of course):
In [95]: series.describe()
Out[95]:
In [98]: frame.describe()
Out[98]:
a b c d e
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean -0.045109 -0.052045 0.024520 0.006117 0.001141
std 1.029268 1.002320 1.042793 1.040134 1.005207
min -2.915767 -3.294023 -3.610499 -2.907036 -3.010899
25% -0.763783 -0.720389 -0.609600 -0.665896 -0.682900
50% -0.086033 -0.048843 0.006093 0.043191 -0.001651
75% 0.663399 0.620980 0.728382 0.735973 0.656439
max 3.400646 2.925597 3.416896 3.331522 3.007143
In [100]: s = pd.Series(['a', 'a', 'b', 'b', 'a', 'a', np.nan, 'c', 'd', 'a'])
In [101]: s.describe()
Out[101]:
count 9
unique 4
top a
freq 5
dtype: object
Note that on a mixed-type DataFrame object, describe() will restrict the summary to include only numerical
columns or, if none are, only categorical columns:
In [102]: frame = pd.DataFrame({'a': ['Yes', 'Yes', 'No', 'No'], 'b': range(4)})
In [103]: frame.describe()
Out[103]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
This behaviour can be controlled by providing a list of types as include/exclude arguments. The special value
all can also be used:
In [104]: frame.describe(include=['object'])
Out[104]:
a
count 4
unique 2
top Yes
freq 2
In [105]: frame.describe(include=['number'])
Out[105]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
In [106]: frame.describe(include='all')
Out[106]:
a b
count 4 4.000000
unique 2 NaN
top Yes NaN
freq 2 NaN
mean NaN 1.500000
std NaN 1.290994
min NaN 0.000000
25% NaN 0.750000
50% NaN 1.500000
75% NaN 2.250000
max NaN 3.000000
That feature relies on select_dtypes. Refer there for details about accepted inputs.
The idxmin() and idxmax() functions on Series and DataFrame compute the index labels with the minimum and
maximum corresponding values:
In [107]: s1 = pd.Series(np.random.randn(5))
In [108]: s1
Out[108]:
0 -1.649461
1 0.169660
2 1.246181
3 0.131682
4 -2.001988
dtype: float64
In [109]: s1.idxmin(), s1.idxmax()
Out[109]: (4, 2)
In [111]: df1
Out[111]:
A B C
0 -1.273023 0.870502 0.214583
1 0.088452 -0.173364 1.207466
2 0.546121 0.409515 -0.310515
3 0.585014 -0.490528 -0.054639
4 -0.239226 0.701089 0.228656
In [112]: df1.idxmin(axis=0)
Out[112]:
A 0
B 3
C 2
dtype: int64
In [113]: df1.idxmax(axis=1)
Out[113]:
0 B
1 C
2 A
3 A
4 B
dtype: object
When there are multiple rows (or columns) matching the minimum or maximum value, idxmin() and idxmax()
return the first matching index:
In [114]: df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=['A'], index=list('edcba'))
In [115]: df3
Out[115]:
A
e 2.0
d 1.0
c 1.0
b 3.0
a NaN
In [116]: df3['A'].idxmin()
Out[116]: 'd'
Note: idxmin and idxmax are called argmin and argmax in NumPy.
The value_counts() Series method and top-level function computes a histogram of a 1D array of values. It can
also be used as a function on regular arrays:
In [117]: data = np.random.randint(0, 7, size=50)
In [118]: data
Out[118]:
array([3, 3, 0, 2, 1, 0, 5, 5, 3, 6, 1, 5, 6, 2, 0, 0, 6, 3, 3, 5, 0, 4, 3,
3, 3, 0, 6, 1, 3, 5, 5, 0, 4, 0, 6, 3, 6, 5, 4, 3, 2, 1, 5, 0, 1, 1,
6, 4, 1, 4])
In [119]: s = pd.Series(data)
In [120]: s.value_counts()
Out[120]:
3 11
0 9
5 8
6 7
1 7
4 5
2 3
dtype: int64
In [121]: pd.value_counts(data)
Out[121]:
3 11
0 9
5 8
6 7
1 7
4 5
2 3
dtype: int64
Similarly, you can get the most frequently occurring value(s) (the mode) of the values in a Series or DataFrame:
In [123]: s5.mode()
In [125]: df5.mode()
Out[125]:
A B
0 2 -5
Continuous values can be discretized using the cut() (bins based on values) and qcut() (bins based on sample
quantiles) functions:
In [126]: arr = np.random.randn(20)
In [127]: factor = pd.cut(arr, 4)
In [128]: factor
Out[128]:
[(-2.611, -1.58], (0.473, 1.499], (-2.611, -1.58], (-1.58, -0.554], (-0.554, 0.473], .
˓→.., (0.473, 1.499], (0.473, 1.499], (-0.554, 0.473], (-0.554, 0.473], (-0.554, 0.
˓→473]]
Length: 20
Categories (4, interval[float64]): [(-2.611, -1.58] < (-1.58, -0.554] < (-0.554, 0.
˓→473] <
(0.473, 1.499]]
In [129]: factor = pd.cut(arr, [-5, -1, 0, 1, 5])
In [130]: factor
Out[130]:
[(-5, -1], (0, 1], (-5, -1], (-1, 0], (-1, 0], ..., (1, 5], (1, 5], (-1, 0], (-1, 0],
˓→(-1, 0]]
Length: 20
Categories (4, interval[int64]): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]
qcut() computes sample quantiles. For example, we could slice up some normally distributed data into equal-size
quartiles like so:
In [131]: arr = np.random.randn(30)
In [132]: factor = pd.qcut(arr, [0, .25, .5, .75, 1])
In [133]: factor
Out[133]:
[(0.544, 1.976], (0.544, 1.976], (-1.255, -0.375], (0.544, 1.976], (-0.103, 0.544], ..
˓→., (-0.103, 0.544], (0.544, 1.976], (-0.103, 0.544], (-1.255, -0.375], (-0.375, -0.
˓→103]]
Length: 30
Categories (4, interval[float64]): [(-1.255, -0.375] < (-0.375, -0.103] < (-0.103, 0.544] < (0.544, 1.976]]
In [134]: pd.value_counts(factor)
Out[134]:
(0.544, 1.976] 8
(-1.255, -0.375] 8
(-0.103, 0.544] 7
(-0.375, -0.103] 7
dtype: int64
We can also pass infinite values to define the bins:
In [136]: factor = pd.cut(arr, [-np.inf, 0, np.inf])
In [137]: factor
Out[137]:
[(0.0, inf], (0.0, inf], (0.0, inf], (0.0, inf], (-inf, 0.0], ..., (-inf, 0.0], (-inf,
˓→ 0.0], (0.0, inf], (-inf, 0.0], (0.0, inf]]
Length: 20
Categories (2, interval[float64]): [(-inf, 0.0] < (0.0, inf]]
To apply your own or another library's functions to pandas objects, you should be aware of the four methods below.
The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or
Series, row- or column-wise, or elementwise.
1. Tablewise Function Application: pipe()
2. Row or Column-wise Function Application: apply()
3. Aggregation API: agg() and transform()
4. Applying Elementwise Functions: applymap()
DataFrames and Series can of course just be passed into functions. However, if the function needs to be called
in a chain, consider using the pipe() method. Compare the following
>>> # f, g, and h are functions taking and returning DataFrames
>>> f(g(h(df), arg1=1), arg2=2, arg3=3)
with the equivalent
>>> (df.pipe(h)
       .pipe(g, arg1=1)
       .pipe(f, arg2=2, arg3=3)
    )
Pandas encourages the second style, which is known as method chaining. pipe makes it easy to use your own or
another library’s functions in method chains, alongside pandas’ methods.
In the example above, the functions f, g, and h each expected the DataFrame as the first positional argument. What
if the function you wish to apply takes its data as, say, the second argument? In this case, provide pipe with a tuple
of (callable, data_keyword). .pipe will route the DataFrame to the argument specified in the tuple.
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the
second argument, data. We pass in the function, keyword pair (sm.ols, 'data') to pipe:
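A sketch of such a call, assuming a baseball CSV with columns h, hr, year, g and lg (the file path and formula here are illustrative):
import statsmodels.formula.api as sm
bb = pd.read_csv('data/baseball.csv', index_col='id')
(bb.query('h > 0')
   .assign(ln_h=lambda df: np.log(df.h))
   .pipe((sm.ols, 'data'), 'hr ~ ln_h + year + g + C(lg)')
   .fit()
   .summary())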
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly
˓→specified.
[2] The condition number is large, 1.49e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
"""
The pipe method is inspired by Unix pipes and, more recently, dplyr and magrittr, which have introduced the popular
(%>%) (read pipe) operator for R. The implementation of pipe here is quite clean and feels right at home in Python.
Arbitrary functions can be applied along the axes of a DataFrame using the apply() method, which, like the de-
scriptive statistics methods, takes an optional axis argument:
In [141]: df.apply(np.mean)
Out[141]:
one -0.272211
two 0.667306
three 0.024661
dtype: float64
In [142]: df.apply(np.mean, axis=1)
Out[142]:
a 0.011457
b 0.558507
c 0.635781
d -0.839603
dtype: float64
In [143]: df.apply(lambda x: x.max() - x.min())
Out[143]:
one 1.563773
two 2.973170
three 3.154112
dtype: float64
In [144]: df.apply(np.cumsum)
In [145]: df.apply(np.exp)
The apply() method will also dispatch on a string method name:
In [147]: df.apply('mean', axis=1)
Out[147]:
a 0.011457
b 0.558507
c 0.635781
d -0.839603
dtype: float64
The return type of the function passed to apply() affects the type of the final output from DataFrame.apply for
the default behaviour:
• If the applied function returns a Series, the final output is a DataFrame. The columns match the index of
the Series returned by the applied function.
• If the applied function returns any other type, the final output is a Series.
This default behaviour can be overridden using the result_type argument, which accepts three options: reduce,
broadcast, and expand. These will determine how list-like return values expand (or not) to a DataFrame.
apply() combined with some cleverness can be used to answer many questions about a data set. For example,
suppose we wanted to extract the date where the maximum value for each column occurred:
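A sketch of that idea (the frame here is illustrative):
tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'],
                    index=pd.date_range('1/1/2000', periods=1000))
tsdf.apply(lambda x: x.idxmax())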
You may also pass additional arguments and keyword arguments to the apply() method. For instance, consider the
following function you would like to apply:
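For instance, a hypothetical function taking extra positional and keyword arguments, passed through apply() via args and keyword arguments:
def subtract_and_divide(x, sub, divide=1):
    return (x - sub) / divide

df.apply(subtract_and_divide, args=(5,), divide=3)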
Another useful feature is the ability to pass Series methods to carry out some Series operation on each column or row:
In [150]: tsdf
Out[150]:
A B C
2000-01-01 -0.720299 0.546303 -0.082042
2000-01-02 0.200295 -0.577554 -0.908402
2000-01-03 0.102533 1.653614 0.303319
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
In [151]: tsdf.apply(pd.Series.interpolate)
Out[151]:
A B C
2000-01-01 -0.720299 0.546303 -0.082042
2000-01-02 0.200295 -0.577554 -0.908402
2000-01-03 0.102533 1.653614 0.303319
2000-01-04 0.188539 1.391201 0.272754
2000-01-05 0.274546 1.128788 0.242189
2000-01-06 0.360553 0.866374 0.211624
2000-01-07 0.446559 0.603961 0.181059
2000-01-08 0.532566 0.341548 0.150493
2000-01-09 0.330418 1.761200 0.567133
2000-01-10 -0.251020 1.020099 1.893177
Finally, apply() takes an argument raw which is False by default, which converts each row or column into a Series
before applying the function. When set to True, the passed function will instead receive an ndarray object, which has
positive performance implications if you do not need the indexing functionality.
In [154]: tsdf
Out[154]:
A B C
2000-01-01 0.170247 -0.916844 0.835024
2000-01-02 1.259919 0.801111 0.445614
2000-01-03 1.453046 2.430373 0.653093
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 -1.874526 0.569822 -0.609644
2000-01-09 0.812462 0.565894 -1.461363
2000-01-10 -0.985475 1.388154 -0.078747
Using a single function is equivalent to apply(). You can also pass named methods as strings. These will return a
Series of the aggregated output:
In [155]: tsdf.agg(np.sum)
Out[155]:
A 0.835673
B 4.838510
C -0.216025
dtype: float64
In [156]: tsdf.agg('sum')
Out[156]:
A 0.835673
B 4.838510
C -0.216025
dtype: float64
In [157]: tsdf.sum()
Out[157]:
A 0.835673
B 4.838510
C -0.216025
dtype: float64
In [158]: tsdf.A.agg('sum')
Out[158]: 0.83567297915820504
You can pass multiple aggregation arguments as a list. The results of each of the passed functions will be a row in the
resulting DataFrame. These are naturally named from the aggregation function.
In [159]: tsdf.agg(['sum'])
Out[159]:
A B C
sum 0.835673 4.83851 -0.216025
Passing a named function will yield that name for the row:
In [163]: def mymean(x):
.....:     return x.mean()
.....:
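For example, aggregating a single column with the named function defined above (a sketch):
tsdf.A.agg(['sum', mymean])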
Passing a dictionary of column names to a scalar or a list of scalars to DataFrame.agg allows you to customize
which functions are applied to which columns. Note that the results are not in any particular order; you can use an
OrderedDict instead to guarantee ordering.
In [165]: tsdf.agg({'A': 'mean', 'B': 'sum'})
Out[165]:
A 0.139279
B 4.838510
dtype: float64
Passing a list-like will generate a DataFrame output. You will get a matrix-like output of all of the aggregators. The
output will consist of all unique functions. Those that are not noted for a particular column will be NaN:
In [166]: tsdf.agg({'A': ['mean', 'min'], 'B': 'sum'})
Out[166]:
A B
mean 0.139279 NaN
min -1.874526 NaN
sum NaN 4.83851
When presented with mixed dtypes that cannot aggregate, .agg will only take the valid aggregations. This is similar
to how groupby .agg works.
In [167]: mdf = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [1., 2., 3.],
.....: 'C': ['foo', 'bar', 'baz'],
.....: 'D': pd.date_range('20130101', periods=3)})
.....:
In [168]: mdf.dtypes
Out[168]:
A int64
B float64
C object
D datetime64[ns]
dtype: object
With .agg() it is possible to easily create a custom describe function, similar to the built-in describe function.
In [170]: from functools import partial
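A sketch of such a custom describe, using partial to give the quantile aggregations readable names:
q_25 = partial(pd.Series.quantile, q=0.25)
q_25.__name__ = '25%'
q_75 = partial(pd.Series.quantile, q=0.75)
q_75.__name__ = '75%'
tsdf.agg(['count', 'mean', 'std', 'min', q_25, 'median', q_75, 'max'])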
In [178]: tsdf
Out[178]:
Transform the entire frame: .transform() accepts a NumPy function, a string function name, or a user-defined
function.
In [179]: tsdf.transform(np.abs)
Out[179]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
In [180]: tsdf.transform('abs')
Out[180]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
In [181]: tsdf.transform(lambda x: x.abs())
Out[181]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
In [182]: np.abs(tsdf)
Out[182]:
A B C
2000-01-01 0.578465 0.503335 0.987140
2000-01-02 0.767147 0.266046 1.083797
2000-01-03 0.195348 0.722247 0.894537
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 0.542165 0.308675
2000-01-09 1.010924 0.672504 1.139222
2000-01-10 0.354653 0.563622 0.365106
Passing a single function to .transform() with a Series will yield a single Series in return.
In [183]: tsdf.A.transform(np.abs)
Out[183]:
2000-01-01 0.578465
2000-01-02 0.767147
2000-01-03 0.195348
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
2000-01-08 0.556397
2000-01-09 1.010924
2000-01-10 0.354653
Freq: D, Name: A, dtype: float64
Passing multiple functions will yield a column multi-indexed DataFrame. The first level will be the original frame
column names; the second level will be the names of the transforming functions.
Passing multiple functions to a Series will yield a DataFrame. The resulting column names will be the transforming
functions.
In [185]: tsdf.A.transform([np.abs, lambda x: x+1])
Out[185]:
absolute <lambda>
2000-01-01 0.578465 0.421535
2000-01-02 0.767147 0.232853
2000-01-03 0.195348 1.195348
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.556397 0.443603
2000-01-09 1.010924 -0.010924
2000-01-10 0.354653 1.354653
Passing a dict of lists will generate a multi-indexed DataFrame with these selective transforms.
In [187]: tsdf.transform({'A': np.abs, 'B': [lambda x: x+1, 'sqrt']})
Out[187]:
A B
absolute <lambda> sqrt
2000-01-01 0.578465 0.496665 NaN
2000-01-02 0.767147 0.733954 NaN
2000-01-03 0.195348 1.722247 0.849851
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.556397 1.542165 0.736318
2000-01-09 1.010924 0.327496 NaN
2000-01-10 0.354653 1.563622 0.750748
Since not all functions can be vectorized (accept NumPy arrays and return another array or value), the methods
applymap() on DataFrame and analogously map() on Series accept any Python function taking a single value and
returning a single value. For example:
In [188]: df4
Out[188]:
one two three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
In [189]: f = lambda x: len(str(x))
In [190]: df4['one'].map(f)
Out[190]:
a 19
b 20
c 18
d 3
Name: one, dtype: int64
In [191]: df4.applymap(f)
Out[191]:
one two three
a 19 18 3
b 20 18 19
c 18 20 18
d 3 19 19
Series.map() has an additional feature; it can be used to easily “link” or “map” values defined by a secondary
series. This is closely related to merging/joining functionality:
In [192]: s = pd.Series(['six', 'seven', 'six', 'seven', 'six'],
   .....:               index=['a', 'b', 'c', 'd', 'e'])
In [193]: t = pd.Series({'six': 6., 'seven': 7.})
In [194]: s
Out[194]:
a six
b seven
c six
d seven
e six
dtype: object
In [195]: s.map(t)
Out[195]:
a 6.0
b 7.0
c 6.0
d 7.0
e 6.0
dtype: float64
Applying with a Panel will pass a Series to the applied function. If the applied function returns a Series, the
result of the application will be a Panel. If the applied function reduces to a scalar, the result of the application will
be a DataFrame.
In [196]: import pandas.util.testing as tm
In [197]: panel = tm.makePanel(5)
In [198]: panel
Out[198]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [199]: panel['ItemA']
Out[199]:
A B C D
2000-01-03 1.092702 0.604244 -2.927808 0.339642
2000-01-04 -1.481449 -0.487265 0.082065 1.499953
2000-01-05 1.781190 1.990533 0.456554 -0.317818
2000-01-06 -0.031543 0.327007 -1.757911 0.447371
2000-01-07 0.480993 1.053639 0.982407 -1.315799
A transformational apply.
In [200]: result = panel.apply(lambda x: x*2, axis='items')
In [201]: result
Out[201]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [202]: result['ItemA']
Out[202]:
A B C D
2000-01-03 2.185405 1.208489 -5.855616 0.679285
2000-01-04 -2.962899 -0.974530 0.164130 2.999905
2000-01-05 3.562379 3.981066 0.913107 -0.635635
2000-01-06 -0.063086 0.654013 -3.515821 0.894742
2000-01-07 0.961986 2.107278 1.964815 -2.631598
A reduction operation.
In [203]: panel.apply(lambda x: x.dtype, axis='items')
Out[203]:
A B C D
2000-01-03 float64 float64 float64 float64
2000-01-04 float64 float64 float64 float64
2000-01-05 float64 float64 float64 float64
2000-01-06 float64 float64 float64 float64
2000-01-07 float64 float64 float64 float64
In [205]: panel.sum('major_axis')
Out[205]:
ItemA ItemB ItemC
A 1.841893 0.918017 -1.160547
B 3.488158 -2.629773 0.603397
C -3.164692 0.805970 0.806501
D 0.653349 -0.152299 0.252577
A transformation operation that returns a Panel, but is computing the z-score across the major_axis.
In [206]: result = panel.apply(lambda x: (x - x.mean()) / x.std(), axis='major_axis')
In [207]: result
Out[207]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [208]: result['ItemA']
Out[208]:
A B C D
2000-01-03 0.585813 -0.102070 -1.394063 0.201263
2000-01-04 -1.496089 -1.295066 0.434343 1.318766
2000-01-05 1.142642 1.413112 0.661833 -0.431942
2000-01-06 -0.323445 -0.405085 -0.683386 0.305017
2000-01-07 0.091079 0.389108 0.981273 -1.393105
Apply can also accept multiple axes in the axis argument. This will pass a DataFrame of the cross-section to the
applied function.
In [211]: result
In [212]: result.loc[:,:,'ItemA']
Out[212]:
A B C D
2000-01-03 0.859304 0.448509 -1.109374 0.397237
2000-01-04 -1.053319 -1.063370 0.986639 1.152266
2000-01-05 1.106511 1.143185 -0.093917 -0.583083
2000-01-06 0.561619 -0.835608 -1.075936 0.194525
2000-01-07 -0.339514 1.097901 0.747522 -1.147605
In [214]: result
Out[214]:
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis)
Items axis: A to D
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: ItemA to ItemC
In [215]: result.loc[:,:,'ItemA']
Out[215]:
A B C D
2000-01-03 0.859304 0.448509 -1.109374 0.397237
2000-01-04 -1.053319 -1.063370 0.986639 1.152266
2000-01-05 1.106511 1.143185 -0.093917 -0.583083
2000-01-06 0.561619 -0.835608 -1.075936 0.194525
2000-01-07 -0.339514 1.097901 0.747522 -1.147605
reindex() is the fundamental data alignment method in pandas. It is used to implement nearly all other features
relying on label-alignment functionality. To reindex means to conform the data to match a given set of labels along a
particular axis. This accomplishes several things:
• Reorders the existing data to match a new set of labels
• Inserts missing value (NA) markers in label locations where no data for that label existed
• If specified, fill data for missing labels using logic (highly relevant to working with time series data)
Here is a simple example:
In [217]: s
Out[217]:
a -0.454087
b -0.360309
c -0.951631
d -0.535459
e 0.835231
dtype: float64
In [218]: s.reindex(['e', 'b', 'f', 'd'])
Out[218]:
e 0.835231
b -0.360309
f NaN
d -0.535459
dtype: float64
Here, the f label was not contained in the Series and hence appears as NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:
In [219]: df
Out[219]:
one two three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
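For example (a sketch using the df above; the label lists are illustrative):
df.reindex(index=['c', 'f', 'b'], columns=['three', 'two', 'one'])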
Note that the Index objects containing the actual axis labels can be shared between objects. So if we have a Series
and a DataFrame, the following can be done:
In [222]: rs = s.reindex(df.index)
In [223]: rs
Out[223]:
a -0.454087
b -0.360309
c -0.951631
d -0.535459
dtype: float64
In [224]: rs.index is df.index
Out[224]: True
This means that the reindexed Series’s index is the same Python object as the DataFrame’s index.
New in version 0.21.0.
DataFrame.reindex() also supports an “axis-style” calling convention, where you specify a single labels
argument and the axis it applies to.
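For example (a sketch; the labels are illustrative):
df.reindex(['c', 'f', 'b'], axis='index')
df.reindex(['three', 'two', 'one'], axis='columns')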
See also:
MultiIndex / Advanced Indexing is an even more concise way of doing reindexing.
Note: When writing performance-sensitive code, there is a good reason to spend some time becoming a reindexing
ninja: many operations are faster on pre-aligned data. Adding two unaligned DataFrames internally triggers a
reindexing step. For exploratory analysis you will hardly notice the difference (because reindex has been heavily
optimized), but when CPU cycles matter sprinkling a few explicit reindex calls here and there can have an impact.
You may wish to take an object and reindex its axes to be labeled the same as another object. While the syntax for this
is straightforward albeit verbose, it is a common enough operation that the reindex_like() method is available
to make this simpler:
In [227]: df2
Out[227]:
one two
a -1.101558 1.124472
b -0.177289 2.487104
c 0.462215 -0.486066
In [228]: df3
Out[228]:
one two
a -0.829347 0.082635
b 0.094922 1.445267
c 0.734426 -1.527903
In [229]: df.reindex_like(df2)
Out[229]:
one two
a -1.101558 1.124472
b -0.177289 2.487104
c 0.462215 -0.486066
The align() method is the fastest way to simultaneously align two objects. It supports a join argument (related to
joining and merging):
• join='outer': take the union of the indexes (default)
• join='left': use the calling object’s index
• join='right': use the passed object’s index
• join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:
In [230]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
In [231]: s1 = s[:4]
In [232]: s2 = s[1:]
In [233]: s1.align(s2)
Out[233]:
(a 0.505453
b 1.788110
c -0.405908
d -0.801912
e NaN
dtype: float64, a NaN
b 1.788110
c -0.405908
d -0.801912
e 0.768460
dtype: float64)
In [234]: s1.align(s2, join='inner')
Out[234]:
(b 1.788110
c -0.405908
d -0.801912
dtype: float64, b 1.788110
c -0.405908
d -0.801912
dtype: float64)
In [235]: s1.align(s2, join='left')
Out[235]:
(a 0.505453
b 1.788110
c -0.405908
d -0.801912
dtype: float64, a NaN
b 1.788110
c -0.405908
d -0.801912
dtype: float64)
For DataFrames, the join method will be applied to both the index and the columns by default:
You can also pass an axis option to only align on the specified axis:
If you pass a Series to DataFrame.align(), you can choose to align both objects either on the DataFrame’s index
or columns using the axis argument:
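A sketch of these variants, using the df and df2 frames from above:
df.align(df2, join='inner')           # align on both index and columns
df.align(df2, join='inner', axis=0)   # align on the index only
df.align(df2.iloc[0], axis=1)         # align a Series to the DataFrame's columns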
reindex() takes an optional parameter method which is a filling method chosen from the following table:
Method Action
pad / ffill Fill values forward
bfill / backfill Fill values backward
nearest Fill from the nearest index value
In [242]: ts
Out[242]:
2000-01-03 0.466284
2000-01-04 -0.457411
2000-01-05 -0.364060
2000-01-06 0.785367
2000-01-07 -1.463093
2000-01-08 1.187315
2000-01-09 -0.493153
2000-01-10 -1.323445
Freq: D, dtype: float64
In [243]: ts2
Out[243]:
2000-01-03 0.466284
2000-01-06 0.785367
2000-01-09 -0.493153
dtype: float64
In [244]: ts2.reindex(ts.index)
Out[244]:
2000-01-03 0.466284
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 0.785367
2000-01-07 NaN
2000-01-08 NaN
2000-01-09 -0.493153
2000-01-10 NaN
Freq: D, dtype: float64
In [245]: ts2.reindex(ts.index, method='ffill')
Out[245]:
2000-01-03 0.466284
2000-01-04 0.466284
2000-01-05 0.466284
2000-01-06 0.785367
2000-01-07 0.785367
2000-01-08 0.785367
2000-01-09 -0.493153
2000-01-10 -0.493153
Freq: D, dtype: float64
In [246]: ts2.reindex(ts.index, method='bfill')
Out[246]:
2000-01-03 0.466284
2000-01-04 0.785367
2000-01-05 0.785367
2000-01-06 0.785367
2000-01-07 -0.493153
2000-01-08 -0.493153
2000-01-09 -0.493153
2000-01-10 NaN
Freq: D, dtype: float64
In [247]: ts2.reindex(ts.index, method='nearest')
Out[247]:
2000-01-03 0.466284
2000-01-04 0.466284
2000-01-05 0.785367
2000-01-06 0.785367
2000-01-07 0.785367
2000-01-08 -0.493153
2000-01-09 -0.493153
2000-01-10 -0.493153
Freq: D, dtype: float64
These methods require that the indexes are ordered increasing or decreasing.
Note that the same result could have been achieved using fillna (except for method='nearest') or interpolate:
In [248]: ts2.reindex(ts.index).fillna(method='ffill')
Out[248]:
2000-01-03 0.466284
2000-01-04 0.466284
2000-01-05 0.466284
2000-01-06 0.785367
2000-01-07 0.785367
2000-01-08 0.785367
2000-01-09 -0.493153
2000-01-10 -0.493153
Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonically increasing or decreasing. fillna() and
interpolate() will not perform any checks on the order of the index.
The limit and tolerance arguments provide additional control over filling while reindexing. Limit specifies the
maximum count of consecutive matches:
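For example (a sketch using ts2 and ts from above):
ts2.reindex(ts.index, method='ffill', limit=1)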
In contrast, tolerance specifies the maximum distance between the index and indexer values:
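For example (a sketch; the tolerance value is illustrative):
ts2.reindex(ts.index, method='ffill', tolerance='1 day')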
Notice that when used on a DatetimeIndex, TimedeltaIndex or PeriodIndex, tolerance will coerced
into a Timedelta if possible. This allows you to specify tolerance with appropriate strings.
A method closely related to reindex is the drop() function. It removes a set of labels from an axis:
In [251]: df
Out[251]:
one two three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
In [252]: df.drop(['a', 'd'], axis=0)
Out[252]:
one two three
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
In [253]: df.drop(['one'], axis=1)
Out[253]:
two three
a 1.124472 NaN
b 2.487104 -0.634293
c -0.486066 1.931194
d -0.456288 -1.222918
Note that the following also works, but is a bit less obvious / clean:
In [254]: df.reindex(df.index.difference(['a', 'd']))
Out[254]:
one two three
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.
In [255]: s
Out[255]:
a 0.505453
b 1.788110
c -0.405908
d -0.801912
e 0.768460
dtype: float64
In [256]: s.rename(str.upper)
Out[256]:
A 0.505453
B 1.788110
C -0.405908
D -0.801912
E 0.768460
dtype: float64
If you pass a function, it must return a value when called with any of the labels (and must produce a set of unique
values). A dict or Series can also be used:
In [257]: df.rename(columns={'one': 'foo', 'two': 'bar'},
.....: index={'a': 'apple', 'b': 'banana', 'd': 'durian'})
.....:
Out[257]:
foo bar three
apple -1.101558 1.124472 NaN
banana -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
durian NaN -0.456288 -1.222918
If the mapping doesn’t include a column/index label, it isn’t renamed. Note that extra labels in the mapping don’t
throw an error.
New in version 0.21.0.
DataFrame.rename() also supports an “axis-style” calling convention, where you specify a single mapper and
the axis to apply that mapping to.
In [258]: df.rename({'one': 'foo', 'two': 'bar'}, axis='columns')
Out[258]:
foo bar three
a -1.101558 1.124472 NaN
b -0.177289 2.487104 -0.634293
c 0.462215 -0.486066 1.931194
d NaN -0.456288 -1.222918
The rename() method also provides an inplace named parameter that is by default False and copies the under-
lying data. Pass inplace=True to rename the data in place.
New in version 0.18.0.
Finally, rename() also accepts a scalar or list-like for altering the Series.name attribute.
In [260]: s.rename("scalar-name")
Out[260]:
a 0.505453
b 1.788110
c -0.405908
d -0.801912
e 0.768460
Name: scalar-name, dtype: float64
The Panel class has a related rename_axis() method which can rename any of its three axes.
9.8 Iteration
The behavior of basic iteration over pandas objects depends on the type. When iterating over a Series, it is regarded
as array-like, and basic iteration produces the values. Other data structures, like DataFrame and Panel, follow the
dict-like convention of iterating over the “keys” of the objects.
In short, basic iteration (for i in object) produces:
• Series: values
• DataFrame: column labels
• Panel: item labels
Thus, for example, iterating over a DataFrame gives you the column names:
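For example (a sketch; the column names are illustrative):
df = pd.DataFrame({'col1': np.random.randn(3), 'col2': np.random.randn(3)},
                  index=['a', 'b', 'c'])
for col in df:
    print(col)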
Pandas objects also have the dict-like iteritems() method to iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
• iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs. This converts the rows to Series
objects, which can change the dtypes and has some performance implications.
• itertuples(): Iterate over the rows of a DataFrame as namedtuples of the values. This is a lot faster than
iterrows(), and is in most cases preferable to use to iterate over the values of a DataFrame.
Warning: Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is
not needed and can be avoided with one of the following approaches:
• Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, ...
• When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply()
instead of iterating over the values. See the docs on function application.
• If you need to do iterative manipulations on the values but performance is important, consider writing the in-
ner loop with cython or numba. See the enhancing performance section for some examples of this approach.
Warning: You should never modify something you are iterating over. This is not guaranteed to work in all cases.
Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!
For example, in the following case setting the value has no effect:
In [263]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
In [264]: for index, row in df.iterrows():
   .....:     row['a'] = 10
   .....:
In [265]: df
Out[265]:
a b
0 1 a
1 2 b
2 3 c
9.8.1 iteritems
Consistent with the dict-like interface, iteritems() iterates through key-value pairs:
• Series: (index, scalar value) pairs
• DataFrame: (column, Series) pairs
• Panel: (item, DataFrame) pairs
For example:
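For instance, iterating over the columns of a DataFrame (a sketch):
for label, ser in df.iteritems():
    print(label)
    print(ser)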
9.8.2 iterrows
iterrows() allows you to iterate through the rows of a DataFrame as Series objects. It returns an iterator yielding
each index value along with a Series containing the data in each row:
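For example (a sketch):
for row_index, row in df.iterrows():
    print(row_index, row, sep='\n')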
Note: Because iterrows() returns a Series for each row, it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
In [268]: df_orig = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
In [269]: df_orig.dtypes
Out[269]:
int int64
float float64
dtype: object
In [270]: row = next(df_orig.iterrows())[1]
In [271]: row
Out[271]:
int 1.0
float 1.5
Name: 0, dtype: float64
All values in row, returned as a Series, are now upcast to floats, including the original integer value in column 'int':
In [272]: row['int'].dtype
Out[272]: dtype('float64')
In [273]: df_orig['int'].dtype
Out[273]: dtype('int64')
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the
values and which is generally much faster than iterrows().
For example, a contrived way to transpose the DataFrame would be to build a dict from iterrows():
In [274]: df2 = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
In [275]: print(df2)
x y
0 1 4
1 2 5
2 3 6
In [276]: print(df2.T)
0 1 2
x 1 2 3
y 4 5 6
In [277]: df2_t = pd.DataFrame(dict((idx, values) for idx, values in df2.iterrows()))
In [278]: print(df2_t)
0 1 2
x 1 2 3
y 4 5 6
9.8.3 itertuples
The itertuples() method will return an iterator yielding a namedtuple for each row in the DataFrame. The first
element of the tuple will be the row’s corresponding index value, while the remaining values are the row values.
For instance:
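For example, using the small df2 frame from above (a sketch):
for row in df2.itertuples():
    print(row)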
This method does not convert the row to a Series object; it merely returns the values inside a namedtuple. Therefore,
itertuples() preserves the data type of the values and is generally faster than iterrows().
Note: The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start
with an underscore. With a large number of columns (>255), regular tuples are returned.
Series has an accessor to succinctly return datetime like properties for the values of the Series, if it is a date-
time/period like Series. This will return a Series, indexed like the existing Series.
# datetime
In [280]: s = pd.Series(pd.date_range('20130101 09:10:12', periods=4))
In [281]: s
Out[281]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]
In [282]: s.dt.hour
Out[282]:
0 9
1 9
2 9
3 9
dtype: int64
In [283]: s.dt.second
Out[283]:
0 12
1 12
2 12
3 12
dtype: int64
In [284]: s.dt.day
Out[284]:
0 1
1 2
2 3
3 4
dtype: int64
In [285]: s[s.dt.day==2]
Out[285]:
1 2013-01-02 09:10:12
dtype: datetime64[ns]
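Time zone aware transformations work the same way; the stz shown below could come from a localization call such as (a sketch):
stz = s.dt.tz_localize('US/Eastern')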
In [287]: stz
Out[287]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
dtype: datetime64[ns, US/Eastern]
In [288]: stz.dt.tz
Out[288]: <DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
In [289]: s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[289]:
0 2013-01-01 04:10:12-05:00
1 2013-01-02 04:10:12-05:00
2 2013-01-03 04:10:12-05:00
3 2013-01-04 04:10:12-05:00
dtype: datetime64[ns, US/Eastern]
You can also format datetime values as strings with Series.dt.strftime() which supports the same format as
the standard strftime().
# DatetimeIndex
In [290]: s = pd.Series(pd.date_range('20130101', periods=4))
In [291]: s
Out[291]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]
In [292]: s.dt.strftime('%Y/%m/%d')
Out[292]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# PeriodIndex
In [293]: s = pd.Series(pd.period_range('20130101', periods=4))
In [294]: s
Out[294]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [295]: s.dt.strftime('%Y/%m/%d')
Out[295]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# period
In [296]: s = pd.Series(pd.period_range('20130101', periods=4, freq='D'))
In [297]: s
Out[297]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: object
In [298]: s.dt.year
Out[298]:
0 2013
1 2013
2 2013
3 2013
dtype: int64
In [299]: s.dt.day
Out[299]:
0 1
1 2
2 3
3 4
dtype: int64
# timedelta
In [300]: s = pd.Series(pd.timedelta_range('1 day 00:00:05', periods=4, freq='s'))
In [301]: s
Out[301]:
0 1 days 00:00:05
1 1 days 00:00:06
2 1 days 00:00:07
3 1 days 00:00:08
dtype: timedelta64[ns]
In [302]: s.dt.days
Out[302]:
0 1
1 1
2 1
3 1
dtype: int64
(continues on next page)
In [303]: s.dt.seconds
Out[303]:
0 5
1 6
2 7
3 8
dtype: int64
In [304]: s.dt.components
Out[304]:
   days  hours  minutes  seconds  milliseconds  microseconds  nanoseconds
0     1      0        0        5             0             0            0
1     1      0        0        6             0             0            0
2     1      0        0        7             0             0            0
3     1      0        0        8             0             0            0
Note: Series.dt will raise a TypeError if you access it with non-datetime-like values.
9.10 Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array.
Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the Series’s
str attribute and generally have names matching the equivalent (scalar) built-in string methods. For example:
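The Series operated on below is assumed to be the usual mixed-case example with a missing value (a sketch consistent with the output):
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])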
In [306]: s.str.lower()
Out[306]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
Powerful pattern-matching methods are provided as well, but note that pattern-matching generally uses regular expres-
sions by default (and in some cases always uses them).
Please see Vectorized String Methods for a complete description.
9.11 Sorting
Pandas supports three kinds of sorting: sorting by index labels, sorting by column values, and sorting by a combination
of both.
9.11.1 By Index
The Series.sort_index() and DataFrame.sort_index() methods are used to sort a pandas object by its
index levels.
In [307]: df = pd.DataFrame({'one': pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
   .....:                    'two': pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
   .....:                    'three': pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
   .....:
In [308]: unsorted_df = df.reindex(index=['a', 'd', 'c', 'b'],
   .....:                          columns=['three', 'two', 'one'])
In [309]: unsorted_df
Out[309]:
three two one
a NaN 0.708543 0.036274
d -0.540166 0.586626 NaN
c 0.410238 1.121731 1.044630
b -0.282532 -2.038777 -0.490032
# DataFrame
In [310]: unsorted_df.sort_index()
In [311]: unsorted_df.sort_index(ascending=False)
In [312]: unsorted_df.sort_index(axis=1)
# Series
In [313]: unsorted_df['three'].sort_index()
Out[313]:
a NaN
b -0.282532
c 0.410238
d -0.540166
Name: three, dtype: float64
9.11.2 By Values
The Series.sort_values() method is used to sort a Series by its values. The DataFrame.sort_values()
method is used to sort a DataFrame by its column or row values. The optional by parameter to DataFrame.
sort_values() may be used to specify one or more columns to use to determine the sorted order.
In [314]: df1 = pd.DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]})
In [315]: df1.sort_values(by='two')
Out[315]:
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2
These methods have special treatment of NA values via the na_position argument:
In [317]: s[2] = np.nan
In [318]: s.sort_values()
Out[318]:
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
2 NaN
5 NaN
dtype: object

Passing na_position='first' places the missing values at the beginning of the result instead:
2 NaN
5 NaN
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
dtype: object
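9.11.3 By Indexes and Values
Strings passed as the by parameter may also refer to index level names (new in 0.23.0). The MultiIndex used below could be built along these lines (a sketch whose tuples and names are inferred from the frame shown):
idx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('a', 2),
                                 ('b', 2), ('b', 1), ('b', 1)],
                                names=['first', 'second'])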
# Build DataFrame
In [322]: df_multi = pd.DataFrame({'A': np.arange(6, 0, -1)},
.....: index=idx)
.....:
In [323]: df_multi
Out[323]:
A
first second
a 1 6
2 5
2 4
b 2 3
1 2
1 1
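Sorting can then mix index levels and column values, e.g. by the 'second' level and then by column 'A' (a sketch):
df_multi.sort_values(by=['second', 'A'])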
Note: If a string matches both a column name and an index level name then a warning is issued and the column takes
precedence. This will result in an ambiguity error in a future version.
9.11.4 searchsorted
Series has the searchsorted() method, which works similarly to numpy.ndarray.searchsorted().
9.11.5 smallest / largest values
Series has the nsmallest() and nlargest() methods which return the smallest or largest n values. For a
large Series this can be much faster than sorting the entire Series and calling head(n) on the result.
In [332]: s = pd.Series(np.random.permutation(10))
In [333]: s
Out[333]:
0 8
1 2
2 9
3 5
4 6
5 0
6 1
7 7
8 4
9 3
dtype: int64
In [334]: s.sort_values()
Out[334]:
5 0
6 1
In [335]: s.nsmallest(3)
Out[335]:
5 0
6 1
1 2
dtype: int64
In [336]: s.nlargest(3)
Out[336]:
2 9
0 8
7 7
dtype: int64

DataFrame also has the nlargest() and nsmallest() methods, which accept one or more columns:
a b c
0 -2 a 1.0
1 -1 b 2.0
6 -1 f 4.0
a b c
0 -2 a 1.0
2 1 d 4.0
4 8 e NaN
1 -1 b 2.0
6 -1 f 4.0
You must be explicit about sorting when the column is a multi-index, and fully specify all levels to by.
In [343]: df1.sort_values(by=('a','two'))
Out[343]:
a b
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2
9.12 Copying
The copy() method on pandas objects copies the underlying data (though not the axis indexes, since they are im-
mutable) and returns a new object. Note that it is seldom necessary to copy objects. For example, there are only a
handful of ways to alter a DataFrame in-place:
• Inserting, deleting, or modifying a column.
• Assigning to the index or columns attributes.
• For homogeneous data, directly modifying the values via the values attribute or advanced indexing.
To be clear, no pandas method has the side effect of modifying your data; almost every method returns a new object,
leaving the original object untouched. If the data is modified, it is because you did so explicitly.
9.13 dtypes
The main types stored in pandas objects are float, int, bool, datetime64[ns] and datetime64[ns,
tz], timedelta[ns], category and object. In addition these dtypes have item sizes, e.g. int64 and
int32. See Series with TZ for more detail on datetime64[ns, tz] dtypes.
A convenient dtypes attribute for DataFrame returns a Series with the data type of each column.
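The dft frame displayed below could be constructed along these lines (a sketch consistent with the dtypes shown; the float values themselves are random):
dft = pd.DataFrame(dict(A=np.random.rand(3),
                        B=1,
                        C='foo',
                        D=pd.Timestamp('20010102'),
                        E=pd.Series([1.0] * 3).astype('float32'),
                        F=False,
                        G=pd.Series([1] * 3, dtype='int8')))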
In [345]: dft
Out[345]:
A B C D E F G
0 0.809585 1 foo 2001-01-02 1.0 False 1
1 0.128238 1 foo 2001-01-02 1.0 False 1
2 0.775752 1 foo 2001-01-02 1.0 False 1
In [346]: dft.dtypes
Out[346]:
A float64
B int64
C object
D datetime64[ns]
E float32
F bool
G int8
dtype: object
In [347]: dft['A'].dtype
Out[347]: dtype('float64')
If a pandas object contains data with multiple dtypes in a single column, the dtype of the column will be chosen to
accommodate all of the data types (object is the most general). For example, a Series built from a mix of integers, a float, and a string (e.g. pd.Series([1, 2, 3, 6., 'foo'])) ends up as object:
0 1
1 2
2 3
3 6
4 foo
dtype: object
The number of columns of each type in a DataFrame can be found by calling get_dtype_counts().
In [350]: dft.get_dtype_counts()
Out[350]:
float64 1
float32 1
int64 1
int8 1
datetime64[ns] 1
bool 1
object 1
dtype: int64
Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the dtype
keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations. Furthermore,
different numeric dtypes will NOT be combined. The following example will give you a taste.
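The float32 frame df1 below could be created by passing the dtype directly (a sketch; the values are random):
df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32')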
In [352]: df1
Out[352]:
A
0 0.890400
1 0.283331
2 -0.303613
3 -1.192210
4 0.065420
5 0.455918
6 2.008328
7 0.188942
In [353]: df1.dtypes
Out[353]:
A float32
dtype: object
In [354]: df2 = pd.DataFrame(dict(A=pd.Series(np.random.randn(8), dtype='float16'),
   .....:                         B=pd.Series(np.random.randn(8)),
   .....:                         C=pd.Series(np.array(np.random.randn(8), dtype='uint8'))))
   .....:
In [355]: df2
Out[355]:
A B C
0 -0.454346 0.200071 255
1 -0.916504 -0.557756 255
2 0.640625 -0.141988 0
3 2.675781 -0.174060 0
4 -0.007866 0.258626 0
5 -0.204224 0.941688 0
6 -0.100098 -1.849045 0
7 -0.402100 -0.949458 0
In [356]: df2.dtypes
Out[356]:
A float16
B float64
C uint8
dtype: object
9.13.1 defaults
By default integer types are int64 and float types are float64, regardless of platform (32-bit or 64-bit). The
following will all result in int64 dtypes.
In [357]: pd.DataFrame([1, 2], columns=['a']).dtypes
Out[357]:
a int64
dtype: object
Note that Numpy will choose platform-dependent types when creating arrays. The following WILL result in int32
on a 32-bit platform.
In [360]: frame = pd.DataFrame(np.array([1, 2]))
9.13.2 upcasting
Types can potentially be upcasted when combined with other types, meaning they are promoted from the current type
(e.g. int to float).
In [361]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
In [362]: df3
Out[362]:
A B C
0 0.436054 0.200071 255.0
1 -0.633173 -0.557756 255.0
2 0.337012 -0.141988 0.0
3 1.483571 -0.174060 0.0
4 0.057555 0.258626 0.0
5 0.251695 0.941688 0.0
6 1.908231 -1.849045 0.0
7 -0.213158 -0.949458 0.0
In [363]: df3.dtypes
Out[363]:
A float32
B float64
C float64
dtype: object
The values attribute on a DataFrame returns the lower-common-denominator of the dtypes, meaning the dtype that
can accommodate ALL of the types in the resulting homogeneous dtyped NumPy array. This can force some upcast-
ing.
In [364]: df3.values.dtype
Out[364]: dtype('float64')
9.13.3 astype
You can use the astype() method to explicitly convert dtypes from one to another. These will by default return a
copy, even if the dtype was unchanged (pass copy=False to change this behavior). In addition, they will raise an
exception if the astype operation is invalid.
Upcasting is always according to the numpy rules. If two different dtypes are involved in an operation, then the more
general one will be used as the result of the operation.
In [365]: df3
Out[365]:
A B C
0 0.436054 0.200071 255.0
1 -0.633173 -0.557756 255.0
2 0.337012 -0.141988 0.0
3 1.483571 -0.174060 0.0
4 0.057555 0.258626 0.0
5 0.251695 0.941688 0.0
6 1.908231 -1.849045 0.0
7 -0.213158 -0.949458 0.0
In [366]: df3.dtypes
Out[366]:
A float32
B float64
C float64
dtype: object
# conversion of dtypes
In [367]: df3.astype('float32').dtypes
Out[367]:
A float32
B float32
C float32
dtype: object
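A subset of columns can be converted by assigning the result of astype() back to a selection; the dft shown next could have been prepared along these lines (a sketch):
dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
dft[['a', 'b']] = dft[['a', 'b']].astype(np.uint8)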
In [370]: dft
In [371]: dft.dtypes
Out[371]:
a uint8
b uint8
c int64
dtype: object
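Certain columns can be converted to a specific dtype by passing a dict to astype(); dft1 below could be built that way (a sketch):
dft1 = pd.DataFrame({'a': [1, 0, 1], 'b': [4, 5, 6], 'c': [7, 8, 9]})
dft1 = dft1.astype({'a': np.bool_, 'b': np.int64, 'c': np.float64})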
In [374]: dft1
Out[374]:
a b c
0 True 4 7.0
1 False 5 8.0
2 True 6 9.0
In [375]: dft1.dtypes
Out[375]:
a bool
b int64
c float64
dtype: object
Note: When trying to convert a subset of columns to a specified type using astype() and loc(), upcasting occurs.
loc() tries to fit in what we are assigning to the current dtypes, while [] will overwrite them taking the dtype from
the right hand side. Therefore the following piece of code produces the unintended result.
In [379]: dft.dtypes
Out[379]:
a int64
b int64
c int64
pandas offers various functions to try to force conversion of types from the object dtype to other types. In cases
where the data is already of the correct type, but stored in an object array, the DataFrame.infer_objects()
and Series.infer_objects() methods can be used to soft convert to the correct type.
In [380]: import datetime
In [381]: df = pd.DataFrame([[1, 2],
   .....:                    ['a', 'b'],
   .....:                    [datetime.datetime(2016, 3, 2), datetime.datetime(2016, 3, 2)]])
   .....:
In [382]: df = df.T
In [383]: df
Out[383]:
0 1 2
0 1 a 2016-03-02 00:00:00
1 2 b 2016-03-02 00:00:00
In [384]: df.dtypes
Out[384]:
0 object
1 object
2 object
dtype: object
Because the data was transposed the original inference stored all columns as object, which infer_objects will
correct.
In [385]: df.infer_objects().dtypes
Out[385]:
0 int64
1 object
2 datetime64[ns]
dtype: object
The following functions are available for one dimensional object arrays or scalars to perform hard conversion of objects
to a specified type:
• to_numeric() (conversion to numeric dtypes)
In [386]: m = ['1.1', 2, 3]
In [387]: pd.to_numeric(m)
Out[387]: array([ 1.1, 2. , 3. ])
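• to_datetime() (conversion to datetime objects)
The list converted below would be something like (a sketch with a string and a datetime object):
m = ['2016-07-09', datetime.datetime(2016, 3, 2)]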
In [390]: pd.to_datetime(m)
Out[390]: DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]', freq=None)
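• to_timedelta() (conversion to timedelta objects)
Here the input list would be something like (a sketch):
m = ['5us', pd.Timedelta('1day')]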
In [392]: pd.to_timedelta(m)
Out[392]: TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)
To force a conversion, we can pass in an errors argument, which specifies how pandas should deal with elements
that cannot be converted to desired dtype or object. By default, errors='raise', meaning that any errors encoun-
tered will be raised during the conversion process. However, if errors='coerce', these errors will be ignored
and pandas will convert problematic elements to pd.NaT (for datetime and timedelta) or np.nan (for numeric).
This might be useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but
occasionally has non-conforming elements intermixed that you want to represent as missing:
In [393]: import datetime
In [396]: m = ['apple', 2, 3]
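A minimal sketch of the coerce behavior on such data (prompt numbers omitted; to_datetime and to_timedelta behave analogously, producing NaT):
pd.to_numeric(m, errors='coerce')
# array([nan,  2.,  3.])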
The errors parameter has a third option of errors='ignore', which will simply return the passed in data if it
encounters any errors with the conversion to a desired data type:
In [400]: import datetime
In [403]: m = ['apple', 2, 3]
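With errors='ignore' the original input comes back untouched whenever conversion fails (a sketch):
pd.to_numeric(m, errors='ignore')
# array(['apple', 2, 3], dtype=object)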
In addition to object conversion, to_numeric() provides another argument downcast, which gives the option of
downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
In [407]: m = ['1', 2, 3]
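For example, downcasting the small integers above (a sketch):
pd.to_numeric(m, downcast='integer')   # smallest signed int dtype, e.g. int8
pd.to_numeric(m, downcast='float')     # smallest float dtype, e.g. float32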
As these methods apply only to one-dimensional arrays, lists, or scalars, they cannot be used directly on multi-
dimensional objects such as DataFrames. However, with apply(), we can “apply” the function over each column
efficiently:
In [414]: df
Out[414]:
0 1
0 2016-07-09 2016-03-02 00:00:00
1 2016-07-09 2016-03-02 00:00:00
In [415]: df.apply(pd.to_datetime)
Out[415]:
0 1
0 2016-07-09 2016-03-02
1 2016-07-09 2016-03-02
In [417]: df
Out[417]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [418]: df.apply(pd.to_numeric)
Out[418]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [420]: df
Out[420]:
0 1
0 5us 1 days 00:00:00
1 5us 1 days 00:00:00
In [421]: df.apply(pd.to_timedelta)
Out[421]:
0 1
0 00:00:00.000005 1 days
1 00:00:00.000005 1 days
9.13.5 gotchas
Performing selection operations on integer type data can easily upcast the data to floating. The dtype of the
input data will be preserved in cases where nans are not introduced. See also Support for integer NA.
In [422]: dfi = df3.astype('int32')
In [423]: dfi['E'] = 1
In [424]: dfi
Out[424]:
A B C E
0 0 0 255 1
1 0 0 255 1
2 0 0 0 1
3 1 0 0 1
4 0 0 0 1
5 0 0 0 1
6 1 -1 0 1
7 0 0 0 1
In [425]: dfi.dtypes
Out[425]:
A int32
B int32
C int32
E int64
dtype: object

Selecting only the positive values (e.g. casted = dfi[dfi > 0]) introduces NaN and upcasts the integer columns to float:
In [427]: casted
Out[427]:
A B C E
0 NaN NaN 255.0 1
1 NaN NaN 255.0 1
2 NaN NaN NaN 1
3 1.0 NaN NaN 1
4 NaN NaN NaN 1
5 NaN NaN NaN 1
6 1.0 NaN NaN 1
7 NaN NaN NaN 1
In [428]: casted.dtypes
Out[428]:
A float64
B float64
C float64
E int64
dtype: object
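Float dtypes, by contrast, are unchanged by such selections. The dfa used below could come from (a sketch):
dfa = df3.copy()
dfa['A'] = dfa['A'].astype('float32')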
In [431]: dfa.dtypes
Out[431]:
A float32
B float64
C float64
dtype: object

Selecting with a boolean mask again (e.g. casted = dfa[df2 > 0]) keeps the float dtypes:
In [433]: casted
Out[433]:
A B C
0 NaN 0.200071 255.0
1 NaN NaN 255.0
2 0.337012 NaN NaN
3 1.483571 NaN NaN
4 NaN 0.258626 NaN
5 NaN 0.941688 NaN
6 NaN NaN NaN
7 NaN NaN NaN
In [434]: casted.dtypes
Out[434]:
A float32
B float64
C float64
dtype: object
9.14 Selecting columns based on dtype
The select_dtypes() method implements subsetting of columns based on their dtype. Consider a DataFrame with a slew of different dtypes:
In [440]: df
Out[440]:
  string  int64  uint8  float64  bool1  bool2                      dates category tdeltas  uint64 other_dates             tz_aware_dates
0      a      1      3      4.0   True  False 2018-08-05 11:57:39.507525        A     NaT       3  2013-01-01  2013-01-01 00:00:00-05:00
1      b      2      4      5.0  False   True 2018-08-06 11:57:39.507525        B  1 days       4  2013-01-02  2013-01-02 00:00:00-05:00
2      c      3      5      6.0   True  False 2018-08-07 11:57:39.507525        C  1 days       5  2013-01-03  2013-01-03 00:00:00-05:00
In [441]: df.dtypes
Out[441]:
string object
int64 int64
uint8 uint8
float64 float64
bool1 bool
bool2 bool
dates datetime64[ns]
category category
tdeltas timedelta64[ns]
uint64 uint64
other_dates datetime64[ns]
tz_aware_dates datetime64[ns, US/Eastern]
dtype: object
select_dtypes() has two parameters include and exclude that allow you to say “give me the columns with
these dtypes” (include) and/or “give me the columns without these dtypes” (exclude).
For example, to select bool columns:
In [442]: df.select_dtypes(include=[bool])
Out[442]:
bool1 bool2
0 True False
1 False True
2 True False
You can also pass the name of a dtype in the NumPy dtype hierarchy:
In [443]: df.select_dtypes(include=['bool'])
Out[443]:
bool1 bool2
0 True False
1 False True
2 True False
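include and exclude can also be combined; for example, to select all numeric and boolean columns while excluding unsigned integers (a sketch):
df.select_dtypes(include=['number', 'bool'], exclude=['unsignedinteger'])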
In [445]: df.select_dtypes(include=['object'])
Out[445]:
string
0 a
1 b
2 c
To see all the child dtypes of a generic dtype like numpy.number you can define a function that returns a tree of
child dtypes:
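One possible implementation of such a function (a sketch consistent with the output shown below) recurses through each dtype's __subclasses__():
def subdtypes(dtype):
    # return the dtype itself if it has no subclasses, otherwise a nested list of children
    subs = dtype.__subclasses__()
    if not subs:
        return dtype
    return [dtype, [subdtypes(dt) for dt in subs]]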
In [447]: subdtypes(np.generic)
Out[447]:
[numpy.generic,
[[numpy.number,
[[numpy.integer,
[[numpy.signedinteger,
[numpy.int8,
numpy.int16,
numpy.int32,
numpy.int64,
numpy.int64,
numpy.timedelta64]],
[numpy.unsignedinteger,
[numpy.uint8,
numpy.uint16,
numpy.uint32,
numpy.uint64,
Note: Pandas also defines the types category, and datetime64[ns, tz], which are not integrated into the
normal NumPy hierarchy and won’t show up with the above function.
TEN: Working with Text Data
Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of
the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via
the str attribute and generally have names matching the equivalent (scalar) built-in string methods:
In [1]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [2]: s.str.lower()
Out[2]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
In [3]: s.str.upper()
Out[3]:
0 A
1 B
2 C
3 AABA
4 BACA
5 NaN
6 CABA
7 DOG
8 CAT
dtype: object
In [4]: s.str.len()
Out[4]:
0 1.0
1 1.0
2 1.0
3 4.0
4 4.0
5 NaN
6 4.0
7 3.0
8 3.0
dtype: float64
In [5]: idx = pd.Index([' jack', 'jill ', ' jesse ', 'frank'])
In [6]: idx.str.strip()
Out[6]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [7]: idx.str.lstrip()
Out[7]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [8]: idx.str.rstrip()
Out[8]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance,
you may have columns with leading or trailing whitespace:
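The frame below could be built with padded column names, e.g. (a sketch; the values are random):
df = pd.DataFrame(np.random.randn(3, 2), columns=[' Column A ', ' Column B '], index=range(3))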
In [10]: df
Out[10]:
Column A Column B
0 -1.425575 -1.336299
1 0.740933 1.032121
2 -1.585660 0.913812
In [11]: df.columns.str.strip()
Out[11]: Index(['Column A', 'Column B'], dtype='object')
In [12]: df.columns.str.lower()
Out[12]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing
whitespaces, lowercasing all names, and replacing any remaining whitespaces with underscores:
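The cleanup described could be done in a single chained assignment along these lines (a sketch):
df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_')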
In [14]: df
Out[14]:
column_a column_b
0 -1.425575 -1.336299
1 0.740933 1.032121
2 -1.585660 0.913812
Note: If you have a Series where lots of elements are repeated (i.e. the number of unique elements in the
Series is a lot smaller than the length of the Series), it can be faster to convert the original Series to one of
type category and then use .str.<method> or .dt.<property> on that. The performance difference comes
from the fact that, for Series of type category, the string operations are done on the .categories and not on each element of the Series.
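10.1 Splitting and Replacing Strings
Methods like split() return a Series of lists. The s2 used below is presumably a Series of underscore-separated strings, e.g. (a sketch):
s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'])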
In [16]: s2.str.split('_')
Out[16]:
0 [a, b, c]
1 [c, d, e]
2 NaN
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get() or [] notation:
In [18]: s2.str.split('_').str[1]
Out[18]:
0 b
1 d
2 NaN
3 g
dtype: object
rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning
of the string:
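For example, splitting off only the last component (a sketch, using the s2 from above):
s2.str.rsplit('_', n=1)    # e.g. [a_b, c], [c_d, e], NaN, [f_g, h]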
replace() by default interprets its pattern as a regular expression. Consider a Series such as s3 below (e.g. pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', '', np.nan, 'CABA', 'dog', 'cat']), consistent with the display):
In [23]: s3
Out[23]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 NaN
7 CABA
8 dog
9 cat
dtype: object
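The values below come from replacing with a regular expression pattern, e.g. something like s3.str.replace('^.a|dog', 'XX-XX ', case=False) (a sketch consistent with the result):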
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 NaN
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: object
Some caution must be taken to keep regular expressions in mind! For example, the following code will cause trouble
because of the regular expression meaning of $:
0 12
1 -10
2 $10,000
dtype: object
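Assuming the Series above is called dollars, the unescaped pattern matches the regex end-of-string anchor rather than the literal character; escaping it gives the intended literal replacement (a sketch):
dollars.str.replace('$', 'dollar')     # '$' is a regex anchor here, so the result is not what you expect
dollars.str.replace(r'\$', 'dollar')   # escape the character to replace the literal '$'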
In [37]: import re
Including a flags argument when calling replace with a compiled regular expression object will raise a
ValueError.
10.2 Concatenation
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(), resp.
Index.str.cat.
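10.2.1 Concatenating a single Series into a string
The content of a Series (or Index) can be concatenated into a single string; the s used below would be something like (a sketch):
s = pd.Series(['a', 'b', 'c', 'd'])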
In [42]: s.str.cat(sep=',')
Out[42]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [43]: s.str.cat()
Out[43]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
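The t used below is assumed to be a similar Series with a missing value (a sketch):
t = pd.Series(['a', 'b', np.nan, 'd'])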
In [45]: t.str.cat(sep=',')
Out[45]: 'a,b,d'
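Passing na_rep includes a placeholder for the missing value instead (a sketch):
t.str.cat(sep=',', na_rep='-')    # 'a,b,-,d'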
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or
Index).
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [48]: s.str.cat(t)
Out[48]:
0 aa
1 bb
2 NaN
3 dd
dtype: object
The parameter others can also be two-dimensional. In this case, the number of rows must match the lengths of the
calling Series (or Index).
In [50]: d = pd.concat([t, s], axis=1)
In [51]: s
Out[51]:
0 a
1 b
2 c
3 d
dtype: object
In [52]: d
Out[52]:
0 1
0 a a
1 b b
2 NaN c
3 d d

Concatenating s with this two-column frame, filling missing values via na_rep (e.g. s.str.cat(d, na_rep='-')), gives:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: object
10.2.4 Concatenating a Series and an indexed object into a Series, with alignment
In [55]: s
Out[55]:
0 a
1 b
2 c
3 d
dtype: object
In [56]: u
Out[56]:
1 b
3 d
0 a
2 c
dtype: object
In [57]: s.str.cat(u)
Out[57]:
0 aa
1 bb
2 cc
3 dd
dtype: object
Warning: If the join keyword is not passed, the method cat() will currently fall back to the behavior before
version 0.23.0 (i.e. no alignment), but a FutureWarning will be raised if any of the involved indexes differ,
since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right'). In particular,
alignment also means that the different lengths do not need to coincide anymore.
In [60]: s
Out[60]:
0 a
1 b
2 c
3 d
dtype: object
In [61]: v
Out[61]:
-1 z
0 a
1 b
3 d
4 e
dtype: object
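Aligning on the index of s with join='left' and a na_rep placeholder (e.g. s.str.cat(v, join='left', na_rep='-'), a sketch consistent with the result below) gives: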
0 aa
1 bb
2 c-
3 dd
dtype: object
With join='outer', the union of the two indexes is used instead (again with na_rep='-'):
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: object
In [65]: s
Out[65]:
0 a
1 b
2 c
3 d
dtype: object
In [66]: f
Out[66]:
0 1
3 d d
2 NaN c
1 b b
0 a a
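The same alignment can be used when others is a DataFrame (e.g. s.str.cat(f, join='left', na_rep='-')), matching on the index rather than on position: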
0 aaa
1 bbb
2 c-c
3 ddd
dtype: object
All one-dimensional list-likes can be arbitrarily combined in a list-like container (including iterators, dict-views,
etc.):
In [68]: s
Out[68]:
0 a
1 b
2 c
3 d
dtype: object
In [69]: u
Out[69]:
1 b
3 d
0 a
2 c
dtype: object
Several one-dimensional list-likes can be combined in a single call, producing for example:
0 abbA1
1 bddB3
2 caaC0
3 dccD2
dtype: object
All elements must match in length to the calling Series (or Index), except those having an index if join is not
None:
In [71]: v
Out[71]:
-1 z
0 a
1 b
3 d
4 e
dtype: object
If using join='right' on a list of others that contains different indexes, the union of these indexes will be used
as the basis for the final concatenation:
In [73]: u.loc[[3]]
Out[73]:
3 d
dtype: object
10.3 Indexing with .str
You can use [] notation to directly index by position locations. If you index past the end of the string, the result will
be a NaN.
In [76]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan,
....: 'CABA', 'dog', 'cat'])
....:
In [77]: s.str[0]
Out[77]:
0 A
1 B
2 C
3 A
4 B
5 NaN
6 C
7 d
8 c
dtype: object
In [78]: s.str[1]
Out[78]:
0 NaN
1 NaN
2 NaN
3 a
4 a
5 NaN
6 A
7 o
8 a
dtype: object
10.4 Extracting Substrings
10.4.1 Extract first match in each subject (extract)
Warning: In version 0.18.0, extract gained the expand argument. When expand=False it returns a
Series, Index, or DataFrame, depending on the subject and regular expression pattern (same behavior as
pre-0.18.0). When expand=True it always returns a DataFrame, which is more consistent and less confusing
from the perspective of a user. expand=True is the default since version 0.23.0.
The extract method accepts a regular expression with at least one capture group.
Extracting a regular expression with more than one group returns a DataFrame with one column per group.
In [79]: pd.Series(['a1', 'b2', 'c3']).str.extract('([ab])(\d)', expand=False)
Out[79]:
0 1
0 a 1
1 b 2
2 NaN NaN
Elements that do not match return a row filled with NaN. Thus, a Series of messy strings can be “converted” into a
like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples
or re.match objects. The dtype of the result is always object, even if no match is found and the result only contains
NaN.
Named groups like
In [80]: pd.Series(['a1', 'b2', 'c3']).str.extract('(?P<letter>[ab])(?P<digit>\d)', expand=False)
Out[80]:
letter digit
0 a 1
1 b 2
2 NaN NaN
can also be used. Note that any capture group names in the regular expression will be used for column names;
otherwise capture group numbers will be used.
Extracting a regular expression with one group returns a DataFrame with one column if expand=True.
In [82]: pd.Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)', expand=True)
Out[82]:
0
0 1
1 2
2 NaN
Calling on an Index with a regex with exactly one capture group returns a DataFrame with one column if
expand=True.
In [84]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"])
In [85]: s
Out[85]:
A11 a1
B22 b2
C33 c3
dtype: object
Calling on an Index with a regex with more than one capture group returns a DataFrame if expand=True.
The table below summarizes the behavior of extract(expand=False) (input subject in first column, number of
groups in regex in first row)
10.4.2 Extract all matches in each subject (extractall)
In [90]: s
Out[90]:
A a1a2
B b1
C c1
dtype: object
Unlike extract() (which returns only the first match), the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its rows. The last level of the MultiIndex is named match
and indicates the order in the subject.
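The two_groups pattern referenced below is presumably a two-group named regex such as (a sketch):
two_groups = '(?P<letter>[a-z])(?P<digit>[0-9])'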
In [93]: s.str.extractall(two_groups)
Out[93]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [95]: s
Out[95]:
0 a3
1 b3
2 c2
dtype: object

then extractall(pat).xs(0, level='match') gives the same result as extract(pat):
In [97]: extract_result
Out[97]:
letter digit
0 a 3
1 b 3
2 c 2
In [99]: extractall_result
Out[99]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
Extracting the first match from the extractall result recovers the same frame (e.g. extractall_result.xs(0, level='match')):
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the same result as a Series.str.
extractall with a default index (starts from 0).
New in version 0.19.0.
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
10.5 Testing for Strings that Match or Contain a Pattern
The distinction between match and contains is strictness: match relies on strict re.match, while contains
relies on re.search.
Methods like match, contains, startswith, and endswith take an extra na argument so missing values can
be considered True or False:
In [106]: s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
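For example, treating the missing value as False (a sketch):
s4.str.contains('A', na=False)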
10.6 Creating Indicator Variables
You can extract dummy variables from string columns. For example if they are separated by a '|':
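The s below is presumably a Series of '|'-separated tags, e.g. (a sketch consistent with the dummy matrix shown):
s = pd.Series(['a', 'a|b', np.nan, 'a|c'])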
In [109]: s.str.get_dummies(sep='|')
Out[109]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
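A string Index also supports get_dummies, returning a MultiIndex; idx below would be the Index analogue (a sketch):
idx = pd.Index(['a', 'a|b', np.nan, 'a|c'])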
In [111]: idx.str.get_dummies(sep='|')
Out[111]:
MultiIndex(levels=[[0, 1], [0, 1], [0, 1]],
labels=[[1, 1, 0, 1], [0, 1, 0, 0], [0, 0, 0, 1]],
names=['a', 'b', 'c'])
10.7 Method Summary
Method Description
cat() Concatenate strings
split() Split strings on delimiter
rsplit() Split strings on delimiter working from the end of the string
get() Index into each element (retrieve i-th element)
join() Join strings in each element of the Series with passed separator
get_dummies() Split strings on the delimiter returning DataFrame of dummy variables
contains() Return boolean array if each string contains pattern/regex
replace() Replace occurrences of pattern/regex/string with some other string or the return value of a
callable given the occurrence
repeat() Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad() Add whitespace to left, right, or both sides of strings
center() Equivalent to str.center
ljust() Equivalent to str.ljust
rjust() Equivalent to str.rjust
zfill() Equivalent to str.zfill
wrap() Split long strings into lines with length less than a given width
slice() Slice each string in the Series
slice_replace() Replace slice in each string with passed value
count() Count occurrences of pattern
startswith() Equivalent to str.startswith(pat) for each element
endswith() Equivalent to str.endswith(pat) for each element
findall() Compute list of all occurrences of pattern/regex for each string
match() Call re.match on each element, returning matched groups as list
extract() Call re.search on each element, returning DataFrame with one row for each element
and one column for each regex capture group
extractall() Call re.findall on each element, returning DataFrame with one row for each match
and one column for each regex capture group
len() Compute string lengths
strip() Equivalent to str.strip
rstrip() Equivalent to str.rstrip
lstrip() Equivalent to str.lstrip
partition() Equivalent to str.partition
rpartition() Equivalent to str.rpartition
lower() Equivalent to str.lower
upper() Equivalent to str.upper
find() Equivalent to str.find
rfind() Equivalent to str.rfind
index() Equivalent to str.index
rindex() Equivalent to str.rindex
capitalize() Equivalent to str.capitalize
swapcase() Equivalent to str.swapcase
normalize() Return Unicode normal form. Equivalent to unicodedata.normalize
translate() Equivalent to str.translate
isalnum() Equivalent to str.isalnum
isalpha() Equivalent to str.isalpha
isdigit() Equivalent to str.isdigit
isspace() Equivalent to str.isspace
islower() Equivalent to str.islower
isupper() Equivalent to str.isupper
istitle() Equivalent to str.istitle
ELEVEN: Options and Settings
11.1 Overview
pandas has an options system that lets you customize some aspects of its behaviour, display-related options being those
the user is most likely to adjust.
Options have a full “dotted-style”, case-insensitive name (e.g. display.max_rows). You can get/set options
directly as attributes of the top-level options attribute:
In [2]: pd.options.display.max_rows
Out[2]: 15
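The step between the two reads would be an assignment to the attribute, something like:
pd.options.display.max_rows = 999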
In [4]: pd.options.display.max_rows
Out[4]: 999
The API is composed of 5 relevant functions, available directly from the pandas namespace:
• get_option() / set_option() - get/set the value of a single option.
• reset_option() - reset one or more options to their default value.
• describe_option() - print the descriptions of one or more options.
• option_context() - execute a codeblock with a set of options that revert to prior settings after execution.
Note: Developers can check out pandas/core/config.py for more information.
All of the functions above accept a regexp pattern (re.search style) as an argument, and so passing in a substring
will work - as long as it is unambiguous:
In [5]: pd.get_option("display.max_rows")
Out[5]: 999
In [6]: pd.set_option("display.max_rows",101)
In [7]: pd.get_option("display.max_rows")
Out[7]: 101
In [8]: pd.set_option("max_r",102)
In [9]: pd.get_option("display.max_rows")
Out[9]: 102
The following will not work because it matches multiple option names, e.g. display.max_colwidth,
display.max_rows, display.max_columns:
In [10]: try:
....: pd.get_option("column")
....: except KeyError as e:
....: print(e)
....:
'Pattern matched multiple keys'
Note: Using this form of shorthand may cause your code to break if new options with similar names are added in
future versions.
You can get a list of available options and their descriptions with describe_option. When called with no argu-
ment describe_option will print out the descriptions for all available options.
As described above, get_option() and set_option() are available from the pandas namespace. To change an
option, call set_option('option regex', new_value).
In [11]: pd.get_option('mode.sim_interactive')
Out[11]: False
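The intervening call would enable the option, e.g. (a sketch):
pd.set_option('mode.sim_interactive', True)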
In [13]: pd.get_option('mode.sim_interactive')
Out[13]: True
In [15]: pd.set_option("display.max_rows",999)
In [16]: pd.get_option("display.max_rows")
Out[16]: 999
In [17]: pd.reset_option("display.max_rows")
In [18]: pd.get_option("display.max_rows")
Out[18]: 60
option_context context manager has been exposed through the top-level API, allowing you to execute code with
given option values. Option values are restored automatically when you exit the with block:
In [20]: with pd.option_context("display.max_rows",10,"display.max_columns", 5):
....: print(pd.get_option("display.max_rows"))
....: print(pd.get_option("display.max_columns"))
10
5
In [21]: print(pd.get_option("display.max_rows"))
60
In [22]: print(pd.get_option("display.max_columns"))
0
Using startup scripts for the python/ipython environment to import pandas and set options makes working with pandas
more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where
the startup folder is in a default ipython profile can be found at:
$IPYTHONDIR/profile_default/startup
More information can be found in the ipython documentation. An example startup script for pandas is displayed
below:
import pandas as pd
pd.set_option('display.max_rows', 999)
pd.set_option('precision', 5)
In [23]: df = pd.DataFrame(np.random.randn(7,2))
In [24]: pd.set_option('max_rows', 7)
In [25]: df
Out[25]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
3 0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929 1.071804
6 0.721555 -0.706771
In [26]: pd.set_option('max_rows', 5)
In [27]: df
Out[27]:
[7 rows x 2 columns]
In [28]: pd.reset_option('max_rows')
display.expand_frame_repr allows for the representation of dataframes to stretch across pages, wrapped over
the full column vs row-wise.
In [29]: df = pd.DataFrame(np.random.randn(5,10))
In [31]: df
Out[31]:
0 1 2 3 4 5 6 7
˓→ 8 9
0 -1.039575 0.271860 -0.424972 0.567020 0.276232 -1.087401 -0.673690 0.113648 -1.
˓→478427 0.524988
1 0.404705 0.577046 -1.715002 -1.039268 -0.370647 -1.157892 -1.344312 0.844885 1.
˓→075770 -0.109050
In [33]: df
Out[33]:
0 1 2 3 4 5 6 7
˓→ 8 9
0 -1.039575 0.271860 -0.424972 0.567020 0.276232 -1.087401 -0.673690 0.113648 -1.
˓→478427 0.524988
1 0.404705 0.577046 -1.715002 -1.039268 -0.370647 -1.157892 -1.344312 0.844885 1.
˓→075770 -0.109050
In [34]: pd.reset_option('expand_frame_repr')
display.large_repr lets you select whether to display dataframes that exceed max_columns or max_rows
as a truncated frame, or as a summary.
In [35]: df = pd.DataFrame(np.random.randn(10,10))
In [36]: pd.set_option('max_rows', 5)
In [38]: df
Out[38]:
0 1 2 3 4 5 6 7
˓→ 8 9
0 -1.413681 1.607920 1.024180 0.569605 0.875906 -2.211372 0.974466 -2.006747 -0.
˓→410001 -0.078638
In [40]: df
Out[40]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 10 non-null float64
1 10 non-null float64
2 10 non-null float64
3 10 non-null float64
4 10 non-null float64
5 10 non-null float64
6 10 non-null float64
7 10 non-null float64
8 10 non-null float64
9 10 non-null float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [41]: pd.reset_option('large_repr')
In [42]: pd.reset_option('max_rows')
display.max_colwidth sets the maximum width of columns. Cells of this length or longer will be truncated
with an ellipsis.
In [44]: pd.set_option('max_colwidth',40)
In [46]: pd.set_option('max_colwidth', 6)
In [47]: df
Out[47]:
0 1 2 3
0 foo bar bim un...
1 horse cow ba... apple
In [48]: pd.reset_option('max_colwidth')
In [49]: df = pd.DataFrame(np.random.randn(10,10))
In [51]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 10 non-null float64
1 10 non-null float64
2 10 non-null float64
3 10 non-null float64
4 10 non-null float64
5 10 non-null float64
6 10 non-null float64
7 10 non-null float64
8 10 non-null float64
9 10 non-null float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [52]: pd.set_option('max_info_columns', 5)
In [53]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 880.0 bytes
In [54]: pd.reset_option('max_info_columns')
display.max_info_rows: df.info() will usually show null-counts for each column. For large frames this
can be quite slow. max_info_rows and max_info_cols limit this null check to frames with smaller
dimensions than specified. Note that you can pass df.info(null_counts=True) to override this for a
particular frame.
In [56]: df
Out[56]:
0 1 2 3 4 5 6 7 8 9
0 0.0 1.0 1.0 0.0 1.0 1.0 0.0 NaN 1.0 NaN
1 1.0 NaN 0.0 0.0 1.0 1.0 NaN 1.0 0.0 1.0
2 NaN NaN NaN 1.0 1.0 0.0 NaN 0.0 1.0 NaN
3 0.0 1.0 1.0 NaN 0.0 NaN 1.0 NaN NaN 0.0
4 0.0 1.0 0.0 0.0 1.0 0.0 0.0 NaN 0.0 0.0
5 0.0 NaN 1.0 NaN NaN NaN NaN 0.0 1.0 NaN
6 0.0 1.0 0.0 0.0 NaN 1.0 NaN NaN 0.0 NaN
7 0.0 NaN 1.0 1.0 NaN 1.0 1.0 1.0 1.0 NaN
8 0.0 0.0 NaN 0.0 NaN 1.0 0.0 0.0 NaN NaN
9 NaN NaN 0.0 NaN NaN NaN 0.0 1.0 1.0 NaN
In [58]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 8 non-null float64
1 5 non-null float64
2 8 non-null float64
3 7 non-null float64
4 5 non-null float64
5 7 non-null float64
6 6 non-null float64
7 6 non-null float64
8 8 non-null float64
9 3 non-null float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [59]: pd.set_option('max_info_rows', 5)
In [60]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
0 float64
1 float64
2 float64
3 float64
4 float64
5 float64
6 float64
7 float64
8 float64
9 float64
dtypes: float64(10)
memory usage: 880.0 bytes
In [61]: pd.reset_option('max_info_rows')
display.precision sets the output display precision in terms of decimal places. This is only a suggestion.
In [62]: df = pd.DataFrame(np.random.randn(5,5))
In [63]: pd.set_option('precision',7)
In [64]: df
Out[64]:
0 1 2 3 4
0 -2.0490276 2.8466122 -1.2080493 -0.4503923 2.4239054
1 0.1211080 0.2669165 0.8438259 -0.2225400 2.0219807
2 -0.7167894 -2.2244851 -1.0611370 -0.2328247 0.4307933
3 -0.6654779 1.8298075 -1.4065093 1.0782481 0.3227741
4 0.2003243 0.8900241 0.1948132 0.3516326 0.4488815
In [65]: pd.set_option('precision',4)
In [66]: df
Out[66]:
0 1 2 3 4
0 -2.0490 2.8466 -1.2080 -0.4504 2.4239
1 0.1211 0.2669 0.8438 -0.2225 2.0220
2 -0.7168 -2.2245 -1.0611 -0.2328 0.4308
3 -0.6655 1.8298 -1.4065 1.0782 0.3228
4 0.2003 0.8900 0.1948 0.3516 0.4489
display.chop_threshold sets at what level pandas rounds to zero when it displays a Series or DataFrame. This
setting does not change the precision at which the number is stored.
In [67]: df = pd.DataFrame(np.random.randn(6,6))
In [68]: pd.set_option('chop_threshold', 0)
In [69]: df
Out[69]:
0 1 2 3 4 5
0 -0.1979 0.9657 -1.5229 -0.1166 0.2956 -1.0477
1 1.6406 1.9058 2.7721 0.0888 -1.1442 -0.6334
2 0.9254 -0.0064 -0.8204 -0.6009 -1.0393 0.8248
3 -0.8241 -0.3377 -0.9278 -0.8401 0.2485 -0.1093
4 0.4320 -0.4607 0.3365 -3.2076 -1.5359 0.4098
5 -0.6731 -0.7411 -0.1109 -2.6729 0.8645 0.0609

Raising the threshold (e.g. pd.set_option('chop_threshold', .5)) zeroes out the smaller values in the display:
In [71]: df
Out[71]:
0 1 2 3 4 5
0 0.0000 0.9657 -1.5229 0.0000 0.0000 -1.0477
1 1.6406 1.9058 2.7721 0.0000 -1.1442 -0.6334
2 0.9254 0.0000 -0.8204 -0.6009 -1.0393 0.8248
3 -0.8241 0.0000 -0.9278 -0.8401 0.0000 0.0000
4 0.0000 0.0000 0.0000 -3.2076 -1.5359 0.0000
5 -0.6731 -0.7411 0.0000 -2.6729 0.8645 0.0000
In [72]: pd.reset_option('chop_threshold')
display.colheader_justify controls the justification of the headers. The options are ‘right’, and ‘left’.
In [75]: df
Out[75]:
A B C
0 0.9331 0.3 0.0
1 0.2888 0.2 0.0
2 1.3250 0.2 0.0
3 0.5892 0.7 0.0
4 0.5314 0.1 0.0
5 -1.1987 0.7 0.0
In [77]: df
Out[77]:
A B C
0 0.9331 0.3 0.0
1 0.2888 0.2 0.0
2 1.3250 0.2 0.0
3 0.5892 0.7 0.0
4 0.5314 0.1 0.0
5 -1.1987 0.7 0.0
In [78]: pd.reset_option('colheader_justify')
pandas also allows you to set how numbers are displayed in the console. This option is not set through the
set_options API.
Use the set_eng_float_format function to alter the floating-point formatting of pandas objects to produce a
particular format.
For instance:
In [79]: import numpy as np
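The Series and formatting call used below would be along these lines (a sketch; the values are random):
s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)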
In [82]: s/1.e3
Out[82]:
a -236.866u
b 846.974u
c -685.597u
d 609.099u
e -303.961u
dtype: float64

Dividing by 1.e6 instead produces nano-scale prefixes:
a -236.866n
b 846.974n
c -685.597n
d 609.099n
e -303.961n
dtype: float64
To round floats on a case-by-case basis, you can also use Series.round() and DataFrame.round().
Warning: Enabling this option will affect the performance for printing of DataFrame and Series (about 2 times
slower). Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters. If a DataFrame
or Series contains these characters, the default output mode may not align them properly.
Note: Screen captures are attached for each output to show the actual results.
In [85]: df;
Enabling display.unicode.east_asian_width allows pandas to check each character’s “East Asian Width”
property. These characters can be aligned properly by setting this option to True. However, this will result in longer
render times than the standard len function.
In [87]: df;
In addition, Unicode characters whose width is “Ambiguous” can either be 1 or 2 characters wide depending on the
terminal setting or encoding. The option display.unicode.ambiguous_as_wide can be used to handle the
ambiguity.
By default, an “Ambiguous” character’s width, such as “¡” (inverted exclamation) in the example below, is taken to be
1.
In [89]: df;
In [91]: df;
TWELVE: Indexing and Selecting Data
Note: The Python and NumPy indexing operators [] and attribute operator . provide quick and easy access to pandas
data structures across a wide range of use cases. This makes interactive work intuitive, as there’s little new to learn if
you already know how to deal with Python dictionaries and NumPy arrays. However, since the type of the data to be
accessed isn’t known in advance, directly using standard operators has some optimization limits. For production code,
we recommended that you take advantage of the optimized pandas data access methods exposed in this chapter.
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.
Warning: Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the
changes, see here.
See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies.
12.1 Different Choices for Indexing
Object selection has had a number of user-requested additions in order to support more explicit location based index-
ing. Pandas now supports three types of multi-axis indexing.
• .loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when
the items are not found. Allowed inputs are:
– A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer
position along the index.).
– A list or array of labels ['a', 'b', 'c'].
– A slice object with labels 'a':'f' (Note that contrary to usual python slices, both the start and the stop
are included, when present in the index! See Slicing with labels.).
– A boolean array
– A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid
output for indexing (one of the above).
New in version 0.18.1.
See more at Selection by Label.
• .iloc is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a
boolean array. .iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers
which allow out-of-bounds indexing. (this conforms with Python/NumPy slice semantics). Allowed inputs are:
– An integer e.g. 5.
– A list or array of integers [4, 3, 0].
– A slice object with ints 1:7.
– A boolean array.
– A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid
output for indexing (one of the above).
New in version 0.18.1.
See more at Selection by Position, Advanced Indexing and Advanced Hierarchical.
• .loc, .iloc, and also [] indexing can accept a callable as indexer. See more at Selection By Callable.
Getting values from an object with multi-axes selection uses the following notation (using .loc as an example, but
the following applies to .iloc as well). Any of the axes accessors may be the null slice :. Axes left out of the
specification are assumed to be :, e.g. p.loc['a'] is equivalent to p.loc['a', :, :].
12.2 Basics
As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a.
__getitem__ for those familiar with implementing class behavior in Python) is selecting out lower-dimensional
slices. The following table shows return type values when indexing pandas objects with []:

Object Type    Selection            Return Value Type
Series         series[label]        scalar value
DataFrame      frame[colname]       Series corresponding to colname
Panel          panel[itemname]      DataFrame corresponding to the item name
Here we construct a simple time series data set to use for illustrating the indexing functionality:
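The data set could be created along these lines (a sketch; the values are random):
dates = pd.date_range('1/1/2000', periods=8)
df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])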
In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [5]: panel
Out[5]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 8 (major_axis) x 4 (minor_axis)
Items axis: one to two
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-08 00:00:00
Minor_axis axis: A to D
Note: None of the indexing functionality is time series specific unless specifically stated.
Thus, as per above, we have the most basic indexing using []:
In [6]: s = df['A']
In [7]: s[dates[5]]
Out[7]: -0.67368970808837059
In [8]: panel['two']
Out[8]:
A B C D
2000-01-01 0.409571 0.113086 -0.610826 -0.936507
2000-01-02 1.152571 0.222735 1.017442 -0.845111
2000-01-03 -0.921390 -1.708620 0.403304 1.270929
2000-01-04 0.662014 -0.310822 -0.141342 0.470985
2000-01-05 -0.484513 0.962970 1.174465 -0.888276
2000-01-06 -0.733231 0.509598 -0.580194 0.724113
2000-01-07 0.345164 0.972995 -0.816769 -0.840143
2000-01-08 -0.430188 -0.761943 -0.446079 1.044010
You can pass a list of columns to [] to select columns in that order. If a column is not contained in the DataFrame, an
exception will be raised. Multiple columns can also be set in this manner:
In [9]: df
Out[9]:
A B C D
Swapping two columns via such an assignment (e.g. df[['B', 'A']] = df[['A', 'B']]) is one example:
In [11]: df
Out[11]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 0.844885
You may find this useful for applying a transform (in-place) to a subset of the columns.
Warning: pandas aligns all AXES when setting Series and DataFrame from .loc, and .iloc.
This will not modify df because the column alignment is before value assignment.
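For example, an assignment like the following (a sketch) aligns on the column labels first, so the values end up back in their original columns:
df.loc[:, ['B', 'A']] = df[['A', 'B']]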
In [12]: df[['A', 'B']]
Out[12]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
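The correct way is to assign the raw values so that no alignment takes place, e.g. (a sketch):
df.loc[:, ['B', 'A']] = df[['A', 'B']].values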
12.3 Attribute Access
You may access an index on a Series, column on a DataFrame, and an item on a Panel directly as an attribute:
In [17]: sa = pd.Series([1,2,3],index=list('abc'))
In [19]: sa.b
Out[19]: 2
In [20]: dfa.A
Out[20]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
In [21]: panel.one
Out[21]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [22]: sa.a = 5
In [23]: sa
The dfa display below follows an assignment to an existing column via attribute access, e.g. dfa.A = list(range(len(dfa.index))):
In [25]: dfa
Out[25]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885

Using [] assignment works the same way, and is the form to use when creating a new column (e.g. dfa['A'] = list(range(len(dfa.index)))):
In [27]: dfa
Out[27]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
Warning:
• You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed. See
here for an explanation of valid identifiers.
• The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed.
• Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items.
• In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
If you are using the IPython environment, you may also use tab-completion to see these accessible attributes.
You can also assign a dict to a row of a DataFrame:
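For example (a sketch):
x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
x.iloc[1] = dict(x=9, y=99)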
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if
you try to use attribute access to create a new column, it creates a new attribute rather than a new column. In 0.21.0
and later, this will raise a UserWarning:
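A sketch of the situation described (the warning text is abbreviated):
In[1]: df = pd.DataFrame({'one': [1., 2., 3.]})
In[2]: df.two = [4, 5, 6]
UserWarning: Pandas doesn't allow columns to be created via a new attribute name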
In[3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0
12.4 Slicing ranges
The most robust and consistent way of slicing ranges along arbitrary axes is described in the Selection by Position
section detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of the values and the corresponding labels:
In [31]: s[:5]
Out[31]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64
In [32]: s[::2]
Out[32]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64
In [33]: s[::-1]
Out[33]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
2000-01-05 -0.424972
2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64
In [34]: s2 = s.copy()
In [35]: s2[:5] = 0
In [36]: s2
Out[36]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
With DataFrame, slicing inside of [] slices the rows. This is provided largely as a convenience since it is such a
common operation.
In [37]: df[:3]
Out[37]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [38]: df[::-1]
Out[38]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
12.5 Selection By Label
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.
Warning:
.loc is strict when you present slicers that are not compatible (or convertible) with the index type.
For example using integers in a DatetimeIndex. These will raise a TypeError.
In [39]: dfl = pd.DataFrame(np.random.randn(5,4), columns=list('ABCD'), index=pd.date_range('20130101', periods=5))
In [40]: dfl
Out[40]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646
In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>
String likes in slicing can be converted to the type of the index and lead to natural slicing.
In [41]: dfl.loc['20130102':'20130104']
Out[41]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
Warning: Starting in 0.21.0, pandas will show a FutureWarning when indexing with a list with missing labels.
In the future this will raise a KeyError. See Indexing with a list with missing labels is Deprecated.
pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based
protocol. Every label asked for must be in the index, or a KeyError will be raised. When slicing, both the start
bound AND the stop bound are included, if present in the index. Integers are valid labels, but they refer to the label
and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
• A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer
position along the index.).
• A list or array of labels ['a', 'b', 'c'].
• A slice object with labels 'a':'f' (Note that contrary to usual python slices, both the start and the stop are
included, when present in the index! See Slicing with labels.).
• A boolean array.
• A callable, see Selection By Callable.
In [42]: s1 = pd.Series(np.random.randn(6),index=list('abcdef'))
In [43]: s1
Out[43]:
a 1.431256
In [44]: s1.loc['c':]
Out[44]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [45]: s1.loc['b']
Out[45]: 1.3403088497993827

In [46]: s1.loc['c':] = 0

In [47]: s1
Out[47]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000
e 0.000000
f 0.000000
dtype: float64
With a DataFrame:
In [48]: df1 = pd.DataFrame(np.random.randn(6,4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [49]: df1
Out[49]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
When using .loc with slices, if both the start and the stop labels are present in the index, then elements located
between the two (including them) are returned:
In [56]: s = pd.Series(list('abcde'), index=[0,3,2,5,4])
In [57]: s.loc[3:5]
If at least one of the two is absent, but the index is sorted, and can be compared against start and stop labels, then
slicing will still work as expected, by selecting labels which rank between the two:
In [58]: s.sort_index()
Out[58]:
0 a
2 c
3 b
4 e
5 d
dtype: object
In [59]: s.sort_index().loc[1:6]
Out[59]:
2 c
3 b
4 e
5 d
dtype: object
However, if at least one of the two is absent and the index is not sorted, an error will be raised (since doing otherwise
would be computationally expensive, as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, s.loc[1:6] would raise KeyError.
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.
Pandas provides a suite of methods in order to get purely integer based indexing. The semantics follow closely
Python and NumPy slicing. This is 0-based indexing. When slicing, the start bound is included, while the upper
bound is excluded. Trying to use a non-integer, even a valid label, will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
• An integer e.g. 5.
• A list or array of integers [4, 3, 0].
• A slice object with ints 1:7.
• A boolean array.
• A callable, see Selection By Callable.
In [60]: s1 = pd.Series(np.random.randn(5), index=list(range(0,10,2)))
In [61]: s1
In [62]: s1.iloc[:3]
Out[62]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64
In [63]: s1.iloc[3]
Out[63]: -1.1103361028911669
In [64]: s1.iloc[:3] = 0

In [65]: s1
Out[65]:
0 0.000000
2 0.000000
4 0.000000
6 -1.110336
8 -0.619976
dtype: float64
With a DataFrame:
In [66]: df1 = pd.DataFrame(np.random.randn(6,4),
....: index=list(range(0,12,2)),
....: columns=list(range(0,8,2)))
....:
In [67]: df1
Out[67]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427
In [71]: df1.iloc[1:3, :]
Out[71]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [74]: df1.iloc[1]
Out[74]:
0 0.403310
2 -0.154951
4 0.301624
6 -2.179861
Name: 2, dtype: float64
Out-of-range slice indexes are handled gracefully, just as in Python/NumPy.

# these are allowed in python/numpy.
In [76]: x = list('abcdef')

In [77]: x[4:10]
Out[77]: ['e', 'f']

In [78]: x[8:10]
Out[78]: []
In [79]: s = pd.Series(x)
In [80]: s
Out[80]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object
In [81]: s.iloc[4:10]
Out[81]:
4 e
5 f
dtype: object
In [82]: s.iloc[8:10]
Out[82]: Series([], dtype: object)
Note that using slices that go out of bounds can result in an empty axis (e.g. an empty DataFrame being returned).
In [83]: dfl = pd.DataFrame(np.random.randn(5,2), columns=list('AB'))
In [84]: dfl
Out[84]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
B
0 -2.182937
1 0.084844
In [87]: dfl.iloc[4:6]
Out[87]:
A B
4 0.27423 0.132885
A single indexer that is out of bounds will raise an IndexError. A list of indexers where any element is out of
bounds will raise an IndexError.
dfl.iloc[[4, 5, 6]]
IndexError: positional indexers are out-of-bounds
dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds
In [89]: df1
Out[89]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143
c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580
A B
a -0.023688 2.410179
b -0.251905 -2.213588
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64
Using these methods / indexers, you can chain data selection operations without using a temporary variable.
In [95]: bb = pd.read_csv('data/baseball.csv', index_col='id')
2007 CIN 6 379 745 101 203 35 2 36 125.0 10.0 1.0 105 127.0 14.0 1.0 1.0 15.0 18.0
DET 5 301 1062 162 283 54 4 37 144.0 24.0 7.0 97 176.0 3.0 10.0 4.0 8.0 28.0
HOU 4 311 926 109 218 47 6 14 77.0 10.0 4.0 60 212.0 3.0 9.0 16.0 6.0 17.0
LAN 11 413 1021 153 293 61 3 36 154.0 7.0 5.0 114 141.0 8.0 9.0 3.0 8.0 29.0
NYN 13 622 1854 240 509 101 3 61 243.0 22.0 4.0 174 310.0 24.0 23.0 18.0 15.0 48.0
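A sketch of such chaining, using a callable indexer on the aggregated frame (the column name r and the threshold are illustrative assumptions about the CSV's contents):

>>> (bb.groupby(['year', 'team']).sum()
...    .loc[lambda df: df.r > 100])

Because the callable receives the intermediate result, no temporary variable for the grouped frame is needed.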
Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
.ix offers a lot of magic on the inference of what the user wants to do. To wit, .ix can decide to index positionally
OR via labels depending on the data type of the index. This has caused quite a bit of user confusion over the years.
The recommended methods of indexing are:
• .loc if you want to index by label.
• .iloc if you want to index by position.
In [97]: dfd = pd.DataFrame({'A': [1, 2, 3],
....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:
In [98]: dfd
Out[98]:
A B
a 1 4
b 2 5
c 3 6
Previous behavior, where you wish to get the 0th and the 2nd elements from the index in the ‘A’ column.
In [3]: dfd.ix[[0, 2], 'A']
Out[3]:
a 1
c 3
Name: A, dtype: int64
Using .loc. Here we will select the appropriate indexes from the index, then use label indexing.
In [99]: dfd.loc[dfd.index[[0, 2]], 'A']
Out[99]:
a 1
c 3
Name: A, dtype: int64
This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using positional indexing
to select things.
In [100]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]
Out[100]:
Warning: Starting in 0.21.0, using .loc or [] with a list with one or more missing labels, is deprecated, in favor
of .reindex.
In prior versions, using .loc[list-of-labels] would work as long as at least 1 of the keys was found (oth-
erwise it would raise a KeyError). This behavior is deprecated and will show a warning message pointing to this
section. The recommended alternative is to use .reindex().
For example:
In [103]: s
Out[103]:
0 1
1 2
2 3
dtype: int64
Previous Behavior

In [4]: s.loc[[1, 2, 3]]

Current Behavior

In [4]: s.loc[[1, 2, 3]]
Passing list-likes to .loc or [] with any missing label will raise a KeyError in the future; you can use .reindex() as an alternative.
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
12.9.1 Reindexing
The idiomatic way to achieve selecting potentially not-found elements is via .reindex(). See also the section on
reindexing.
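For instance, with the Series s shown above, a minimal sketch of the idiom (the label list is illustrative):

>>> s.reindex([1, 2, 3])
1    2.0
2    3.0
3    NaN
dtype: float64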
Alternatively, if you want to select only valid keys, the following is idiomatic and efficient; it is guaranteed to preserve
the dtype of the selection.
In [107]: s.loc[s.index.intersection(labels)]
Out[107]:
1 2
2 3
dtype: int64
Having a duplicated index will raise for a .reindex():

In [17]: s.reindex(labels)
ValueError: cannot reindex from a duplicate axis
Generally, you can intersect the desired labels with the current axis, and then reindex.
In [110]: s.loc[s.index.intersection(labels)].reindex(labels)
Out[110]:
c 3.0
d NaN
dtype: float64
However, this would still raise if your resulting index is duplicated.

In [42]: s.loc[s.index.intersection(labels)].reindex(labels)
ValueError: cannot reindex from a duplicate axis
A random selection of rows or columns from a Series, DataFrame, or Panel can be obtained with the sample() method. The method
will sample rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
In [111]: s = pd.Series([0,1,2,3,4,5])
By default, sample will return each row at most once, but one can also sample with replacement using the replace
option:
In [115]: s = pd.Series([0,1,2,3,4,5])
# With replacement:
In [117]: s.sample(n=6, replace=True)
Out[117]:
0 0
4 4
By default, each row has an equal probability of being selected, but if you want rows to have different probabilities,
you can pass the sample function sampling weights as weights. These weights can be a list, a NumPy array, or a
Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight
of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights
by the sum of the weights. For example:
In [118]: s = pd.Series([0,1,2,3,4,5])
When applied to a DataFrame, you can use a column of the DataFrame as sampling weights (provided you are sampling
rows and not columns) by simply passing the name of the column as a string.
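A minimal sketch of both ideas (the frame df3, its column names, and the weight values are illustrative):

>>> df3 = pd.DataFrame({'col1': [9, 8, 7, 6],
...                     'weight_column': [0.5, 0.4, 0.1, 0]})
>>> s.sample(n=3, weights=[0, 0, 0.2, 0.2, 0.2, 0.4])  # explicit weights for the 6-element Series above
>>> df3.sample(n=3, weights='weight_column')           # use a column of df3 as sampling weights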
sample also allows users to sample columns instead of rows using the axis argument.
Finally, one can also set a seed for sample’s random number generator using the random_state argument, which
will accept either an integer (as a seed) or a NumPy RandomState object.
# With a given seed, the sample will always draw the same rows.
In [128]: df4.sample(n=2, random_state=2)
Out[128]:
col1 col2
2 3 4
1 2 3
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis.
In the Series case this is effectively an appending operation.
In [130]: se = pd.Series([1,2,3])
In [131]: se
Out[131]:
0 1
1 2
2 3
dtype: int64
In [132]: se[5] = 5.
In [133]: se
Out[133]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
In [135]: dfi
Out[135]:
A B
0 0 1
1 2 3
2 4 5
A DataFrame can be enlarged on either axis via .loc. The C column in the output below was created by assigning to a previously non-existent column label (with the values of column A), and assigning to a non-existent index label appends a new row:
In [138]: dfi.loc[3] = 5
In [139]: dfi
Out[139]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of
overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to
use the at and iat methods, which are implemented on all of the data structures.
Similarly to loc, at provides label-based scalar lookups, while iat provides integer-based lookups analogously to
iloc.
In [140]: s.iat[5]
Out[140]: 5
In [142]: df.iat[3, 0]
Out[142]: 0.72155516224436689
In [144]: df.iat[3, 0] = 7
In [145]: df.at[dates[-1]+1, 0] = 7
In [146]: df
Out[146]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
Another common operation is the use of boolean vectors to filter the data. The operators are: | for or, & for and, and
~ for not. These must be grouped by using parentheses, since by default Python will evaluate an expression such as
df.A > 2 & df.B < 3 as df.A > (2 & df.B) < 3, while the desired evaluation order is (df.A > 2)
& (df.B < 3).
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
In [148]: s
Out[148]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64
0 -3
1 -2
4 1
5 2
6 3
dtype: int64
3 0
4 1
5 2
6 3
dtype: int64
You may select rows from a DataFrame using a boolean vector the same length as the DataFrame’s index (for example,
something derived from one of the columns of the DataFrame).
List comprehensions and the map method of Series can also be used to produce more complex criteria:
In [153]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
   .....:                     'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
   .....:                     'c': np.random.randn(7)})
   .....:

# only want 'two' or 'three'
In [154]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [155]: df2[criterion]
Out[155]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# Multiple criteria
In [157]: df2[criterion & (df2['b'] == 'x')]
Out[157]:
a b c
3 three x 0.361719
With the choice methods Selection by Label, Selection by Position, and Advanced Indexing you may select along more
than one axis using boolean vectors combined with other indexing expressions.
Consider the isin() method of Series, which returns a boolean vector that is true wherever the Series elements
exist in the passed list. This allows you to select rows where one or more columns have values you want:
In [160]: s
Out[160]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for the cases when you don’t know which of the sought
labels are in fact present:
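A sketch with the Series s above (the label list is illustrative):

>>> s[s.index.isin([2, 4, 6])]
4    0
2    2
dtype: int64
>>> s.reindex([2, 4, 6])   # compare: reindex also returns the missing label 6 (as NaN)
2    2.0
4    0.0
6    NaN
dtype: float64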
In addition to that, MultiIndex allows selecting a separate level to use in the membership check:
In [165]: s_mi = pd.Series(np.arange(6),
   .....:                  index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
   .....:
In [166]: s_mi
Out[166]:
0 a 0
b 1
c 2
1 a 3
0 c 2
1 a 3
dtype: int64
0 a 0
c 2
1 a 3
c 5
dtype: int64
DataFrame also has an isin() method. When calling isin, pass a set of values as either an array or dict. If values is
an array, isin returns a DataFrame of booleans that is the same shape as the original DataFrame, with True wherever
the element is in the sequence of values.
In [171]: df.isin(values)
Out[171]:
vals ids ids2
0 True True True
1 False True False
2 True False False
3 False False False
Oftentimes you’ll want to match certain values with certain columns. Just make values a dict where the key is the
column, and the value is a list of items you want to check for.
In [173]: df.isin(values)
Out[173]:
vals ids ids2
0 True True False
1 False True False
2 True False False
3 False False False
Combine DataFrame’s isin with the any() and all() methods to quickly select subsets of your data that meet
given criteria. To select a row where each column meets its own criterion:
In [174]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
In [175]: row_mask = df.isin(values).all(1)

In [176]: df[row_mask]
Out[176]:
vals ids ids2
0 1 a a
Selecting values from a Series with a boolean vector generally returns a subset of the data. To guarantee that selection
output has the same shape as the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
Selecting values from a DataFrame with a boolean criterion now also preserves input data shape. where is used under
the hood as the implementation. The code below is equivalent to df.where(df < 0).
In addition, where takes an optional other argument for replacement of values where the condition is False, in the
returned copy.
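A sketch of the other argument, using any numeric DataFrame df (illustrative):

>>> df.where(df < 0, -df)   # keep negative values, replace non-negative ones with their negation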
You may wish to set values based on some boolean criteria. This can be done intuitively like so:
In [181]: s2 = s.copy()
In [183]: s2
Out[183]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [186]: df2
Out[186]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000
By default, where returns a modified copy of the data. There is an optional parameter inplace so that the original
data can be modified without creating a copy:
In [189]: df_orig
Out[189]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
Note: The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
Alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame), such that partial selection with setting
is possible. This is analogous to partial setting via .loc (but on the contents rather than the axis labels).
In [193]: df2
Out[193]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838
Where can also accept axis and level parameters to align the input when performing the where.
In [195]: df2.where(df2>0,df2['A'],axis='index')
Out[195]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
12.15.1 Mask
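mask() is the inverse boolean operation of where(). A minimal sketch with the objects used above (illustrative):

>>> s.mask(s >= 0)    # NaN wherever the condition is True
>>> df.mask(df >= 0)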
DataFrame objects have a query() method that allows selection using an expression.
You can get the value of the frame where column b has values between the values of columns a and c. For example:
In [202]: n = 10
In [203]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))

In [204]: df
Out[204]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437
# pure python
In [205]: df[(df.a < df.b) & (df.b < df.c)]
Out[205]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
# query
In [206]: df.query('(a < b) & (b < c)')
Out[206]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
Do the same thing but fall back on a named index if there is no column with the name a.
In [209]: df
Out[209]:
b c
a
0 0 4
1 0 1
2 3 4
b c
a
2 3 4
If instead you don’t want to or cannot name your index, you can use the name index in your query expression:
In [212]: df
Out[212]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
b c
2 5 6
Note: If the name of your index overlaps with a column name, the column name is given precedence. For example,
In [216]: df.query('a > 2') # uses the column 'a', not the index
Out[216]:
a
a
1 3
3 3
You can still use the index in a query expression by using the special identifier ‘index’:
If for some reason you have a column named index, then you can refer to the index as ilevel_0 as well, but at
this point you should consider renaming your columns to something less ambiguous.
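A sketch of the special identifiers, using the frame above whose index and column are both named a (illustrative):

>>> df.query('index > 2')      # refer to the index with the name 'index'
>>> df.query('ilevel_0 > 2')   # 'ilevel_0' means "index level 0"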
You can also use the levels of a DataFrame with a MultiIndex as if they were columns in the frame:
In [218]: n = 10
In [221]: colors
Out[221]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'],
dtype='<U5')
In [222]: foods
Out[222]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs', 'eggs'], dtype='<U4')
In [225]: df
Out[225]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
If the levels of the MultiIndex are unnamed, you can refer to them using special names:
In [228]: df
Out[228]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
The convention is ilevel_0, which means “index level 0” for the 0th level of the index.
A use case for query() is when you have a collection of DataFrame objects that have a subset of column names
(or index levels/names) in common. You can pass the same query to both frames without having to specify which
frame you’re interested in querying.
In [231]: df
Out[231]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611
In [233]: df2
Out[233]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
8 0.420223 0.136404 0.141295
9 0.732206 0.419540 0.604675
10 0.604466 0.848974 0.896165
11 0.589168 0.920046 0.732716
In [237]: df
Out[237]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0
a b c
0 7 8 9
a b c
0 7 8 9
Slightly nicer by removing the parentheses (comparison operators bind tighter than & and | in query expressions).
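A sketch of the shorter form, with the same frame as above:

>>> df.query('a < b < c')   # equivalent to (a < b) & (b < c)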
query() also supports special use of Python’s in and not in comparison operators, providing a succinct syntax
for calling the isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [243]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:
In [244]: df
Out[244]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [248]: df[~df.a.isin(df.b)]
Out[248]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
You can combine this with other expressions for very succinct queries:
# rows where cols a and b have overlapping values and col c's values are less than
˓→col d's
# pure Python
In [250]: df[df.b.isin(df.a) & (df.c < df.d)]
Out[250]:
a b c d
0 a a 2 6
Note: Note that in and not in are evaluated in Python, since numexpr has no equivalent of this operation.
However, only the in/not in expression itself is evaluated in vanilla Python. For example, in the expression
df.query('a in b + c + d')
(b + c + d) is evaluated by numexpr and then the in operation is evaluated in plain Python. In general, any
operations that can be evaluated using numexpr will be.
Comparing a list of values to a column using ==/!= works similarly to in/not in.
In [251]: df.query('b == ["a", "b", "c"]')
Out[251]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [252]: df[df.b.isin(["a", "b", "c"])]
Out[252]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# using in/not in
In [255]: df.query('[1, 2] in c')
Out[255]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# pure Python
In [257]: df[df.c.isin([1, 2])]
Out[257]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
You can negate boolean expressions with the word not or the ~ operator.
In [260]: df.query('~bools')
Out[260]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
a b c bools
2 True True True True
7 True True True True
8 True True True True
In [265]: shorter
Out[265]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [266]: longer
Out[266]:
a b c bools
7 0.275396 0.691034 0.826619 False
a b c bools
7 True True True True
DataFrame.query() using numexpr is slightly faster than Python for large frames.
Note: You will only see the performance benefits of using the numexpr engine with DataFrame.query() if
your frame has more than approximately 200,000 rows.
This plot was created using a DataFrame with 3 columns each containing floating point values generated using
numpy.random.randn().
If you want to identify and remove duplicate rows in a DataFrame, there are two methods that will help: duplicated
and drop_duplicates. Each takes as an argument the columns to use to identify duplicated rows.
• duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row
is duplicated.
• drop_duplicates removes duplicate rows.
By default, the first observed row of a duplicate set is considered unique, but each method has a keep parameter to
specify targets to be kept.
• keep='first' (default): mark / drop duplicates except for the first occurrence.
• keep='last': mark / drop duplicates except for the last occurrence.
• keep=False: mark / drop all duplicates.
In [268]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
   .....:                     'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
   .....:                     'c': np.random.randn(7)})
   .....:
In [269]: df2
Out[269]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [270]: df2.duplicated('a')
Out[270]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
0 True
1 True
In [273]: df2.drop_duplicates('a')
Out[273]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
a b c
5 three x -1.964475
6 four x 1.298329
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated then perform slicing. The same set of options is available for the keep parameter.
In [279]: df3
Out[279]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [280]: df3.index.duplicated()
Out[280]: array([False, True, False, False, True, True], dtype=bool)
In [281]: df3[~df3.index.duplicated()]
Out[281]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409
In [282]: df3[~df3.index.duplicated(keep='last')]
Out[282]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [283]: df3[~df3.index.duplicated(keep=False)]
Out[283]:
a b
c 3 -0.894409
Each of Series, DataFrame, and Panel has a get method which can return a default value.
Sometimes you want to extract a set of values given a sequence of row labels and column labels, and the lookup
method allows for this and returns a NumPy array. For instance:
In [287]: dflookup = pd.DataFrame(np.random.rand(20,4), columns = ['A','B','C','D'])
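A sketch of both methods, using the frame just created (the row/column sequences and the default value are illustrative):

>>> dflookup.lookup(list(range(0, 10, 2)), ['B', 'C', 'A', 'B', 'D'])   # one value per (row, column) pair
>>> pd.Series([1, 2, 3], index=['a', 'b', 'c']).get('x', default=-1)    # get with a default
-1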
The pandas Index class and its subclasses can be viewed as implementing an ordered multiset. Duplicates are
allowed. However, if you try to convert an Index object with duplicate entries into a set, an exception will be
raised.
Index also provides the infrastructure necessary for lookups, data alignment, and reindexing. The easiest way to
create an Index directly is to pass a list or other sequence to Index:
In [289]: index = pd.Index(['e', 'd', 'a', 'b'])
In [290]: index
Out[290]: Index(['e', 'd', 'a', 'b'], dtype='object')
In [293]: index.name
Out[293]: 'something'
In [297]: df
Out[297]:
cols A B C
rows
0 1.295989 0.185778 0.436259
1 0.678101 0.311369 -0.528378
2 -0.674808 -1.103529 -0.656157
3 1.889957 2.076651 -1.102192
4 -1.211795 -0.791746 0.634724
In [298]: df['A']
Out[298]:
rows
0    1.295989
1    0.678101
2   -0.674808
3    1.889957
4   -1.211795
Name: A, dtype: float64
Indexes are “mostly immutable”, but it is possible to set and change their metadata, like the index name (or, for
MultiIndex, levels and labels).
You can use the rename, set_names, set_levels, and set_labels to set these attributes directly. They
default to returning a copy; however, you can specify inplace=True to have the data change in place.
See Advanced Indexing for usage of MultiIndexes.
In [300]: ind.rename("apple")
Out[300]: Int64Index([1, 2, 3], dtype='int64', name='apple')
In [301]: ind
Out[301]: Int64Index([1, 2, 3], dtype='int64')
In [304]: ind
Out[304]: Int64Index([1, 2, 3], dtype='int64', name='bob')
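rename above returned a copy, leaving ind itself unchanged. A sketch of modifying the name in place, consistent with the 'bob' name shown above (either form works):

>>> ind.set_names(["bob"], inplace=True)
>>> ind.name = "bob"   # equivalent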
In [306]: index
Out[306]:
MultiIndex(levels=[[0, 1, 2], ['one', 'two']],
labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
names=['first', 'second'])
In [307]: index.levels[1]
Out[307]: Index(['one', 'two'], dtype='object', name='second')
The two main operations are union (|) and intersection (&). These can be directly called as instance
methods or used via overloaded operators. Difference is provided via the .difference() method.
In [309]: a = pd.Index(['c', 'b', 'a'])

In [310]: b = pd.Index(['c', 'e', 'd'])

In [311]: a | b
Out[311]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [312]: a & b
Out[312]: Index(['c'], dtype='object')
In [313]: a.difference(b)
Out[313]: Index(['a', 'b'], dtype='object')
Also available is the symmetric_difference (^) operation, which returns elements that appear in either idx1
or idx2, but not in both. This is equivalent to the Index created by idx1.difference(idx2).union(idx2.
difference(idx1)), with duplicates dropped.
In [314]: idx1 = pd.Index([1, 2, 3, 4])

In [315]: idx2 = pd.Index([2, 3, 4, 5])

In [316]: idx1.symmetric_difference(idx2)
Out[316]: Int64Index([1, 5], dtype='int64')
Note: The resulting index from a set operation will be sorted in ascending order.
Important: Even though Index can hold missing values (NaN), it should be avoided if you do not want any
unexpected results. For example, some operations exclude missing values implicitly.
In [319]: idx1
Out[319]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')
In [320]: idx1.fillna(2)
Out[320]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')
In [322]: idx2
Out[322]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]',
˓→freq=None)
In [323]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[323]: DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq=None)
Occasionally you will load or create a data set into a DataFrame and want to add an index after you’ve already done
so. There are a couple of different ways.
DataFrame has a set_index() method which takes a column name (for a regular Index) or a list of column names
(for a MultiIndex). To create a new, re-indexed DataFrame:
In [324]: data
Out[324]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
In [325]: indexed1 = data.set_index('c')

In [326]: indexed1
Out[326]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [327]: indexed2 = data.set_index(['a', 'b'])

In [328]: indexed2
Out[328]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
The append keyword option allows you to keep the existing index and append the given columns to a MultiIndex:
In [331]: frame
Out[331]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
Other options in set_index allow you to not drop the index columns or to add the index in place (without creating a
new object):
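A sketch of those two options with the data frame above (column names as in the example):

>>> data.set_index('c', drop=False)           # keep 'c' as a column as well as the index
>>> data.set_index(['a', 'b'], inplace=True)  # modify data in place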
In [334]: data
Out[334]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
As a convenience, there is a new function on DataFrame called reset_index() which transfers the index values
into the DataFrame’s columns and sets a simple integer index. This is the inverse operation of set_index().
In [335]: data
Out[335]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
In [336]: data.reset_index()
Out[336]:
a b c d
The output is more similar to a SQL table or a record array. The names for the columns derived from the index are the
ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:
In [337]: frame
Out[337]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [338]: frame.reset_index(level=1)
Out[338]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0
reset_index takes an optional parameter drop which if true simply discards the index, instead of putting index
values in the DataFrame’s columns.
If you create an index yourself, you can just assign it to the index field:
data.index = index
When setting values in a pandas object, care must be taken to avoid what is called chained indexing. Here is an
example.
In [339]: dfmi = pd.DataFrame([list('abcd'),
   .....:                      list('efgh'),
   .....:                      list('ijkl'),
   .....:                      list('mnop')],
   .....:                     columns=pd.MultiIndex.from_product([['one', 'two'],
   .....:                                                         ['first', 'second']]))
   .....:
In [340]: dfmi
In [341]: dfmi['one']['second']
Out[341]:
0 b
1 f
2 j
3 n
Name: second, dtype: object
In [342]: dfmi.loc[:,('one','second')]
Out[342]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object
These both yield the same results, so which should you use? It is instructive to understand the order of operations on
these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed. Then another
Python operation dfmi_with_one['second'] selects the series indexed by 'second'. This is indicated by the
variable dfmi_with_one because pandas sees these operations as separate events: two separate calls to
__getitem__ that must be treated as linear operations, happening one after another.
Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one',
'second')) to a single call to __getitem__. This allows pandas to deal with this as a single entity. Furthermore
this order of operations can be significantly faster, and allows one to index both axes if so desired.
The problem in the previous section is just a performance issue. What’s up with the SettingWithCopy warning?
We don’t usually throw warnings around when you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has inherently unpredictable results. To see this,
think about how the Python interpreter executes this code:
dfmi.loc[:,('one','second')] = value
# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
See that __getitem__ in there? Outside of simple cases, it’s very hard to predict whether it will return a view or a
copy (it depends on the memory layout of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown out immediately afterward. That’s what
SettingWithCopy is warning you about!
Note: You may be wondering whether we should be concerned about the loc property in the first example. But
dfmi.loc is guaranteed to be dfmi itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course, dfmi.loc.__getitem__(idx) may be
a view or a copy of dfmi.
Sometimes a SettingWithCopy warning will arise at times when there’s no obvious chained indexing going on.
These are the bugs that SettingWithCopy is designed to catch! Pandas is probably trying to warn you that you’ve
done this:
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
foo['quux'] = value # We don't know whether this will modify df or not!
return foo
Yikes!
When you use chained indexing, the order and type of the indexing operation partially determine whether the result is
a slice into the original object, or a copy of the slice.
Pandas has the SettingWithCopyWarning because assigning to a copy of a slice is frequently not intentional,
but a mistake caused by chained indexing returning a copy where a slice was expected.
If you would like pandas to be more or less trusting about assignment to a chained indexing expression, you can set
the option mode.chained_assignment to one of these values:
• 'warn', the default, means a SettingWithCopyWarning is printed.
• 'raise' means pandas will raise a SettingWithCopyException you have to deal with.
• None will suppress the warnings entirely.
>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb.a.str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
This is the correct access method:

In [345]: dfc = pd.DataFrame({'A': ['aaa', 'bbb', 'ccc'], 'B': [1, 2, 3]})

In [346]: dfc.loc[0,'A'] = 11
In [347]: dfc
Out[347]:
A B
0 11 1
1 bbb 2
2 ccc 3
This can work at times, but it is not guaranteed to, and therefore should be avoided:

In [348]: dfc = dfc.copy()

In [349]: dfc['A'][0] = 111
In [350]: dfc
Out[350]:
A B
0 111 1
1 bbb 2
2 ccc 3
>>> pd.set_option('mode.chained_assignment','raise')
>>> dfc.loc[0]['A'] = 1111
Traceback (most recent call last)
...
SettingWithCopyException:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
Warning: The chained assignment warnings / exceptions are aiming to inform the user of a possibly invalid
assignment. There may be false positives; situations where a chained assignment is inadvertently reported.
THIRTEEN

MULTIINDEX / ADVANCED INDEXING
This section covers indexing with a MultiIndex and more advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
Warning: Whether a copy or a reference is returned for a setting operation, may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.
Hierarchical / Multi-level indexing is very exciting as it opens the door to some quite sophisticated data analysis and
manipulation, especially for working with higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data structures like Series (1d) and DataFrame (2d).
In this section, we will show what exactly we mean by “hierarchical” indexing and how it integrates with all of the
pandas indexing functionality described above and in prior sections. Later, when discussing group by and pivoting and
reshaping data, we’ll show non-trivial applications to illustrate how it aids in structuring data for analysis.
See the cookbook for some advanced strategies.
The MultiIndex object is the hierarchical analogue of the standard Index object which typically stores the axis
labels in pandas objects. You can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using MultiIndex.from_arrays), an array of tuples (us-
ing MultiIndex.from_tuples), or a crossed set of iterables (using MultiIndex.from_product). The
Index constructor will attempt to return a MultiIndex when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
In [1]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
...: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
...:
In [3]: tuples
Out[3]:
[('bar', 'one'),
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

In [5]: index
Out[5]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
names=['first', 'second'])
In [6]: s = pd.Series(np.random.randn(8), index=index)

In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
When you want every pairing of the elements in two iterables, it can be easier to use the MultiIndex.from_product function:
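A sketch (the iterables mirror the arrays used above):

>>> iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]
>>> pd.MultiIndex.from_product(iterables, names=['first', 'second'])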
As a convenience, you can pass a list of arrays directly into Series or DataFrame to construct a MultiIndex automati-
cally:
In [10]: arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
....: np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
....:
In [12]: s
Out[12]:
bar one -0.861849
two -2.104569
baz one -0.494929
In [14]: df
Out[14]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores string names for the levels themselves.
If no names are provided, None will be assigned:
In [15]: df.index.names
Out[15]: FrozenList([None, None])
This index can back any axis of a pandas object, and the number of levels of the index is up to you:
In [16]: df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)

In [17]: df
Out[17]:
first bar baz foo qux
second one two one two one two one two
A 0.895717 0.805244 -1.206412 2.565646 1.431256 1.340309 -1.170299 -0.226169
B 0.410835 0.813850 0.132003 -0.827317 -0.076467 -1.187678 1.130127 -1.436737
C -1.413681 1.607920 1.024180 0.569605 0.875906 -2.211372 0.974466 -2.006747
We’ve “sparsified” the higher levels of the indexes to make the console output a bit easier on the eyes. Note that how
the index is displayed can be controlled using the multi_sparse option in pandas.set_option():
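A sketch of turning the sparsified display off for a single block of output (option name as in current pandas):

>>> with pd.option_context('display.multi_sparse', False):
...     print(df)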
It’s worth keeping in mind that there’s nothing preventing you from using tuples as atomic labels on an axis:
In [20]: pd.Series(np.random.randn(8), index=tuples)
Out[20]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64
The reason that the MultiIndex matters is that it can allow you to do grouping, selection, and reshaping operations
as we will describe below and in subsequent areas of the documentation. As you will see in later sections, you can find
yourself working with hierarchically-indexed data without creating a MultiIndex explicitly yourself. However,
when loading data from a file, you may wish to generate your own MultiIndex when preparing the data set.
The method get_level_values will return a vector of the labels for each location at a particular level:
In [21]: index.get_level_values(0)
Out[21]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')

In [22]: index.get_level_values('second')
Out[22]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')
One of the important features of hierarchical indexing is that you can select data by a “partial” label identifying a
subgroup in the data. Partial selection “drops” levels of the hierarchical index in the result in a completely analogous
way to selecting a column in a regular DataFrame:
In [23]: df['bar']
Out[23]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920
A 0.895717
B 0.410835
In [25]: df['bar']['one']
Out[25]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64
In [26]: s['qux']
Out[26]:
one -1.039575
two 0.271860
dtype: float64
See Cross-section with hierarchical index for how to select on a deeper level.
The repr of a MultiIndex shows all the defined levels of an index, even if they are not actually used. When
slicing an index, you may notice this. For example:
This is done to avoid a recomputation of the levels in order to make slicing highly performant. If you want to see only
the used levels, you can use the MultiIndex.get_level_values() method.
In [29]: df[['foo','qux']].columns.values
Out[29]: array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')], dtype=object)
To reconstruct the MultiIndex with only the used levels, the remove_unused_levels method may be used.
New in version 0.20.0.
In [31]: df[['foo','qux']].columns.remove_unused_levels()
Out[31]:
MultiIndex(levels=[['foo', 'qux'], ['one', 'two']],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=['first', 'second'])
Operations between differently-indexed objects having MultiIndex on the axes will work as you expect; data
alignment will work the same as an Index of tuples:
In [32]: s + s[:-2]
Out[32]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64
In [33]: s + s[::2]
reindex can be called with another MultiIndex, or even a list or array of tuples:
In [34]: s.reindex(index[:3])
Out[34]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64
Syntactically integrating MultiIndex in advanced indexing with .loc is a bit challenging, but we’ve made every
effort to do so. In general, MultiIndex keys take the form of tuples. For example, the following works as you would
expect:
In [36]: df = df.T
In [37]: df
Out[37]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [38]: df.loc[('bar', 'two')]
Out[38]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64
Note that df.loc['bar', 'two'] would also work in this example, but this shorthand notation can lead to
ambiguity in general.
If you also want to index a specific column with .loc, you must use a tuple like this:
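For example, continuing the frame above (a sketch):

>>> df.loc[('bar', 'two'), 'A']   # scalar value at row ('bar', 'two'), column 'A'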
You don’t have to specify all levels of the MultiIndex; by passing only the first elements of the tuple you can use
“partial” indexing. For example, to get all elements with bar in the first level:
df.loc['bar']
This is a shortcut for the slightly more verbose notation df.loc[('bar',),] (equivalent to df.loc['bar',]
in this example).
“Partial” slicing also works quite nicely.
In [40]: df.loc['baz':'foo']
Out[40]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
Note: It is important to note that tuples and lists are not treated identically in pandas when it comes to indexing.
Whereas a tuple is interpreted as one multi-level key, a list is used to specify several keys. Or in other words, tuples
go horizontally (traversing levels), lists go vertically (scanning levels).
Importantly, a list of tuples indexes several complete MultiIndex keys, whereas a tuple of lists refers to several
values within a level:
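A minimal sketch of the difference (the Series here is constructed purely for illustration):

>>> s = pd.Series([1, 2, 3, 4, 5, 6],
...               index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]))
>>> s.loc[[("A", "c"), ("B", "d")]]   # list of tuples: exactly these two keys
A  c    1
B  d    5
dtype: int64
>>> s.loc[(["A", "B"], ["c", "d"])]   # tuple of lists: all combinations of the values
A  c    1
   d    2
B  c    4
   d    5
dtype: int64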
Warning: You should specify all axes in the .loc specifier, meaning the indexer for the index and for the
columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing both
axes, rather than into say the MultiIndex for the rows.
You should do this:
df.loc[(slice('A1','A3'),.....), :]
....: index=miindex,
....: columns=micolumns).sort_index().sort_index(axis=1)
....:
In [51]: dfmi
Out[51]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25 24 27 26
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
You can use pandas.IndexSlice to facilitate a more natural syntax using :, rather than using slice(None).
In [53]: idx = pd.IndexSlice
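A sketch of the resulting syntax on the dfmi frame above:

>>> dfmi.loc[idx[:, :, ['C1', 'C3']], idx[:, 'foo']]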
It is possible to perform quite complicated selections using this method on multiple axes at the same time.
In [55]: dfmi.loc['A1', (slice(None), 'foo')]
Out[55]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
D1 84 86
C3 D0 88 90
... ... ...
B1 C0 D1 100 102
C1 D0 104 106
D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
D1 44 46
C3 D0 56 58
... ... ...
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
Using a boolean indexer you can provide selection related to the values.
In [57]: mask = dfmi[('a', 'foo')] > 200
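A sketch of combining that boolean mask with slicers on both axes (continuing the example):

>>> dfmi.loc[idx[mask, :, ['C1', 'C3']], idx[:, 'foo']]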
You can also specify the axis argument to .loc to interpret the passed slicers on a single axis.
In [59]: dfmi.loc(axis=0)[:, :, ['C1', 'C3']]
Out[59]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
D1 45 44 47 46
C3 D0 57 56 59 58
... ... ... ... ...
A3 B0 C1 D1 205 204 207 206
C3 D0 217 216 219 218
D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
Furthermore you can set the values using the following methods.
In [60]: df2 = dfmi.copy()
In [61]: df2.loc(axis=0)[:, :, ['C1', 'C3']] = -10

In [62]: df2
Out[62]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 -10 -10 -10 -10
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
In [65]: df2
Out[65]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
D1 21 20 23 22
C3 D0 25000 24000 27000 26000
... ... ... ... ...
A3 B1 C0 D1 229 228 231 230
C1 D0 233000 232000 235000 234000
D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
D1 253000 252000 255000 254000
13.2.2 Cross-section
The xs method of DataFrame additionally takes a level argument to make selecting data at a particular level of a
MultiIndex easier.
In [66]: df
Out[66]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [67]: df.xs('one', level='second')
Out[67]:
A B C
first
bar 0.895717 0.410835 -1.413681
You can also select on the columns with xs(), by providing the axis argument.
In [69]: df = df.T
You can pass drop_level=False to xs() to retain the level that was selected.
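A sketch of retaining the selected level, as described above:

>>> df.xs('one', level='second', axis=1, drop_level=False)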
Compare the above with the result using drop_level=True (the default value).
In [75]: df.xs('one', level='second', axis=1, drop_level=True)
Out[75]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
The parameter level has been added to the reindex and align methods of pandas objects. This is useful to
broadcast values across a level. For instance:
In [76]: midx = pd.MultiIndex(levels=[['zero', 'one'], ['x','y']],
....: labels=[[1,1,0,0],[1,0,1,0]])
....:
In [77]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)

In [78]: df
Out[78]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [79]: df2 = df.mean(level=0)

In [80]: df2
Out[80]:
0 1
one 1.060074 -0.109716
zero 1.271532 0.713416
In [81]: df2.reindex(df.index, level=0)
Out[81]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
# aligning
In [82]: df_aligned, df2_aligned = df.align(df2, level=0)
In [84]: df2_aligned
Out[84]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
In [85]: df[:5]
Out[85]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
The swaplevel function can switch the order of two levels:
In [86]: df[:5].swaplevel(0, 1, axis=0)
Out[86]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
The reorder_levels function generalizes the swaplevel function, allowing you to permute the hierarchical
index levels in one step:
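A sketch with the frame above (the level positions are illustrative; for two levels this is the same swap as shown before):

>>> df[:5].reorder_levels([1, 0], axis=0)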
For MultiIndex-ed objects to be indexed and sliced effectively, they need to be sorted. As with any index, you can use
sort_index.
In [90]: s
Out[90]:
baz one 0.206053
foo two -0.251905
one -2.213588
baz two 1.063327
qux two 1.266143
bar two 0.299368
one -0.863838
qux one 0.408204
dtype: float64
In [91]: s.sort_index()
In [92]: s.sort_index(level=0)
In [93]: s.sort_index(level=1)
You may also pass a level name to sort_index if the MultiIndex levels are named.
In [95]: s.sort_index(level='L1')
Out[95]:
L1 L2
bar one -0.863838
two 0.299368
baz one 0.206053
two 1.063327
foo one -2.213588
two -0.251905
qux one 0.408204
two 1.266143
dtype: float64
In [96]: s.sort_index(level='L2')
Out[96]:
L1 L2
bar one -0.863838
baz one 0.206053
foo one -2.213588
qux one 0.408204
bar two 0.299368
baz two 1.063327
foo two -0.251905
qux two 1.266143
dtype: float64
On higher dimensional objects, you can sort any of the other axes by level if they have a MultiIndex:
Indexing will work even if the data are not sorted, but will be rather inefficient (and show a PerformanceWarning).
It will also return a copy of the data rather than a view:
In [100]: dfm
Out[100]:
jolie
In [4]: dfm.loc[(1, 'z')]
Out[4]:
jolie
jim joe
1 z 0.64094
Furthermore, if you try to index something that is not fully lexsorted, this can raise:
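For instance, slicing on the unsorted level raises an UnsortedIndexError; a sketch:

# dfm's index is only lexsorted to depth 1, so a range slice on the
# second level raises UnsortedIndexError
dfm.loc[(0, 'y'):(1, 'z')]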
The is_lexsorted() method on an Index shows whether the index is sorted, and the lexsort_depth property
returns the sort depth:
In [101]: dfm.index.is_lexsorted()
Out[101]: False
In [102]: dfm.index.lexsort_depth
Out[102]: 1
In [104]: dfm
Out[104]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020
In [105]: dfm.index.is_lexsorted()
Out[105]: True
In [106]: dfm.index.lexsort_depth
Out[106]: 2
Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provide the take method, which retrieves
elements along a given axis at the given indices. The given indices must be either a list or an ndarray of integer index
positions. take will also accept negative integers as relative positions to the end of the object.
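For example, the index and positions used below can be set up as follows (a sketch with assumed random data, so the exact values will differ):

# a random integer index and a list of positional indices
index = pd.Index(np.random.randint(0, 1000, 10))
positions = [0, 9, 3]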
In [109]: index
Out[109]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64')
In [111]: index[positions]
Out[111]: Int64Index([214, 329, 567], dtype='int64')
In [112]: index.take(positions)
Out[112]: Int64Index([214, 329, 567], dtype='int64')
In [114]: ser.iloc[positions]
Out[114]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
In [115]: ser.take(positions)
Out[115]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
For DataFrames, the given indices should be a 1d list or ndarray that specifies row or column positions.
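For example, taking two columns by position from a small random frame (a sketch) produces output like that shown below:

frm = pd.DataFrame(np.random.randn(5, 3))
frm.take([0, 2], axis=1)   # columns 0 and 2, selected by position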
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704
It is important to note that the take method on pandas objects is not intended to work on boolean indices and may
return unexpected results.
0 0.233141
1 -0.223540
dtype: float64
Finally, as a small note on performance, because the take method handles a narrower range of inputs, it can offer
performance that is a good deal faster than fancy indexing.
13.5 Index Types
We have discussed MultiIndex in the previous sections pretty extensively. DatetimeIndex and PeriodIndex
are shown here, and information about TimedeltaIndex is found here.
In the following sub-sections we will highlight some other index types.
13.5.1 CategoricalIndex
CategoricalIndex is a type of index that is useful for supporting indexing with duplicates. This is a container
around a Categorical and allows efficient indexing and storage of an index with a large number of duplicated
elements.
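One way to build such an index (a sketch; the categories are given in the order 'cab' so that the sorting example further below makes sense) is to cast a column to a CategoricalDtype and set it as the index:

from pandas.api.types import CategoricalDtype

df = pd.DataFrame({'A': np.arange(6), 'B': list('aabbca')})
df['B'] = df['B'].astype(CategoricalDtype(list('cab')))
df2 = df.set_index('B')   # df2.index is now a CategoricalIndex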
In [128]: df
In [129]: df.dtypes
Out[129]:
A int64
B category
dtype: object
In [130]: df.B.cat.categories
Out[130]: Index(['c', 'a', 'b'], dtype='object')
In [132]: df2.index
Out[132]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'], ordered=False, name='B', dtype='category')
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates. The indexers must be
in the category or the operation will raise a KeyError.
In [133]: df2.loc['a']
Out[133]:
A
B
a 0
a 1
a 5
Sorting the index will sort by the order of the categories (recall that we created the index with
CategoricalDtype(list('cab')), so the sorted order is cab).
In [135]: df2.sort_index()
Out[135]:
A
B
c 4
a 0
a 1
a 5
b 2
b 3
Groupby operations on the index will preserve the index nature as well.
In [136]: df2.groupby(level=0).sum()
Out[136]:
A
B
c 4
a 6
b 5
In [137]: df2.groupby(level=0).sum().index
Out[137]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False, name='B', dtype='category')
Reindexing operations will return a resulting index based on the type of the passed indexer. Passing a list will return
a plain-old Index; indexing with a Categorical will return a CategoricalIndex, indexed according to the
categories of the passed Categorical dtype. This allows one to arbitrarily index these even with values not in the
categories, similarly to how you can reindex any pandas index.
In [138]: df2.reindex(['a','e'])
Out[138]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [139]: df2.reindex(['a','e']).index
Out[139]: Index(['a', 'a', 'a', 'e'], dtype='object', name='B')
In [140]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
Out[140]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [141]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
Out[141]: CategoricalIndex(['a', 'a', 'a', 'e'], categories=['a', 'b', 'c', 'd', 'e'], ordered=False, name='B', dtype='category')
Warning: Reshaping and Comparison operations on a CategoricalIndex must have the same categories or
a TypeError will be raised.
In [9]: df3 = pd.DataFrame({'A' : np.arange(6),
'B' : pd.Series(list('aabbca')).astype('category')})
In [11]: df3.index
Out[11]: CategoricalIndex([u'a', u'a', u'b', u'b', u'c', u'a'], categories=[u'a', u'b', u'c'], ordered=False, name=u'B', dtype='category')
Warning: Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the
changes, see here.
13.5.2 Int64Index and RangeIndex
Int64Index is a fundamental index in pandas. It is an immutable array implementing an ordered, sliceable set.
Prior to 0.18.0, Int64Index provided the default index for all NDFrame objects.
RangeIndex is a sub-class of Int64Index added in version 0.18.0, now providing the default index for all
NDFrame objects. RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered
set. It is analogous to Python range objects.
13.5.3 Float64Index
By default a Float64Index will be automatically created when passing floating-point, or mixed integer and floating-point,
values in index creation. This enables a pure label-based slicing paradigm that makes [], .ix and .loc for scalar
indexing and slicing work exactly the same.
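For example, the index and Series used below can be constructed as:

indexf = pd.Index([1.5, 2, 3, 4.5, 5])
sf = pd.Series(range(5), index=indexf)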
In [143]: indexf
Out[143]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [145]: sf
Out[145]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64
Scalar selection for [],.loc will always be label based. An integer will match an equal float index (e.g. 3 is
equivalent to 3.0).
In [146]: sf[3]
Out[146]: 2
In [147]: sf[3.0]
Out[147]: 2
In [148]: sf.loc[3]
Out[148]: 2
In [149]: sf.loc[3.0]
Out[149]: 2
In [150]: sf.iloc[3]
Out[150]: 3
A scalar index that is not found will raise a KeyError. Slicing is primarily on the values of the index when using
[],ix,loc, and always positional when using iloc. The exception is when the slice is boolean, in which case it
will always be positional.
In [151]: sf[2:4]
Out[151]:
2.0 1
3.0 2
dtype: int64
In [152]: sf.loc[2:4]
Out[152]:
2.0 1
3.0 2
dtype: int64
In [153]: sf.iloc[2:4]
Out[153]:
3.0 2
4.5 3
dtype: int64
In [154]: sf[2.1:4.6]
Out[154]:
3.0 2
4.5 3
dtype: int64
In [155]: sf.loc[2.1:4.6]
Out[155]:
3.0 2
4.5 3
dtype: int64
In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type
˓→(Int64Index)
Warning: Using a scalar float indexer for .iloc has been removed in 0.18.0, so the following will raise a
TypeError:
In [3]: pd.Series(range(5)).iloc[3.0]
TypeError: cannot do positional indexing on <class 'pandas.indexes.range.RangeIndex
˓→'> with these indexers [3.0] of <type 'float'>
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat irregular timedelta-like
indexing scheme, but the data is recorded as floats. This could for example be millisecond offsets.
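A frame like the one shown below can be built, for instance, by concatenating a regular 250 ms grid with an irregular tail (a sketch with assumed random data):

dfir = pd.concat([pd.DataFrame(np.random.randn(5, 2),
                               index=np.arange(5) * 250.0,
                               columns=list('AB')),
                  pd.DataFrame(np.random.randn(6, 2),
                               index=np.arange(4, 10) * 250.1,
                               columns=list('AB'))])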
In [157]: dfir
Out[157]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
1000.4 0.310610 -0.108002
1250.5 -0.974226 -1.147708
1500.6 -2.281374 0.760010
1750.7 -0.742532 1.533318
2000.8 2.495362 -0.432771
2250.9 -0.068954 0.043520
Selection operations then will always work on a value basis, for all selection operators.
In [158]: dfir[0:1000.4]
Out[158]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
1000.4 0.310610 -0.108002
In [159]: dfir.loc[0:1001,'A']
Out[159]:
0.0 0.997289
250.0 -0.179129
500.0 0.936914
750.0 -1.003401
1000.0 -0.724626
1000.4 0.310610
Name: A, dtype: float64
In [160]: dfir.loc[1000.4]
Out[160]:
A 0.310610
B -0.108002
Name: 1000.4, dtype: float64
You could retrieve the first 1 second (1000 ms) of data as such:
In [161]: dfir[0:1000]
Out[161]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
In [162]: dfir.iloc[0:5]
Out[162]:
A B
0.0 0.997289 -1.693316
250.0 -0.179129 -1.598062
500.0 0.936914 0.912560
750.0 -1.003401 1.632781
1000.0 -0.724626 0.178219
13.5.4 IntervalIndex
Warning: These indexing behaviors are provisional and may change in a future version of pandas.
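The frame shown below can be created with an IntervalIndex built from breaks; for example:

df = pd.DataFrame({'A': [1, 2, 3, 4]},
                  index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]))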
In [164]: df
Out[164]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4
Label based indexing via .loc along the edges of an interval works as you would expect, selecting that particular
interval.
In [165]: df.loc[2]
Out[165]:
A 2
Name: (1, 2], dtype: int64
If you select a label contained within an interval, this will also select the interval.
In [167]: df.loc[2.5]
Out[167]:
A 3
Name: (2, 3], dtype: int64
In [170]: c
Out[170]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]
In [171]: c.categories
Furthermore, IntervalIndex allows one to bin other data with these same bins, with NaN representing a missing
value similar to other dtypes.
In [172]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[172]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]
If we need intervals on a regular frequency, we can use the interval_range() function to create an
IntervalIndex using various combinations of start, end, and periods. The default frequency for
interval_range is 1 for numeric intervals, and calendar day for datetime-like intervals:
In [173]: pd.interval_range(start=0, end=5)
Out[173]:
IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
closed='right',
dtype='interval[int64]')
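The same function works with datetime-like and timedelta-like endpoints; for example (a sketch):

pd.interval_range(start=pd.Timestamp('2017-01-01'), periods=4)
pd.interval_range(start=pd.Timedelta('0 days'), periods=3)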
The freq parameter can be used to specify non-default frequencies, and can utilize a variety of frequency aliases with
datetime-like intervals:
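For example (a sketch; the exact interval boundaries depend on the arguments):

# numeric intervals every 1.5 units
pd.interval_range(start=0, periods=4, freq=1.5)
# weekly datetime intervals
pd.interval_range(start=pd.Timestamp('2017-01-01'), periods=4, freq='W')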
Additionally, the closed parameter can be used to specify which side(s) the intervals are closed on. Intervals are
closed on the right side by default.
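For instance, intervals closed on both sides can be requested as follows (a sketch):

pd.interval_range(start=0, end=4, closed='both')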
Label-based indexing with integer axis labels is a thorny topic. It has been discussed heavily on mailing lists and
among various members of the scientific Python community. In pandas, our general viewpoint is that labels matter
more than integer locations. Therefore, with an integer axis index only label-based indexing is possible with the
standard tools like .loc. The following code will generate exceptions:
s = pd.Series(range(5))
s[-1]
df = pd.DataFrame(np.random.randn(5, 4))
df
df.loc[-2:]
This deliberate decision was made to prevent ambiguities and subtle bugs (many users reported finding bugs when the
API change was made to stop “falling back” on position-based indexing).
If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds of a label-based
slice can be outside the range of the index, much like slice indexing a normal Python list. Monotonicity of an index
can be tested with the is_monotonic_increasing and is_monotonic_decreasing attributes.
In [184]: df.index.is_monotonic_increasing
Out[184]: True
On the other hand, if the index is not monotonic, then both slice bounds must be unique members of the index.
In [188]: df.index.is_monotonic_increasing
Out[188]: False
In [191]: weakly_monotonic
Out[191]: Index(['a', 'b', 'c', 'c'], dtype='object')
In [192]: weakly_monotonic.is_monotonic_increasing
Out[192]: True
Compared with standard Python sequence slicing in which the slice endpoint is not inclusive, label-based slicing in
pandas is inclusive. The primary reason for this is that it is often not possible to easily determine the “successor” or
next element after a particular label in an index. For example, consider the following Series:
In [194]: s = pd.Series(np.random.randn(6), index=list('abcdef'))
In [195]: s
Out[195]:
a 0.112246
b 0.871721
c -0.816064
d -0.784880
e 1.030659
f 0.187483
dtype: float64
Suppose we wished to slice from c to e; using integers, this would be accomplished as follows:
In [196]: s[2:5]
Out[196]:
c -0.816064
d -0.784880
e 1.030659
dtype: float64
However, if you only had c and e, determining the next element in the index can be somewhat complicated. For
example, the following does not work:
s.loc['c':'e'+1]
A very common use case is to limit a time series to start and end at two specific dates. To enable this, we made the
design choice to have label-based slicing include both endpoints:
In [197]: s.loc['c':'e']
Out[197]:
c -0.816064
d -0.784880
e 1.030659
dtype: float64
This is most definitely a “practicality beats purity” sort of thing, but it is something to watch out for if you expect
label-based slicing to behave exactly in the way that standard Python integer slicing works.
The different indexing operations can potentially change the dtype of a Series.
In [198]: series1 = pd.Series([1, 2, 3])
In [199]: series1.dtype
Out[199]: dtype('int64')
In [202]: res
Out[202]:
0 1.0
4 NaN
dtype: float64
In [204]: series2.dtype
Out[204]: dtype('bool')
In [206]: res.dtype
Out[206]: dtype('O')
In [207]: res
Out[207]:
0 True
1 NaN
2 NaN
dtype: object
This is because the (re)indexing operations above silently insert NaNs and the dtype changes accordingly. This can
cause some issues when using numpy ufuncs such as numpy.logical_and.
See this old issue for a more detailed discussion.
FOURTEEN
COMPUTATIONAL TOOLS
14.1 Statistical Functions
14.1.1 Percent Change
Series, DataFrame, and Panel all have a method pct_change() to compute the percent change over a given
number of periods (using fill_method to fill NA/null values before computing the percent change).
In [2]: ser.pct_change()
Out[2]:
0 NaN
1 -1.602976
2 4.334938
3 -0.247456
4 -2.067345
5 -1.142903
6 -1.688214
7 -9.759729
dtype: float64
In [4]: df.pct_change(periods=3)
Out[4]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 -0.218320 -1.054001 1.987147 -0.510183
4 -0.439121 -1.816454 0.649715 -4.822809
5 -0.127833 -3.042065 -5.866604 -1.776977
6 -2.596833 -1.959538 -2.111697 -3.798900
7 -0.117826 -2.169058 0.036094 -0.067696
8 2.492606 -1.357320 -1.205802 -1.558697
9 -1.012977 2.324558 -1.003744 -0.371806
14.1.2 Covariance
Series.cov() can be used to compute covariance between series (excluding missing values).
In [5]: s1 = pd.Series(np.random.randn(1000))
In [6]: s2 = pd.Series(np.random.randn(1000))
In [7]: s1.cov(s2)
Out[7]: 0.00068010881743108204
Analogously, DataFrame.cov() computes pairwise covariances among the series in the DataFrame, also excluding
NA/null values.
Note: Assuming the missing data are missing at random this results in an estimate for the covariance matrix which
is unbiased. However, for many applications this estimate may not be acceptable because the estimated covariance
matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values
which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more
details.
In [9]: frame.cov()
Out[9]:
a b c d e
a 1.000882 -0.003177 -0.002698 -0.006889 0.031912
b -0.003177 1.024721 0.000191 0.009212 0.000857
c -0.002698 0.000191 0.950735 -0.031743 -0.005087
d -0.006889 0.009212 -0.031743 1.002983 -0.047952
e 0.031912 0.000857 -0.005087 -0.047952 1.042487
DataFrame.cov also supports an optional min_periods keyword that specifies the required minimum number
of observations for each column pair in order to have a valid result.
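The frame used below can be set up with some missing values so that min_periods has a visible effect; for example:

frame = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])
# remove enough observations that pairs involving 'a' and 'b'
# have fewer than 12 overlapping values
frame.loc[frame.index[:5], 'a'] = np.nan
frame.loc[frame.index[5:10], 'b'] = np.nan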
In [13]: frame.cov()
Out[13]:
a b c
a 1.123670 -0.412851 0.018169
b -0.412851 1.154141 0.305260
c 0.018169 0.305260 1.301149
In [14]: frame.cov(min_periods=12)
Out[14]:
a b c
a 1.123670 NaN 0.018169
b NaN 1.154141 0.305260
c 0.018169 0.305260 1.301149
14.1.3 Correlation
Correlation may be computed using the corr() method. Using the method parameter, several methods for computing
correlations are provided: pearson (the default), kendall, and spearman.
All of these are currently computed using pairwise complete observations. Wikipedia has articles covering the above
correlation coefficients:
• Pearson correlation coefficient
• Kendall rank correlation coefficient
• Spearman’s rank correlation coefficient
Note: Please see the caveats associated with this method of calculating correlation matrices in the covariance section.
Note that non-numeric columns will be automatically excluded from the correlation calculation.
Like cov, corr also supports the optional min_periods keyword:
In [20]: frame = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])
In [23]: frame.corr()
Out[23]:
In [24]: frame.corr(min_periods=12)
Out[24]:
a b c
a 1.000000 NaN 0.069544
b NaN 1.000000 0.051742
c 0.069544 0.051742 1.000000
A related method corrwith() is implemented on DataFrame to compute the correlation between like-labeled Series
contained in different DataFrame objects.
In [25]: index = ['a', 'b', 'c', 'd', 'e']
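The two frames being compared can be constructed, for example, as:

columns = ['one', 'two', 'three', 'four']
df1 = pd.DataFrame(np.random.randn(5, 4), index=index, columns=columns)
df2 = pd.DataFrame(np.random.randn(4, 4), index=index[:4], columns=columns)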
In [29]: df1.corrwith(df2)
Out[29]:
one -0.125501
two -0.493244
three 0.344056
four 0.004183
dtype: float64
a -0.675817
b 0.458296
c 0.190809
d -0.186275
e NaN
dtype: float64
The rank() method produces a data ranking with ties being assigned the mean of the ranks (by default) for the group:
In [31]: s = pd.Series(np.random.randn(5), index=list('abcde'))
In [33]: s.rank()
Out[33]:
a 5.0
b 2.5
c 1.0
rank() is also a DataFrame method and can rank either the rows (axis=0) or the columns (axis=1). NaN values
are excluded from the ranking.
In [36]: df
Out[36]:
0 1 2 3 4 5
0 -0.904948 -1.163537 -1.457187 0.135463 -1.457187 0.294650
1 -0.976288 -0.244652 -0.748406 -0.999601 -0.748406 -0.800809
2 0.401965 1.460840 1.256057 1.308127 1.256057 0.876004
3 0.205954 0.369552 -0.669304 0.038378 -0.669304 1.140296
4 -0.477586 -0.730705 -1.129149 -0.601463 -1.129149 -0.211196
5 -1.092970 -0.689246 0.908114 0.204848 NaN 0.463347
6 0.376892 0.959292 0.095572 -0.593740 NaN -0.069180
7 -1.002601 1.957794 -0.120708 0.094214 NaN -1.467422
8 -0.547231 0.664402 -0.519424 -0.073254 NaN -1.263544
9 -0.250277 -0.237428 -1.056443 0.419477 NaN 1.375064
In [37]: df.rank(1)
Out[37]:
0 1 2 3 4 5
0 4.0 3.0 1.5 5.0 1.5 6.0
1 2.0 6.0 4.5 1.0 4.5 3.0
2 1.0 6.0 3.5 5.0 3.5 2.0
3 4.0 5.0 1.5 3.0 1.5 6.0
4 5.0 3.0 1.5 4.0 1.5 6.0
5 1.0 2.0 5.0 3.0 NaN 4.0
6 4.0 5.0 3.0 1.0 NaN 2.0
7 2.0 5.0 3.0 4.0 NaN 1.0
8 2.0 5.0 3.0 4.0 NaN 1.0
9 2.0 3.0 1.0 4.0 NaN 5.0
rank optionally takes a parameter ascending which by default is true; when false, data is reverse-ranked, with
larger values assigned a smaller rank.
rank supports different tie-breaking methods, specified with the method parameter:
• average : average rank of tied group
• min : lowest rank in the group
• max : highest rank in the group
• first : ranks assigned in the order they appear in the array
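For example, the tie-breaking methods can be compared on a small Series (a sketch):

s = pd.Series([7, 7, 5, 9])
s.rank(method='average')   # ties share the mean rank:   2.5, 2.5, 1.0, 4.0
s.rank(method='min')       # ties share the lowest rank: 2.0, 2.0, 1.0, 4.0
s.rank(method='first')     # ties broken by appearance:  2.0, 3.0, 1.0, 4.0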
14.2 Window Functions
For working with data, a number of window functions are provided for computing common window or rolling statistics.
Among these are count, sum, mean, median, correlation, variance, covariance, standard deviation, skewness, and
kurtosis.
The rolling() and expanding() functions can be used directly from DataFrameGroupBy objects, see the
groupby docs.
Note: The API for window statistics is quite similar to the way one works with GroupBy objects, see the documen-
tation here.
We work with rolling, expanding and exponentially weighted data through the corresponding objects,
Rolling, Expanding and EWM.
In [39]: s = s.cumsum()
In [40]: s
Out[40]:
2000-01-01 -0.268824
2000-01-02 -1.771855
2000-01-03 -0.818003
2000-01-04 -0.659244
2000-01-05 -1.942133
2000-01-06 -1.869391
2000-01-07 0.563674
...
2002-09-20 -68.233054
2002-09-21 -66.765687
2002-09-22 -67.457323
2002-09-23 -69.253182
2002-09-24 -70.296818
2002-09-25 -70.844674
2002-09-26 -72.475016
Freq: D, Length: 1000, dtype: float64
In [41]: r = s.rolling(window=60)
In [42]: r
Out[42]: Rolling [window=60,center=False,axis=0]
In [14]: r.<TAB>
r.agg          r.apply        r.corr         r.count        r.cov          r.exclusions
r.aggregate    r.kurt         r.max          r.mean         r.median       r.min
r.name         r.quantile     r.skew         r.std          r.sum          r.var
Generally these methods all have the same interface. They all accept the following arguments:
• window: size of moving window
• min_periods: threshold of non-null data points to require (otherwise result is NA)
• center: boolean, whether to set the labels at the center (default is False)
We can then call methods on these rolling objects. These return like-indexed objects:
In [43]: r.mean()
Out[43]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
...
2002-09-20 -62.694135
2002-09-21 -62.812190
2002-09-22 -62.914971
2002-09-23 -63.061867
2002-09-24 -63.213876
2002-09-25 -63.375074
2002-09-26 -63.539734
Freq: D, Length: 1000, dtype: float64
In [44]: s.plot(style='k--')
Out[44]: <matplotlib.axes._subplots.AxesSubplot at 0x7f2115c02ba8>
In [45]: r.mean().plot(style='k')
Out[45]: <matplotlib.axes._subplots.AxesSubplot at 0x7f2115c02ba8>
They can also be applied to DataFrame objects. This is really just syntactic sugar for applying the moving window
operator to all of the DataFrame’s columns:
In [47]: df = df.cumsum()
In [48]: df.rolling(window=60).sum().plot(subplots=True)
Out[48]:
array([<matplotlib.axes._subplots.AxesSubplot object at 0x7f21156c40f0>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f2115662ef0>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f21156950f0>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f211563d2b0>],
˓→dtype=object)
Method Description
count() Number of non-null observations
sum() Sum of values
mean() Mean of values
median() Arithmetic median of values
min() Minimum
max() Maximum
std() Bessel-corrected sample standard deviation
var() Unbiased variance
skew() Sample skewness (3rd moment)
kurt() Sample kurtosis (4th moment)
quantile() Sample quantile (value at %)
apply() Generic apply
cov() Unbiased covariance (binary)
corr() Correlation (binary)
The apply() function takes an extra func argument and performs generic rolling computations. The func argu-
ment should be a single function that produces a single value from an ndarray input. Suppose we wanted to compute
the mean absolute deviation on a rolling basis:
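A sketch of such a computation (raw=True passes a NumPy array to the function, which is the behavior assumed here):

mad = lambda x: np.fabs(x - x.mean()).mean()
s.rolling(window=60).apply(mad, raw=True).plot(style='k')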
Passing win_type to .rolling generates a generic rolling window computation that is weighted according to the
win_type. The following methods are available:
Method Description
sum() Sum of values
mean() Mean of values
The weights used in the window are specified by the win_type keyword. The recognized types are the
scipy.signal window functions:
• boxcar
• triang
• blackman
• hamming
• bartlett
• parzen
• bohman
• blackmanharris
• nuttall
• barthann
• kaiser (needs beta)
• gaussian (needs std)
• general_gaussian (needs power, width)
• slepian (needs width).
In [51]: ser = pd.Series(np.random.randn(10), index=pd.date_range('1/1/2000', periods=10))
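For example, a weighted (triangular) window can be compared with the plain rolling mean shown next (a sketch):

# weighted rolling mean using a triangular window
ser.rolling(window=5, win_type='triang').mean()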
In [54]: ser.rolling(window=5).mean()
Out[54]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -0.841164
2000-01-06 -0.779948
2000-01-07 -0.565487
2000-01-08 -0.502815
2000-01-09 -0.553755
2000-01-10 -0.472211
Freq: D, dtype: float64
Note: For .sum() with a win_type, there is no normalization done to the weights for the window. Passing custom
weights of [1, 1, 1] will yield a different result than passing weights of [2, 2, 2], for example. When passing
a win_type instead of explicitly specifying the weights, the weights are already normalized so that the largest weight
is 1.
In contrast, the nature of the .mean() calculation is such that the weights are normalized with respect to each other.
Weights of [1, 1, 1] and [2, 2, 2] yield the same result.
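A time-aware window can also be specified as an offset: passing a string such as '2s' to .rolling() produces variable-sized windows based on elapsed time rather than a fixed number of observations. The frame shown below can be constructed, for example, as:

dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
                   index=pd.date_range('20130101 09:00:00',
                                       periods=5, freq='s'))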
In [57]: dft
Out[57]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 2.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 4.0
This is a regular frequency index. Using an integer window parameter works to roll along the window frequency.
In [58]: dft.rolling(2).sum()
Out[58]:
B
2013-01-01 09:00:00 NaN
2013-01-01 09:00:01 1.0
In [59]: dft.rolling(2, min_periods=1).sum()
Out[59]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
In [60]: dft.rolling('2s').sum()
Out[60]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
Using a non-regular, but still monotonic index, rolling with an integer window does not impart any special calculation.
In [62]: dft
Out[62]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
In [63]: dft.rolling(2).sum()
Out[63]:
B
foo
2013-01-01 09:00:00 NaN
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 NaN
Using the time-specification generates variable windows for this sparse data.
In [64]: dft.rolling('2s').sum()
Out[64]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Furthermore, we now allow an optional on parameter to specify a column (rather than the default of the index) in a
DataFrame.
In [66]: dft
Out[66]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 2.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
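The result below can be produced by rolling over the 'foo' column rather than the index; a sketch:

# roll over the datetime column 'foo' instead of the index
dft.rolling('2s', on='foo').sum()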
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 3.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
For example, having the right endpoint open is useful in many problems that require that there is no contamination
from present information back to past information. This allows the rolling window to compute statistics “up to that
point in time”, but not including that point in time.
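The comparison table below can be generated by computing the same rolling sum with each setting of closed (a sketch with an assumed irregular second-resolution index):

df = pd.DataFrame({'x': 1},
                  index=[pd.Timestamp('20130101 09:00:01'),
                         pd.Timestamp('20130101 09:00:02'),
                         pd.Timestamp('20130101 09:00:03'),
                         pd.Timestamp('20130101 09:00:04'),
                         pd.Timestamp('20130101 09:00:06')])
df['right'] = df.rolling('2s', closed='right').x.sum()    # the default
df['both'] = df.rolling('2s', closed='both').x.sum()
df['left'] = df.rolling('2s', closed='left').x.sum()
df['neither'] = df.rolling('2s', closed='neither').x.sum()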
In [73]: df
Out[73]:
x right both left neither
2013-01-01 09:00:01 1 1.0 1.0 NaN NaN
2013-01-01 09:00:02 1 2.0 2.0 1.0 1.0
2013-01-01 09:00:03 1 2.0 3.0 2.0 1.0
2013-01-01 09:00:04 1 2.0 3.0 2.0 1.0
2013-01-01 09:00:06 1 1.0 2.0 1.0 NaN
Currently, this feature is only implemented for time-based windows. For fixed windows, the closed parameter cannot
be set and the rolling window will always have both endpoints closed.
Using .rolling() with a time-based index is quite similar to resampling. They both operate and perform reductive
operations on time-indexed pandas objects.
When using .rolling() with an offset, the offset is a time delta. pandas takes a backwards-in-time looking window and
aggregates all of the values in that window (including the end point, but not the start point); this becomes the new value
at that point in the result. The windows are variable sized in time-space for each point of the input, and the result is the
same size as the input.
When using .resample() with an offset, pandas constructs a new index that is the frequency of the offset. For each
frequency bin, points from the input that fall in a backwards-in-time looking window ending at that bin are aggregated.
The result of this aggregation is the output for that frequency point. The windows are fixed size in frequency-space, and
the result has the shape of a regular frequency between the min and the max of the original input object.
To summarize, .rolling() is a time-based window operation, while .resample() is a frequency-based window
operation.
By default the labels are set to the right edge of the window, but a center keyword is available so the labels can be
set at the center.
In [74]: ser.rolling(window=5).mean()
Out[74]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
In [75]: ser.rolling(window=5, center=True).mean()
Out[75]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 -0.841164
2000-01-04 -0.779948
2000-01-05 -0.565487
2000-01-06 -0.502815
2000-01-07 -0.553755
2000-01-08 -0.472211
2000-01-09 NaN
2000-01-10 NaN
Freq: D, dtype: float64
cov() and corr() can compute moving window statistics about two Series or any combination of DataFrame/
Series or DataFrame/DataFrame. Here is the behavior in each case:
• two Series: compute the statistic for the pairing.
• DataFrame/Series: compute the statistics for each column of the DataFrame with the passed Series, thus
returning a DataFrame.
• DataFrame/DataFrame: by default compute the statistic for matching column names, returning a
DataFrame. If the keyword argument pairwise=True is passed then computes the statistic for each pair
of columns, returning a MultiIndexed DataFrame whose index are the dates in question (see the next
section).
For example:
In [77]: df = df.cumsum()
In [79]: df2.rolling(window=5).corr(df2['B'])
Out[79]:
A B C D
2000-01-01 NaN NaN NaN NaN
2000-01-02 NaN NaN NaN NaN
2000-01-03 NaN NaN NaN NaN
2000-01-04 NaN NaN NaN NaN
Warning: Prior to version 0.20.0 if pairwise=True was passed, a Panel would be returned. This will now
return a 2-level MultiIndexed DataFrame, see the whatsnew here.
In financial data analysis and other fields it’s common to compute covariance and correlation matrices for a collection
of time series. Often one is also interested in moving-window covariance and correlation matrices. This can be done
by passing the pairwise keyword argument, which in the case of DataFrame inputs will yield a MultiIndexed
DataFrame whose index are the dates in question. In the case of a single DataFrame argument the pairwise
argument can even be omitted:
Note: Missing values are ignored and each entry is computed using the pairwise complete observations. Please see
the covariance section for caveats associated with this method of calculating covariance and correlation matrices.
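For example, moving covariance and correlation matrices like the ones shown below can be computed as follows (a sketch; df is assumed to be the DataFrame of cumulative sums used above):

covs = (df[['B', 'C', 'D']]
        .rolling(window=50)
        .cov(df[['A', 'B', 'C']], pairwise=True))

correls = df.rolling(window=50).corr()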
In [81]: covs.loc['2002-09-22':]
Out[81]:
B C D
2002-09-22 A 1.367467 8.676734 -8.047366
B 3.067315 0.865946 -1.052533
C 0.865946 7.739761 -4.943924
2002-09-23 A 0.910343 8.669065 -8.443062
B 2.625456 0.565152 -0.907654
C 0.565152 7.825521 -5.367526
2002-09-24 A 0.463332 8.514509 -8.776514
B 2.306695 0.267746 -0.732186
C 0.267746 7.771425 -5.696962
2002-09-25 A 0.467976 8.198236 -9.162599
B 2.307129 0.267287 -0.754080
C 0.267287 7.466559 -5.822650
2002-09-26 A 0.545781 7.899084 -9.326238
B 2.311058 0.322295 -0.844451
C 0.322295 7.038237 -5.684445
In [83]: correls.loc['2002-09-22':]
Out[83]:
A B C D
2002-09-22 A 1.000000 0.186397 0.744551 -0.769767
B 0.186397 1.000000 0.177725 -0.240802
C 0.744551 0.177725 1.000000 -0.712051
D -0.769767 -0.240802 -0.712051 1.000000
2002-09-23 A 1.000000 0.134723 0.743113 -0.758758
B 0.134723 1.000000 0.124683 -0.209934
C 0.743113 0.124683 1.000000 -0.719088
... ... ... ... ...
2002-09-25 B 0.075157 1.000000 0.064399 -0.164179
C 0.731888 0.064399 1.000000 -0.704686
D -0.739160 -0.164179 -0.704686 1.000000
2002-09-26 A 1.000000 0.087756 0.727792 -0.736562
B 0.087756 1.000000 0.079913 -0.179477
C 0.727792 0.079913 1.000000 -0.692303
D -0.736562 -0.179477 -0.692303 1.000000
You can efficiently retrieve the time series of correlations between two columns by reshaping and indexing:
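For instance, with the MultiIndexed correls frame above, one way to pull out the A/C correlation series is (a sketch):

# unstack the second index level (the column labels), then select the pair
correls.unstack(1)[('A', 'C')].plot()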
14.3 Aggregation
Once the Rolling, Expanding or EWM objects have been created, several methods are available to perform multiple
computations on the data. These operations are similar to the aggregating API, groupby API, and resample API.
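The rolling object below is built from a DataFrame of random data; for example:

dfa = pd.DataFrame(np.random.randn(1000, 3),
                   index=pd.date_range('1/1/2000', periods=1000),
                   columns=['A', 'B', 'C'])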
In [86]: r = dfa.rolling(window=60,min_periods=1)
In [87]: r
Out[87]: Rolling [window=60,min_periods=1,center=False,axis=0]
We can aggregate by passing a function to the entire DataFrame, or select a Series (or multiple Series) via standard
__getitem__.
In [88]: r.aggregate(np.sum)
Out[88]:
A B C
2000-01-01 -0.289838 -0.370545 -1.284206
2000-01-02 -0.216612 -1.675528 -1.169415
In [89]: r['A'].aggregate(np.sum)
Out[89]:
2000-01-01 -0.289838
2000-01-02 -0.216612
2000-01-03 1.154661
2000-01-04 2.969393
2000-01-05 4.690630
2000-01-06 3.880630
2000-01-07 4.001957
...
2002-09-20 2.652493
2002-09-21 0.844497
2002-09-22 2.860036
2002-09-23 3.510163
2002-09-24 6.524983
2002-09-25 6.409626
2002-09-26 5.093787
Freq: D, Name: A, Length: 1000, dtype: float64
In [90]: r[['A','B']].aggregate(np.sum)
Out[90]:
A B
2000-01-01 -0.289838 -0.370545
2000-01-02 -0.216612 -1.675528
2000-01-03 1.154661 -1.634017
2000-01-04 2.969393 -4.003274
2000-01-05 4.690630 -4.682017
2000-01-06 3.880630 -4.447700
2000-01-07 4.001957 -2.884072
... ... ...
2002-09-20 2.652493 -10.528875
2002-09-21 0.844497 -9.280944
2002-09-22 2.860036 -9.270337
2002-09-23 3.510163 -8.151439
2002-09-24 6.524983 -10.168078
2002-09-25 6.409626 -9.956226
2002-09-26 5.093787 -7.074515
As you can see, the result of the aggregation will have the selected columns, or all columns if none are selected.
With windowed Series you can also pass a list of functions to do aggregation with, outputting a DataFrame:
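For example (a sketch):

r['A'].agg([np.sum, np.mean, np.std])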
On a windowed DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
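For example, applying sum and mean to every column (a sketch):

r.agg([np.sum, np.mean])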
Passing a dict of functions has different behavior by default, see the next section.
By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
In [93]: r.agg({'A' : np.sum,
....: 'B' : lambda x: np.std(x, ddof=1)})
....:
Out[93]:
A B
2000-01-01 -0.289838 NaN
2000-01-02 -0.216612 0.660747
2000-01-03 1.154661 0.689929
2000-01-04 2.969393 1.072199
2000-01-05 4.690630 0.939657
2000-01-06 3.880630 0.966848
2000-01-07 4.001957 1.240137
... ... ...
2002-09-20 2.652493 1.114814
2002-09-21 0.844497 1.113220
2002-09-22 2.860036 1.113208
2002-09-23 3.510163 1.132381
2002-09-24 6.524983 1.080963
2002-09-25 6.409626 1.082911
2002-09-26 5.093787 1.136199
The function names can also be strings. In order for a string to be valid, it must be implemented on the windowed
object:
In [94]: r.agg({'A' : 'sum', 'B' : 'std'})
Out[94]:
A B
2000-01-01 -0.289838 NaN
2000-01-02 -0.216612 0.660747
2000-01-03 1.154661 0.689929
2000-01-04 2.969393 1.072199
2000-01-05 4.690630 0.939657
2000-01-06 3.880630 0.966848
2000-01-07 4.001957 1.240137
... ... ...
2002-09-20 2.652493 1.114814
2002-09-21 0.844497 1.113220
2002-09-22 2.860036 1.113208
2002-09-23 3.510163 1.132381
2002-09-24 6.524983 1.080963
2002-09-25 6.409626 1.082911
2002-09-26 5.093787 1.136199
Furthermore you can pass a nested dict to indicate different aggregations on different columns.
In [95]: r.agg({'A' : ['sum','std'], 'B' : ['mean','std'] })
Out[95]:
A B
sum std mean std
2000-01-01 -0.289838 NaN -0.370545 NaN
14.4 Expanding Windows
A common alternative to rolling statistics is to use an expanding window, which yields the value of the statistic with
all the data available up to that point in time.
These follow a similar interface to .rolling, with the .expanding method returning an Expanding object.
As these calculations are a special case of rolling statistics, they are implemented in pandas such that the following
two calls are equivalent:
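The first of the two equivalent calls, an expanding mean expressed as a rolling window covering the whole frame, looks like this (a sketch):

# a rolling window as wide as the whole frame equals an expanding window
df.rolling(window=len(df), min_periods=1).mean()[:5]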
In [97]: df.expanding(min_periods=1).mean()[:5]
Out[97]:
A B C D
2000-01-01 0.314226 -0.001675 0.071823 0.892566
2000-01-02 0.654522 -0.171495 0.179278 0.853361
2000-01-03 0.708733 -0.064489 -0.238271 1.371111
2000-01-04 0.987613 0.163472 -0.919693 1.566485
2000-01-05 1.426971 0.288267 -1.358877 1.808650
Function Description
count() Number of non-null observations
sum() Sum of values
mean() Mean of values
median() Arithmetic median of values
min() Minimum
max() Maximum
std() Unbiased standard deviation
var() Unbiased variance
skew() Unbiased skewness (3rd moment)
kurt() Unbiased kurtosis (4th moment)
quantile() Sample quantile (value at %)
apply() Generic apply
cov() Unbiased covariance (binary)
corr() Correlation (binary)
Aside from not having a window parameter, these functions have the same interfaces as their .rolling counter-
parts. Like above, the parameters they all accept are:
• min_periods: threshold of non-null data points to require. Defaults to minimum needed to compute statistic.
No NaNs will be output once min_periods non-null data points have been seen.
• center: boolean, whether to set the labels at the center (default is False).
Note: The output of the .rolling and .expanding methods does not return a NaN if there are at least
min_periods non-null values in the current window. For example:
In [98]: sn = pd.Series([1, 2, np.nan, 3, np.nan, 4])
In [99]: sn
Out[99]:
0 1.0
1 2.0
2 NaN
3 3.0
4 NaN
5 4.0
dtype: float64
In [100]: sn.rolling(2).max()
Out[100]:
0 NaN
1 2.0
2 NaN
3 NaN
4 NaN
5 NaN
dtype: float64
In case of expanding functions, this differs from cumsum(), cumprod(), cummax(), and cummin(), which
return NaN in the output wherever a NaN is encountered in the input. In order to match the output of cumsum with
expanding, use fillna():
In [102]: sn.expanding().sum()
Out[102]:
0 1.0
1 3.0
2 3.0
3 6.0
4 6.0
5 10.0
dtype: float64
In [103]: sn.cumsum()
Out[103]:
0 1.0
1 3.0
2 NaN
3 6.0
4 NaN
5 10.0
dtype: float64
In [104]: sn.cumsum().fillna(method='ffill')
Out[104]:
0 1.0
1 3.0
2 3.0
3 6.0
4 6.0
5 10.0
dtype: float64
An expanding window statistic will be more stable (and less responsive) than its rolling window counterpart as the
increasing window size decreases the relative impact of an individual data point. As an example, here is the mean()
output for the previous time series dataset:
In [105]: s.plot(style='k--')
Out[105]: <matplotlib.axes._subplots.AxesSubplot at 0x7f210fc68518>
In [106]: s.expanding().mean().plot(style='k')
Out[106]: <matplotlib.axes._subplots.AxesSubplot at 0x7f210fc68518>
14.5 Exponentially Weighted Windows
A related set of functions consists of exponentially weighted versions of several of the above statistics. A similar interface
to .rolling and .expanding is accessed through the .ewm method to receive an EWM object. A number of
expanding EW (exponentially weighted) methods are provided:
Function Description
mean() EW moving average
var() EW moving variance
std() EW moving standard deviation
corr() EW moving correlation
cov() EW moving covariance
In general, a weighted moving average is calculated as

    y_t = \frac{\sum_{i=0}^{t} w_i x_{t-i}}{\sum_{i=0}^{t} w_i},

where x_t is the input, y_t is the result and the w_i are the weights.
The EW functions support two variants of exponential weights. The default, adjust=True, uses the weights
w_i = (1 - \alpha)^i, which gives

    y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots + (1 - \alpha)^t x_0}
               {1 + (1 - \alpha) + (1 - \alpha)^2 + \dots + (1 - \alpha)^t}.

When adjust=False is specified, moving averages are calculated recursively:

    y_0 = x_0, \qquad y_t = (1 - \alpha) y_{t-1} + \alpha x_t.

The difference between the above two variants arises because we are dealing with series which have finite history.
Consider a series of infinite history:

    y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots}
               {1 + (1 - \alpha) + (1 - \alpha)^2 + \dots}

Noting that the denominator is a geometric series with initial term equal to 1 and a ratio of 1 - \alpha, we have

    y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots}{\frac{1}{1 - (1 - \alpha)}}
        = \alpha \, (x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots),

which matches the adjust=False recursion unrolled over an infinite history, showing the equivalence of the two
variants in that limit.
One must specify precisely one of span, center of mass, half-life and alpha to the EW functions:
• Span corresponds to what is commonly called an “N-day EW moving average”.
• Center of mass has a more physical interpretation and can be thought of in terms of span: 𝑐 = (𝑠 − 1)/2.
• Half-life is the period of time for the exponential weight to reduce to one half.
• Alpha specifies the smoothing factor directly.
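These parameterizations all map to the smoothing factor \alpha; the standard conversions (stated here for reference) are:

    \alpha = \frac{2}{s + 1} \text{ for span } s \ge 1, \qquad
    \alpha = \frac{1}{1 + c} \text{ for center of mass } c \ge 0, \qquad
    \alpha = 1 - \exp\!\left(\frac{\log 0.5}{h}\right) \text{ for half-life } h > 0.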
Here is an example for a univariate time series:
In [107]: s.plot(style='k--')
Out[107]: <matplotlib.axes._subplots.AxesSubplot at 0x7f210fbff908>
In [108]: s.ewm(span=20).mean().plot(style='k')
Out[108]: <matplotlib.axes._subplots.AxesSubplot at 0x7f210fbff908>
EWM has a min_periods argument, which has the same meaning it does for all the .expanding and .rolling
methods: no output values will be set until at least min_periods non-null values are encountered in the (expanding)
window.
EWM also has an ignore_na argument, which determines how intermediate null values affect the calculation of
the weights. When ignore_na=False (the default), weights are calculated based on absolute positions, so that
intermediate null values affect the result. When ignore_na=True, weights are calculated by ignoring intermediate
null values. For example, assuming adjust=True, if ignore_na=False, the weighted average of 3, NaN, 5
would be calculated as
    \frac{(1 - \alpha)^2 \cdot 3 + 1 \cdot 5}{(1 - \alpha)^2 + 1}.
Whereas if ignore_na=True, the weighted average would be calculated as
    \frac{(1 - \alpha) \cdot 3 + 1 \cdot 5}{(1 - \alpha) + 1}.
The var(), std(), and cov() functions have a bias argument, specifying whether the result should con-
tain biased or unbiased statistics. For example, if bias=True, ewmvar(x) is calculated as ewmvar(x) =
ewma(x**2) - ewma(x)**2; whereas if bias=False (the default), the biased variance statistics are scaled
by debiasing factors
    \frac{\left(\sum_{i=0}^{t} w_i\right)^2}{\left(\sum_{i=0}^{t} w_i\right)^2 - \sum_{i=0}^{t} w_i^2}.

(For w_i = 1, this reduces to the usual N / (N - 1) factor, with N = t + 1.) See Weighted Sample Variance on
Wikipedia for further details.
FIFTEEN
WORKING WITH MISSING DATA
In this section, we will discuss missing (also referred to as NA) values in pandas.
Note: The choice of using NaN internally to denote missing data was largely for simplicity and performance reasons.
It differs from the MaskedArray approach of, for example, scikits.timeseries. We are hopeful that NumPy
will soon be able to provide a native NA type solution (similar to R) performant enough to be used in pandas.
Some might quibble over our usage of missing. By “missing” we simply mean NA (“not available”) or “not present
for whatever reason”. Many data sets simply arrive with missing data, either because it exists and was not collected or
it never existed. For example, in a collection of financial time series, some of the time series might start on different
dates. Thus, values prior to the start date would generally be marked as missing.
In pandas, one of the most common ways that missing data is introduced into a data set is by reindexing. For example:
In [1]: df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
...: columns=['one', 'two', 'three'])
...:
In [4]: df
Out[4]:
one two three four five
a -0.166778 0.501113 -0.355322 bar False
c -0.337890 0.580967 0.983801 bar False
e 0.057802 0.761948 -0.712964 bar True
f -0.443160 -0.974602 1.047704 bar False
h -0.717852 -1.053898 -0.019369 bar False
In [5]: df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
In [6]: df2
Out[6]:
As data comes in many shapes and forms, pandas aims to be flexible with regard to handling missing data. While
NaN is the default missing value marker for reasons of computational speed and convenience, we need to be able to
easily detect this value with data of different types: floating point, integer, boolean, and general object. In many cases,
however, the Python None will arise and we wish to also consider that “missing” or “not available” or “NA”.
Note: If you want to consider inf and -inf to be “NA” in computations, you can set pandas.options.mode.
use_inf_as_na = True.
To make detecting missing values easier (and across different array dtypes), pandas provides the isna() and
notna() functions, which are also methods on Series and DataFrame objects:
In [7]: df2['one']
Out[7]:
a -0.166778
b NaN
c -0.337890
d NaN
e 0.057802
f -0.443160
g NaN
h -0.717852
Name: one, dtype: float64
In [8]: pd.isna(df2['one'])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
In [9]: df2['four'].notna()
Out[9]:
a True
b False
In [10]: df2.isna()
Warning: One has to be mindful that in Python (and NumPy), NaNs don't compare equal, but Nones do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None
Out[11]: True
So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful informa-
tion.
In [13]: df2['one'] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool
15.2 Datetimes
For datetime64[ns] types, NaT represents missing values. This is a pseudo-native sentinel value that can be represented
by NumPy in a singular dtype (datetime64[ns]). pandas objects provide intercompatibility between NaT and NaN.
In [14]: df2 = df.copy()
In [16]: df2
Out[16]:
one two three four five timestamp
a -0.166778 0.501113 -0.355322 bar False 2012-01-01
c -0.337890 0.580967 0.983801 bar False 2012-01-01
e 0.057802 0.761948 -0.712964 bar True 2012-01-01
f -0.443160 -0.974602 1.047704 bar False 2012-01-01
h -0.717852 -1.053898 -0.019369 bar False 2012-01-01
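The missing entries in the next view can be introduced by assigning np.nan to a few rows of both a float column and the datetime column (the timestamp column itself having been added earlier with df2['timestamp'] = pd.Timestamp('20120101')); for example:

# NaN assigned to a datetime64 column is stored as NaT
df2.loc[['a', 'c', 'h'], ['one', 'timestamp']] = np.nan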
In [18]: df2
Out[18]:
one two three four five timestamp
a NaN 0.501113 -0.355322 bar False NaT
c NaN 0.580967 0.983801 bar False NaT
e 0.057802 0.761948 -0.712964 bar True 2012-01-01
f -0.443160 -0.974602 1.047704 bar False 2012-01-01
h NaN -1.053898 -0.019369 bar False NaT
In [19]: df2.get_dtype_counts()
Out[19]:
float64 3
object 1
bool 1
datetime64[ns] 1
dtype: int64
You can insert missing values by simply assigning to containers. The actual missing value used will be chosen based
on the dtype.
For example, numeric containers will always use NaN regardless of the missing value type chosen:
In [20]: s = pd.Series([1, 2, 3])
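Assigning None to an element of this numeric Series stores NaN, as the next view shows; for example:

s.loc[0] = None   # stored as NaN because the Series is numeric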
In [22]: s
Out[22]:
0 NaN
1 2.0
2 3.0
dtype: float64
In [26]: s
Out[26]:
0 None
1 NaN
2 c
dtype: object
Missing values propagate naturally through arithmetic operations between pandas objects.
In [27]: a
Out[27]:
one two
a NaN 0.501113
c NaN 0.580967
e 0.057802 0.761948
f -0.443160 -0.974602
h -0.443160 -1.053898
In [28]: b
In [29]: a + b
The descriptive statistics and computational methods discussed in the data structure overview (and listed here and
here) are all written to account for missing data. For example:
• When summing data, NA (missing) values will be treated as zero.
• If the data are all NA, the result will be 0.
• Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the
resulting arrays. To override this behaviour and include NA values, use skipna=False.
In [30]: df
Out[30]:
In [31]: df['one'].sum()
Out[31]: -0.38535826528461409
In [32]: df.mean(1)
Out[32]:
a 0.072895
c 0.782384
e 0.035595
f -0.123353
h -0.536633
dtype: float64
In [33]: df.cumsum()
In [34]: df.cumsum(skipna=False)
Warning: This behavior is now standard as of v0.22.0 and is consistent with the default in numpy; previously
sum/prod of all-NA or empty Series/DataFrames would return NaN. See v0.22.0 whatsnew for more.
In [35]: pd.Series([np.nan]).sum()
Out[35]: 0.0
In [36]: pd.Series([]).sum()
Out[36]: 0.0
In [37]: pd.Series([np.nan]).prod()
Out[37]: 1.0
In [38]: pd.Series([]).prod()
Out[38]: 1.0
NA groups in GroupBy are automatically excluded. This behavior is consistent with R, for example:
In [39]: df
Out[39]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e 0.057802 0.761948 -0.712964
f -0.443160 -0.974602 1.047704
h NaN -1.053898 -0.019369
In [40]: df.groupby('one').mean()
Out[40]:
two three
one
-0.443160 -0.974602 1.047704
0.057802 0.761948 -0.712964
pandas objects are equipped with various data manipulation methods for dealing with missing data.
fillna() can “fill in” NA values with non-NA data in a couple of ways, which we illustrate:
Replace NA with a scalar value
In [41]: df2
Out[41]:
one two three four five timestamp
a NaN 0.501113 -0.355322 bar False NaT
c NaN 0.580967 0.983801 bar False NaT
e 0.057802 0.761948 -0.712964 bar True 2012-01-01
f -0.443160 -0.974602 1.047704 bar False 2012-01-01
h NaN -1.053898 -0.019369 bar False NaT
In [42]: df2.fillna(0)
In [43]: df2['one'].fillna('missing')
Out[43]:
a missing
c missing
e 0.057802
f -0.44316
h missing
Name: one, dtype: object
In [44]: df
Out[44]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e 0.057802 0.761948 -0.712964
f -0.443160 -0.974602 1.047704
h NaN -1.053898 -0.019369
In [45]: df.fillna(method='pad')
In [46]: df
Out[46]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e NaN NaN NaN
f NaN NaN NaN
h NaN -1.053898 -0.019369
Method Action
pad / ffill Fill values forward
bfill / backfill Fill values backward
With time series data, using pad/ffill is extremely common so that the “last known value” is available at every time
point.
ffill() is equivalent to fillna(method='ffill') and bfill() is equivalent to
fillna(method='bfill').
You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series must match the
columns of the frame you wish to fill. The use case of this is to fill a DataFrame with the mean of that column.
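The frame used below can be set up with a few missing values, for example:

dff = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))
dff.iloc[3:5, 0] = np.nan
dff.iloc[4:6, 1] = np.nan
dff.iloc[5:8, 2] = np.nan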
In [52]: dff
Out[52]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 NaN -1.002278 0.654052
4 NaN NaN 1.053999
5 0.651981 NaN NaN
6 0.109001 -0.533294 NaN
7 -1.037831 -1.150016 NaN
8 -0.687693 1.921056 -0.121113
9 -0.258742 -0.706329 0.402547
In [53]: dff.fillna(dff.mean())
Out[53]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 -0.407125 -1.002278 0.654052
4 -0.407125 0.033067 1.053999
In [54]: dff.fillna(dff.mean()['B':'C'])
Out[54]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 NaN -1.002278 0.654052
4 NaN 0.033067 1.053999
5 0.651981 0.033067 0.238800
6 0.109001 -0.533294 0.238800
7 -1.037831 -1.150016 0.238800
8 -0.687693 1.921056 -0.121113
9 -0.258742 -0.706329 0.402547
Same result as above, but this time aligning the 'fill' value, which is a Series in this case.
In [55]: dff.where(pd.notna(dff), dff.mean(), axis='columns')
Out[55]:
A B C
0 0.758887 2.340598 0.219039
1 -1.235583 0.031785 0.701683
2 -1.557016 -0.636986 -1.238610
3 -0.407125 -1.002278 0.654052
4 -0.407125 0.033067 1.053999
5 0.651981 0.033067 0.238800
6 0.109001 -0.533294 0.238800
7 -1.037831 -1.150016 0.238800
8 -0.687693 1.921056 -0.121113
9 -0.258742 -0.706329 0.402547
You may wish to simply exclude labels from a data set which refer to missing data. To do this, use dropna():
In [56]: df
Out[56]:
one two three
a NaN 0.501113 -0.355322
c NaN 0.580967 0.983801
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -1.053898 -0.019369
In [57]: df.dropna(axis=0)
Out[57]:
Empty DataFrame
Columns: [one, two, three]
In [58]: df.dropna(axis=1)
Out[58]:
two three
a 0.501113 -0.355322
c 0.580967 0.983801
e 0.000000 0.000000
f 0.000000 0.000000
h -1.053898 -0.019369
In [59]: df['one'].dropna()
Out[59]: Series([], Name: one, dtype: float64)
An equivalent dropna() is available for Series. DataFrame.dropna has considerably more options than Se-
ries.dropna, which can be examined in the API.
15.5.4 Interpolation
In [60]: ts
Out[60]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
2000-06-30 NaN
2000-07-31 NaN
...
2007-10-31 -3.305259
2007-11-30 -5.485119
2007-12-31 -6.854968
2008-01-31 -7.809176
2008-02-29 -6.346480
2008-03-31 -8.089641
2008-04-30 -8.916232
Freq: BM, Length: 100, dtype: float64
In [61]: ts.count()
Out[61]: 61
In [62]: ts.interpolate().count()
100
In [63]: ts.interpolate().plot()
<matplotlib.axes._subplots.AxesSubplot at 0x7f20cf59ca58>
In [65]: ts2.interpolate()
2000-01-31 0.469112
2000-02-29 -2.610313
2002-07-31 -5.689738
2005-01-31 -7.302985
2008-04-30 -8.916232
dtype: float64
In [66]: ts2.interpolate(method='time')
2000-01-31 0.469112
2000-02-29 0.273272
In [67]: ser
Out[67]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64
In [68]: ser.interpolate()
Out[68]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64
In [69]: ser.interpolate(method='values')
0.0 0.0
1.0 1.0
10.0 10.0
dtype: float64
In [71]: df
Out[71]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
In [72]: df.interpolate()
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
The method argument gives access to fancier interpolation methods. If you have scipy installed, you can pass the
name of a 1-d interpolation routine to method. You’ll want to consult the full scipy interpolation documentation and
reference guide for details. The appropriate interpolation method will depend on the type of data you are working
with.
• If you are dealing with a time series that is growing at an increasing rate, method='quadratic' may be
appropriate.
• If you have values approximating a cumulative distribution function, then method='pchip' should work
well.
• To fill missing values with the goal of smooth plotting, consider method='akima'.
In [73]: df.interpolate(method='barycentric')
Out[73]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400
In [74]: df.interpolate(method='pchip')
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000
In [75]: df.interpolate(method='akima')
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
When interpolating via a polynomial or spline approximation, you must also specify the degree or order of the approx-
imation:
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [78]: np.random.seed(2)
In [80]: bad = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
In [84]: df.plot()
Out[84]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20cf573fd0>
Another use case is interpolation at new values. Suppose you have 100 observations from some distribution. And let’s
suppose that you’re particularly interested in what’s happening around the middle. You can mix pandas’ reindex
and interpolate methods to interpolate at the new values.
# interpolate at new_index
In [86]: new_index = ser.index | pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75])
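The intermediate call that produces interp_s is not shown above; presumably it is along these lines (a sketch, not the exact input):

# reindex onto the enlarged index, then interpolate on the index values (assumed call)
interp_s = ser.reindex(new_index).interpolate(method='values')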
In [88]: interp_s[49:51]
Out[88]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64
Like other pandas fill methods, interpolate() accepts a limit keyword argument. Use this argument to limit
the number of consecutive NaN values filled since the last valid observation:
In [89]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
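The command producing the output that follows is not shown; judging from the fact that exactly one NaN after each valid value is filled, it is presumably:

# fill at most one consecutive NaN in the (default) forward direction (assumed call)
ser.interpolate(limit=1)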
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 NaN
6 13.0
7 13.0
8 NaN
dtype: float64
By default, NaN values are filled in a forward direction. Use the limit_direction parameter to fill backward or
from both directions.
# fill one consecutive value backwards
In [92]: ser.interpolate(limit=1, limit_direction='backward')
Out[92]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
0 5.0
1 5.0
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
By default, NaN values are filled whether they are inside (surrounded by) existing valid values or outside existing
valid values. The limit_area parameter, introduced in v0.23, restricts filling to either inside or outside values.
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 13.0
8 13.0
dtype: float64
In [99]: ser.replace(0, 5)
Out[99]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
Instead of replacing with specified values, you can treat all given values as missing and interpolate over them:
In [104]: ser.replace([1, 2, 3], method='pad')
Out[104]:
0 0.0
1 0.0
2 0.0
3 0.0
4 4.0
dtype: float64
Note: Python strings prefixed with the r character such as r'hello world' are so-called “raw” strings. They
have different semantics regarding backslashes than strings without this prefix. Backslashes in raw strings will be
interpreted as an escaped backslash, e.g., r'\' == '\\'. You should read about them if this is unclear.
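The dict d passed below is not shown above; judging from the replacements that follow, it is presumably along these lines:

# assumed construction, consistent with the frames printed further down
d = {'a': list(range(4)),
     'b': list('ab..'),
     'c': ['a', 'b', np.nan, 'd']}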
In [106]: df = pd.DataFrame(d)
Now do it with a regular expression that removes surrounding whitespace (regex -> regex):
In [108]: df.replace(r'\s*\.\s*', np.nan, regex=True)
Out[108]:
a b c
Same as the previous example, but use a regular expression for searching instead (dict of regex -> dict):
You can pass nested dictionaries of regular expressions that use regex=True:
You can also use the group of a regular expression match when replacing (dict of regex -> dict of regex); this works
for lists as well.
In [115]: df.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True)
Out[115]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d
You can pass a list of regular expressions, of which those that match will be replaced with a scalar (list of regex ->
regex).
In [116]: df.replace([r'\s*\.\s*', r'a|b'], np.nan, regex=True)
Out[116]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
All of the regular expression examples can also be passed via the regex argument instead of to_replace. In
this case the value argument must be passed explicitly by name, or regex must be a nested dictionary. The
previous example, in this case, would then be:
In [117]: df.replace(regex=[r'\s*\.\s*', r'a|b'], value=np.nan)
Out[117]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
This can be convenient if you do not want to pass regex=True every time you want to use a regular expression.
Note: Anywhere in the above replace examples that you see a regular expression, a compiled regular expression is
valid as well.
In [123]: df[1].dtype
dtype('float64')
Warning: When replacing multiple bool or datetime64 objects, the first argument to replace
(to_replace) must match the type of the value being replaced. For example,
s = pd.Series([True, False, True])
s.replace({'a string': 'new value', True: False}) # raises
will raise a TypeError because one of the dict keys is not of the correct type for replacement.
However, when replacing a single object whose type does not match the values being replaced (for example, replacing a string in a boolean Series),
the original NDFrame object will be returned untouched. We're working on unifying this API, but for backwards
compatibility reasons we cannot break the latter behavior. See GH6354 for more details.
While pandas supports storing arrays of integer and boolean type, these types are not capable of storing missing data.
Until we can switch to using a native NA type in NumPy, we've established some “casting rules”: when a reindexing
operation introduces missing data, the Series will be cast according to the rules below.

data type    Cast to
integer      float
boolean      object
float        no cast
object       no cast

For example:
In [128]: s > 0
Out[128]:
0 True
2 True
4 True
6 True
7 True
dtype: bool
In [131]: crit
Out[131]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
In [132]: crit.dtype
dtype('O')
Ordinarily NumPy will complain if you try to use an object array (even if it contains boolean values) instead of a
boolean array to get or set values from an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:
In [134]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-134-0dac417a4890> in <module>()
----> 1 reindexed[crit]
/pandas/pandas/core/common.py in is_bool_indexer(key)
105 if not lib.is_bool_array(key):
106 if isna(key).any():
--> 107 raise ValueError('cannot index with vector containing '
108 'NA / NaN values')
109 return False
However, these can be filled in using fillna() and it will work fine:
In [135]: reindexed[crit.fillna(False)]
Out[135]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64
In [136]: reindexed[crit.fillna(True)]
Out[136]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
16 Group By: split-apply-combine
By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria.
• Applying a function to each group independently.
• Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many situations we may wish to split the data set into
groups and do something with those groups. In the apply step, we might wish to do one of the following:
• Aggregation: compute a summary statistic (or statistics) for each group. Some examples:
– Compute group sums or means.
– Compute group sizes / counts.
• Transformation: perform some group-specific computations and return a like-indexed object. Some examples:
– Standardize data (zscore) within a group.
– Filling NAs within groups with a value derived from each group.
• Filtration: discard some groups, according to a group-wise computation that evaluates True or False. Some
examples:
– Discard data that belongs to groups with only a few members.
– Filter out data based on the group sum or mean.
• Some combination of the above: GroupBy will examine the results of the apply step and try to return a sensibly
combined result if it doesn’t fit into either of the above two categories.
Since the set of object instance methods on pandas data structures is generally rich and expressive, we often simply
want to invoke, say, a DataFrame function on each group. The name GroupBy should be quite familiar to those who
have used a SQL-based tool (or itertools), in which you can write code like:
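For readers more familiar with Python than SQL, a rough itertools analogue looks like the sketch below (the records and the 'city'/'sales' names are purely hypothetical):

from itertools import groupby

data = [{'city': 'NY', 'sales': 1}, {'city': 'NY', 'sales': 2},
        {'city': 'SF', 'sales': 3}]          # hypothetical records
# itertools.groupby requires the rows to be sorted by the grouping key first
for city, grp in groupby(sorted(data, key=lambda r: r['city']),
                         key=lambda r: r['city']):
    print(city, sum(r['sales'] for r in grp))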
We aim to make operations like this natural and easy to express using pandas. We’ll address each area of GroupBy
functionality then provide some non-trivial examples / use cases.
See the cookbook for some advanced strategies.
pandas objects can be split on any of their axes. The abstract definition of grouping is to provide a mapping of labels
to group names. To create a GroupBy object (more on what the GroupBy object is later), you may do the following:
# default is axis=0
>>> grouped = obj.groupby(key)
>>> grouped = obj.groupby(key, axis=1)
>>> grouped = obj.groupby([key1, key2])
In [2]: df
Out[2]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby(). We could naturally group by either the A or B
columns, or both:
These will split the DataFrame on its index (rows). We could also split by the columns:
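For instance (a minimal sketch; the exact calls used to build the later examples are not shown here):

grouped = df.groupby('A')           # split the rows on the values of column A
grouped = df.groupby(['A', 'B'])    # split the rows on columns A and B together

# splitting along the columns instead, e.g. by the first letter of each column name
grouped = df.groupby(lambda column: column[0], axis=1)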
pandas Index objects support duplicate values. If a non-unique index is used as the group key in a groupby operation,
all values for the same index value will be considered to be in one group and thus the output of aggregation functions
will only contain unique index values:
In [10]: grouped.first()
Out[10]:
1 1
2 2
3 3
dtype: int64
In [11]: grouped.last()
Out[11]:
1 10
2 20
3 30
dtype: int64
In [12]: grouped.sum()
Out[12]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object only verifies that you’ve passed a valid
mapping.
Note: Many kinds of complicated data manipulations can be expressed in terms of GroupBy operations (though this
can't be guaranteed to be the most efficient approach). You can get quite creative with the label mapping functions.
By default the group keys are sorted during the groupby operation. You may however pass sort=False for
potential speedups:
In [13]: df2 = pd.DataFrame({'X' : ['B', 'B', 'A', 'A'], 'Y' : [1, 2, 3, 4]})
In [14]: df2.groupby(['X']).sum()
Out[14]:
Y
X
A 7
B 3
Note that groupby will preserve the order in which observations are sorted within each group. For example, the
groups created by groupby() below are in the order they appeared in the original DataFrame:
In [16]: df3 = pd.DataFrame({'X' : ['A', 'B', 'A', 'B'], 'Y' : [1, 4, 3, 2]})
In [17]: df3.groupby(['X']).get_group('A')
Out[17]:
X Y
0 A 1
2 A 3
In [18]: df3.groupby(['X']).get_group('B')
Out[18]:
X Y
1 B 4
3 B 2
The groups attribute is a dict whose keys are the computed unique groups and whose values are the axis
labels belonging to each group. In the above example we have:
In [19]: df.groupby('A').groups
Out[19]:
{'bar': Int64Index([1, 3, 5], dtype='int64'),
'foo': Int64Index([0, 2, 4, 6, 7], dtype='int64')}
Calling the standard Python len function on the GroupBy object just returns the length of the groups dict, so it is
largely just a convenience:
In [22]: grouped.groups
Out[22]:
{('bar', 'one'): Int64Index([1], dtype='int64'),
('bar', 'three'): Int64Index([3], dtype='int64'),
('bar', 'two'): Int64Index([5], dtype='int64'),
('foo', 'one'): Int64Index([0, 6], dtype='int64'),
('foo', 'three'): Int64Index([7], dtype='int64'),
('foo', 'two'): Int64Index([2, 4], dtype='int64')}
In [23]: len(grouped)
6
In [25]: gb = df.groupby('gender')
In [26]: gb.<TAB>
gb.agg        gb.boxplot    gb.cummin     gb.describe   gb.filter     gb.get_group
gb.height     gb.last       gb.median     gb.ngroups    gb.plot       gb.rank
gb.std        gb.transform
gb.aggregate  gb.count      gb.cumprod    gb.dtype      gb.first      gb.groups
gb.hist       gb.max        gb.min        gb.nth        gb.prod       gb.resample
gb.sum        gb.var
gb.apply      gb.cummax     gb.cumsum     gb.fillna     gb.gender     gb.head
gb.indices    gb.mean       gb.name       gb.ohlc       gb.quantile   gb.size
gb.tail       gb.weight
With hierarchically-indexed data, it’s quite natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [27]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
In [30]: s
Out[30]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
In [32]: grouped.sum()
Out[32]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level number:
In [33]: s.groupby(level='second').sum()
Out[33]:
second
one 0.980950
two 1.991575
dtype: float64
The aggregation functions such as sum will take the level parameter directly. Additionally, the resulting index will be
named according to the chosen level:
In [34]: s.sum(level='second')
Out[34]:
second
one 0.980950
two 1.991575
dtype: float64
In [35]: s
Out[35]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
A DataFrame may be grouped by a combination of columns and index levels by specifying the column names as
strings and the index levels as pd.Grouper objects.
In [38]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
In [41]: df
Out[41]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
The following example groups df by the second index level and the A column.
In [42]: df.groupby([pd.Grouper(level=1), 'A']).sum()
Out[42]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Once you have created the GroupBy object from a DataFrame, you might want to do something different for each of
the columns. Thus, using [] similar to getting a column from a DataFrame, you can do:
In [45]: grouped = df.groupby(['A'])
This is mainly syntactic sugar for the alternative, much more verbose approach:
In [48]: df['C'].groupby(df['A'])
Out[48]: <pandas.core.groupby.groupby.SeriesGroupBy object at 0x7f21344f5b70>
Additionally this method avoids recomputing the internal grouping information derived from the passed key.
With the GroupBy object in hand, iterating through the grouped data is very natural and functions similarly to
itertools.groupby():
In the case of grouping by multiple keys, the group name will be a tuple:
It’s standard Python-fu but remember you can unpack the tuple in the for loop statement if you wish: for (k1,
k2), group in grouped:.
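A minimal sketch of the iteration described above (using the column names of the df shown earlier):

for name, group in df.groupby('A'):
    # name is a single group label, group is the matching sub-frame
    print(name)
    print(group)

for (k1, k2), group in df.groupby(['A', 'B']):
    # with multiple keys the group name is a tuple, which can be unpacked
    print(k1, k2)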
In [52]: grouped.get_group('bar')
Out[52]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
16.4 Aggregation
Once the GroupBy object has been created, several methods are available to perform a computation on the grouped
data. These operations are similar to the aggregating API, window functions API, and resample API.
An obvious one is aggregation via the aggregate() or equivalently agg() method:
In [55]: grouped.aggregate(np.sum)
Out[55]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [57]: grouped.aggregate(np.sum)
Out[57]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the new index along the grouped axis. In
the case of multiple keys, the result is a MultiIndex by default, though this can be changed by using the as_index
option:
In [58]: grouped = df.groupby(['A', 'B'], as_index=False)
In [59]: grouped.aggregate(np.sum)
Out[59]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the same result as the column names are
stored in the resulting MultiIndex:
In [61]: df.groupby(['A', 'B']).sum().reset_index()
Out[61]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group. This is included in GroupBy as the size
method. It returns a Series whose index are the group names and whose values are the sizes of each group.
In [62]: grouped.size()
Out[62]:
A B
bar one 1
three 1
two 1
foo one 2
three 1
two 2
dtype: int64
In [63]: grouped.describe()
Out[63]:
C D
Note: Aggregation functions will not return the groups that you are aggregating over if they are named columns
when as_index=True (the default); the grouped columns will be the indices of the returned object. Passing
as_index=False will return the groups that you are aggregating over, if they are named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects. Some common aggregating
functions are tabulated below:
Function Description
mean() Compute mean of groups
sum() Compute sum of group values
size() Compute group sizes
count() Compute count of group
std() Standard deviation of groups
var() Compute variance of groups
sem() Standard error of the mean of groups
describe() Generates descriptive statistics
first() Compute first of group values
last() Compute last of group values
nth() Take nth value, or a subset if n is a list
min() Compute min of group values
max() Compute max of group values
The aggregating functions above will exclude NA values. Any function which reduces a Series to a scalar value is
an aggregation function and will work; a trivial example is df.groupby('A').agg(lambda ser: 1). Note
that nth() can act as a reducer or a filter, see here.
With grouped Series you can also pass a list or dict of functions to do aggregation with, outputting a DataFrame:
On a grouped DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
The resulting aggregations are named for the functions themselves. If you need to rename, then you can add in a
chained operation for a Series like this:
By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
The function names can also be strings. In order for a string to be valid it must be either implemented on GroupBy or
available via dispatching:
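A minimal sketch of these variants (assuming the usual np/pd imports and the df used throughout this chapter; the renamed labels 'foo'/'bar'/'baz' are purely illustrative):

grouped = df.groupby('A')

# a list of functions: one output column per function (hierarchical on a DataFrame)
grouped['C'].agg([np.sum, np.mean, np.std])

# renaming the resulting columns in a chained operation
(grouped['C'].agg([np.sum, np.mean, np.std])
             .rename(columns={'sum': 'foo', 'mean': 'bar', 'std': 'baz'}))

# a dict maps column -> aggregation; plain strings are dispatched to GroupBy methods
grouped.agg({'C': np.sum, 'D': 'std'})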
Note: If you pass a dict to aggregate, the ordering of the output columns is non-deterministic. If you want to
be sure the output columns will be in a specific order, you can use an OrderedDict. Compare the output of the
following two commands:
D C
A
bar 1.366330 0.130980
foo 0.884785 -0.359284
Some common aggregations, currently only sum, mean, std, and sem, have optimized Cython implementations:
In [73]: df.groupby('A').sum()
Out[73]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above code would work even without the special
versions via dispatching (see below).
16.5 Transformation
The transform method returns an object that is indexed the same (same size) as the one being grouped. The
transform function must:
• Return a result that is either the same size as the group chunk or broadcastable to the size of the group chunk
(e.g., a scalar, grouped.transform(lambda x: x.iloc[-1])).
• Operate column-by-column on the group chunk. The transform is applied to the first group chunk using
chunk.apply.
• Not perform in-place operations on the group chunk. Group chunks should be treated as immutable, and changes
to a group chunk may produce unexpected results. For example, when using fillna, inplace must be
False (grouped.transform(lambda x: x.fillna(inplace=False))).
• (Optionally) operates on the entire group chunk. If this is supported, a fast path is used starting from the second
chunk.
For example, suppose we wished to standardize the data within each group:
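The series being standardized is built in a few steps not all shown here; the first two are presumably along these lines:

# assumed construction of the raw series, consistent with the dates printed below
index = pd.date_range('10/1/1999', periods=1100)
ts = pd.Series(np.random.normal(0.5, 2, 1100), index)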
In [77]: ts = ts.rolling(window=100,min_periods=100).mean().dropna()
In [78]: ts.head()
Out[78]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [79]: ts.tail()
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
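The grouping key and the standardizing transform referred to above are not shown; presumably they are along these lines, which is what the checks below assume:

key = lambda x: x.year                          # group by calendar year (assumed key)
zscore = lambda x: (x - x.mean()) / x.std()     # standardize within each group
transformed = ts.groupby(key).transform(zscore)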
We would expect the result to now have mean 0 and standard deviation 1 within each group, which we can easily
check:
# Original Data
In [83]: grouped = ts.groupby(key)
In [84]: grouped.mean()
In [85]: grouped.std()
Out[85]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [86]: grouped_trans = transformed.groupby(key)
In [87]: grouped_trans.mean()
Out[87]:
2000 1.168208e-15
2001 1.454544e-15
2002 1.726657e-15
dtype: float64
In [88]: grouped_trans.std()
Out[88]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [90]: compare.plot()
Out[90]: <matplotlib.axes._subplots.AxesSubplot at 0x7f21345fcb00>
Transformation functions that have lower dimension outputs are broadcast to match the shape of the input array.
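The data_range helper used below is not shown above; presumably it is a simple per-group range:

# assumed helper: the per-group range, a scalar that is broadcast back to the group
data_range = lambda x: x.max() - x.min()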
In [92]: ts.groupby(key).transform(data_range)
Out[92]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
2000-01-13 0.623893
2000-01-14 0.623893
...
2002-09-28 0.558275
2002-09-29 0.558275
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
Another common data transform is to replace missing data with the group mean.
In [94]: data_df
Out[94]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
5 0.815643 0.367816 -0.469478
6 -0.030651 1.376106 -0.645129
.. ... ... ...
993 0.012359 0.554602 -1.976159
994 0.042312 -1.628835 1.013822
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
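The grouping key and the filling transform are not shown above; a sketch consistent with the country-labelled output below would be:

# assumed key: one country label per row of data_df
countries = np.array(['US', 'UK', 'GR', 'JP'])
key = countries[np.random.randint(0, 4, 1000)]
grouped = data_df.groupby(key)

# fill each column's NaNs with that group's column mean
transformed = grouped.transform(lambda x: x.fillna(x.mean()))

# verification: group means are unchanged and the transformed data has no NAs
grouped.mean()
transformed.groupby(key).mean()
grouped.count()                       # original non-NA counts per group
transformed.groupby(key).count()      # counts after filling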
We can verify that the group means have not changed in the transformed data and that the transformed data contains
no NAs.
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
GR 228
JP 267
UK 247
US 258
dtype: int64
Note: Some functions will automatically transform the input when applied to a GroupBy object, returning an
object of the same shape as the original. Passing as_index=False will not affect these transformation methods.
In [109]: df_re
Out[109]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
5 1 5
6 1 6
.. .. ..
13 5 13
14 5 14
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
In [110]: df_re.groupby('A').rolling(4).B.mean()
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
5 3.5
6 4.5
...
5 13 11.5
14 12.5
15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation (sum() in the example) for all the members of each
particular group.
In [111]: df_re.groupby('A').expanding().sum()
Out[111]:
A B
A
1 0 1.0 0.0
1 2.0 1.0
2 3.0 3.0
3 4.0 6.0
4 5.0 10.0
5 6.0 15.0
6 7.0 21.0
... ... ...
5 13 20.0 46.0
14 25.0 60.0
15 30.0 75.0
16 35.0 91.0
17 40.0 108.0
18 45.0 126.0
19 50.0 145.0
Suppose you want to use the resample() method to get a daily frequency in each group of your dataframe and wish
to complete the missing values with the ffill() method.
In [113]: df_re
Out[113]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [114]: df_re.groupby('group').resample('1D').ffill()
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
2016-01-08 1 5
2016-01-09 1 5
... ... ...
2 2016-01-18 2 7
2016-01-19 2 7
2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
16.6 Filtration
The filter method returns a subset of the original object. Suppose we want to take only elements that belong to
groups with a group sum greater than 2.
The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple members.
Alternatively, instead of dropping the offending groups, we can return a like-indexed object where the groups that do
not pass the filter are filled with NaNs.
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
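A minimal sketch of the filtering operations described above (the frames here are illustrative; dff mirrors the one used in the head example just below):

sf = pd.Series([1, 1, 2, 3, 3, 3])
# keep only the elements whose group sums to more than 2
sf.groupby(sf).filter(lambda x: x.sum() > 2)

dff = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})
dff['C'] = np.arange(8)

# drop groups that have only a couple of members
dff.groupby('B').filter(lambda x: len(x) > 2)

# keep the original shape, filling non-passing groups with NaN instead
dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)

# with multiple columns, make the criterion explicit about which column it uses
dff.groupby('B').filter(lambda x: len(x['C']) > 2)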
Note: Some functions when applied to a groupby object will act as a filter on the input, returning a reduced shape of
the original (and potentially eliminating groups), but with the index unchanged. Passing as_index=False will not
affect these transformation methods.
For example: head, tail.
In [122]: dff.groupby('B').head(2)
Out[122]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
When doing an aggregation or transformation, you might just want to call an instance method on each data group.
This is pretty easy to do by passing lambda functions:
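For example, computing the standard deviation of each group via a lambda (a sketch using the df from earlier in the chapter):

grouped = df.groupby('A')
grouped.agg(lambda x: x.std())   # call .std() on each group's data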
But, it’s rather verbose and can be untidy if you need to pass additional arguments. Using a bit of metaprogramming
cleverness, GroupBy now has the ability to “dispatch” method calls to the groups:
In [125]: grouped.std()
Out[125]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being generated. When invoked, it takes any passed
arguments and invokes the function with any arguments on each group (in the above example, the std function). The
results are then combined together much in the style of agg and transform (it actually uses apply to infer the
gluing, documented next). This enables some operations to be carried out rather succinctly:
In [129]: grouped.fillna(method='pad')
Out[129]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
2000-01-06 0.030091 0.186460 -0.680149
2000-01-07 0.030091 0.186460 -0.680149
... ... ... ...
2002-09-20 2.310215 0.157482 -0.064476
2002-09-21 2.310215 0.157482 -0.064476
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
In this example, we chopped the collection of time series into yearly chunks then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [131]: g = pd.Series(list('abababab'))
In [132]: gb = s.groupby(g)
In [133]: gb.nlargest(3)
Out[133]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [134]: gb.nsmallest(3)
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Some operations on the grouped data might not fit into either the aggregate or transform categories. Or, you may simply
want GroupBy to infer how to combine the results. For these, use the apply function, which can be substituted for
both aggregate and transform in many standard use cases. However, apply can handle some exceptional use
cases, for example:
In [135]: df
Out[135]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
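The grouping and the function f applied below are not shown; the 'original'/'demeaned' output suggests something along these lines:

# assumed setup: group column C by A and demean within each group
grouped = df.groupby('A')['C']

def f(group):
    # return both the original values and the values demeaned by the group mean
    return pd.DataFrame({'original': group,
                         'demeaned': group - group.mean()})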
In [140]: grouped.apply(f)
Out[140]:
original demeaned
0 -0.575247 -0.215962
1 0.254161 0.123181
2 -1.143704 -0.784420
3 0.215897 0.084917
4 1.193555 1.552839
5 -0.077118 -0.208098
6 -0.408530 -0.049245
7 -0.862495 -0.503211
apply on a Series can operate on a returned value from the applied function that is itself a Series, and possibly upcast
the result to a DataFrame:
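The applied function here is not shown either; judging from the x / x^2 columns below, it is presumably along these lines:

def f(x):
    # returning a Series per element upcasts the overall result to a DataFrame (assumed)
    return pd.Series([x, x ** 2], index=['x', 'x^2'])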
In [142]: s
Out[142]:
0 9.0
1 8.0
2 7.0
In [143]: s.apply(f)
x x^2
0 9.0 81.00
1 8.0 64.00
2 7.0 49.00
3 5.0 25.00
4 19.0 361.00
5 1.0 1.00
6 4.2 17.64
7 3.3 10.89
Note: apply can act as a reducer, transformer, or filter function, depending on exactly what is passed to it and on
what you are grouping. Depending on the path taken, the grouped column(s) may be included in the output, and the
indices may be set.
Warning: In the current implementation apply calls func twice on the first group to decide whether it can take a
fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take effect twice
for the first group.
In [144]: d = pd.DataFrame({"a":["x", "y"], "b":[1,2]})
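The identity helper invoked below is not shown above; presumably it simply prints each group and returns it unchanged, which is what makes the double call on the first group visible:

def identity(df):
    # print the group so the side effect of each call is visible (assumed helper)
    print(df)
    return df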
In [146]: d.groupby("a").apply(identity)
a b
0 x 1
a b
0 x 1
a b
1 y 2
Out[146]:
a b
0 x 1
1 y 2
In [147]: df
Out[147]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A column. There is a slight problem, namely that
we don’t care about the data in column B. We refer to this as a “nuisance” column. If the passed aggregation function
can’t be applied to some columns, the troublesome columns will be (silently) dropped. Thus, this does not pose any
problems:
In [148]: df.groupby('A').std()
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible grouper values (observed=False) or only those that
are observed (observed=True).
Show all values:
Out[149]:
a 3
b 0
dtype: int64
Out[150]:
The returned dtype of the grouped values will always include all of the categories that were grouped.
In [152]: s.index.dtype
Out[152]: CategoricalDtype(categories=['a', 'b'], ordered=False)
If there are any NaN or NaT values in the grouping key, these will be automatically excluded. In other words, there will
never be an “NA group” or “NaT group”. This was not the case in older versions of pandas, but users were generally
discarding the NA group anyway (and supporting it was an implementation headache).
Categorical variables represented as instances of pandas' Categorical class can be used as group keys. If so, the
order of the levels will be preserved:
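A sketch of data grouped by an ordered Categorical of quantile bins, consistent with the interval labels shown below:

data = pd.Series(np.random.randn(100))
factor = pd.qcut(data, [0, .25, .5, .75, 1.])   # ordered Categorical of quartile bins
data.groupby(factor).mean()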
In [155]: data.groupby(factor).mean()
Out[155]:
(-2.618, -0.684] -1.331461
(-0.684, -0.0232] -0.272816
(-0.0232, 0.541] 0.263607
(0.541, 2.369] 1.166038
dtype: float64
You may need to specify a bit more data to properly group. You can use the pd.Grouper to provide this local
control.
In [157]: df = pd.DataFrame({
.....: 'Branch' : 'A A A A A A A B'.split(),
.....: 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(),
.....: 'Quantity': [1,3,5,1,8,1,9,3],
.....: 'Date' : [
.....: datetime.datetime(2013,1,1,13,0),
.....: datetime.datetime(2013,1,1,13,5),
.....: datetime.datetime(2013,10,1,20,0),
.....: datetime.datetime(2013,10,2,10,0),
.....: datetime.datetime(2013,10,1,20,0),
.....: datetime.datetime(2013,10,2,10,0),
.....: datetime.datetime(2013,12,2,12,0),
.....: datetime.datetime(2013,12,2,14,0)]})
.....:
In [158]: df
Out[158]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [159]: df.groupby([pd.Grouper(freq='1M',key='Date'),'Buyer']).sum()
Out[159]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column that could be potential groupers.
In [160]: df = df.set_index('Date')
In [162]: df.groupby([pd.Grouper(freq='6M',key='Date'),'Buyer']).sum()
Out[162]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [163]: df.groupby([pd.Grouper(freq='6M',level='Date'),'Buyer']).sum()
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [165]: df
Out[165]:
A B
0 1 2
1 1 4
2 5 6
In [166]: g = df.groupby('A')
In [167]: g.head(1)
Out[167]:
A B
0 1 2
2 5 6
In [168]: g.tail(1)
Out[168]:
A B
1 1 4
2 5 6
To select from a DataFrame or Series the nth item, use nth(). This is a reduction method, and will return a single
row (or no row) per group if you pass an int for n:
In [170]: g = df.groupby('A')
In [171]: g.nth(0)
Out[171]:
B
A
1 NaN
5 6.0
In [172]: g.nth(-1)
Out[172]:
B
A
1 4.0
5 6.0
In [173]: g.nth(1)
Out[173]:
B
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or
'all' just like you would pass to dropna:
In [175]: g.first()
Out[175]:
B
A
1 4.0
5 6.0
B
A
1 4.0
5 6.0
In [177]: g.last()
B
A
1 4.0
5 6.0
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [180]: g = df.groupby('A',as_index=False)
In [181]: g.nth(0)
Out[181]:
A B
0 1 NaN
In [182]: g.nth(-1)
Out[182]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [183]: business_dates = pd.date_range(start='4/1/2014', end='6/30/2014', freq='B')
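The frame being grouped below is not shown; presumably it is a constant frame indexed by those business dates, e.g.:

# assumed construction, consistent with the all-ones output below
df = pd.DataFrame(1, index=business_dates, columns=['a', 'b'])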
# get the first, 4th, and last date index for each month
In [185]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[185]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
To see the order in which each row appears within its group, use the cumcount method:
In [186]: dfg = pd.DataFrame(list('aaabba'), columns=['A'])
In [187]: dfg
Out[187]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [188]: dfg.groupby('A').cumcount()
Out[188]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [189]: dfg.groupby('A').cumcount(ascending=False)
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
In [191]: dfg
Out[191]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [192]: dfg.groupby('A').ngroup()
Out[192]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [193]: dfg.groupby('A').ngroup(ascending=False)
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
16.9.10 Plotting
Groupby also works with some plotting methods. For example, suppose we suspect that some features in a DataFrame
may differ by group, in this case, the values in column 1 where the group is “B” are 3 higher on average.
In [194]: np.random.seed(1234)
In [198]: df.groupby('g').boxplot()
Out[198]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values of our grouping column g (“A” and “B”).
The values of the resulting dictionary can be controlled by the return_type keyword of boxplot. See the
visualization documentation for more.
In [200]: n = 1000
In [202]: df.head(2)
Out[202]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Piping can also be expressive when you want to deliver a grouped object to some arbitrary function, for example:
(df.groupby(['Store', 'Product']).pipe(report_func))
where report_func takes a GroupBy object and creates a report from that.
16.10 Examples
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [204]: df = pd.DataFrame({'a':[1,0,0], 'b':[0,1,0], 'c':[1,0,0], 'd':[2,3,4]})
In [205]: df
Out[205]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
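A sketch of the regrouping described above: map each column to its column sum and aggregate along axis=1 (here columns a, b and c each sum to 1 and collapse into a single column, while d stays on its own):

# group the columns by their column sums, then sum within each resulting group
df.groupby(df.sum(), axis=1).sum()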
By using ngroup(), we can extract information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies naturally to multiple columns of mixed type and different sources.
This can be useful as an intermediate categorical-like step in processing, when the relationships between the group
rows are more important than their content, or as input to an algorithm which only accepts the integer encoding.
(For more information about support in pandas for full categorical data, see the Categorical introduction and the API
documentation.)
In [207]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [208]: dfg
Out[208]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that
generates data. These new samples are similar to the pre-existing samples.
In order for resample to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns a binary array which is used to determine what gets selected for the
groupby operation.
Note: The example below shows how we can downsample by consolidating samples into fewer ones. Here, by
using df.index // 5, we aggregate the samples into bins. By applying the std() function, we condense the information
contained in many samples into a small subset of values (their standard deviation), thereby reducing the number
of samples.
In [211]: df = pd.DataFrame(np.random.randn(10,2))
In [212]: df
Out[212]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [213]: df.index // 5
Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Group DataFrame columns, compute a set of metrics and return a named Series. The Series name is used as the name
for the column index. This is especially useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [215]: df = pd.DataFrame({
.....: 'a': [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: 'b': [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: 'c': [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: 'd': [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: })
.....:
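The grouping step producing result is not shown; a sketch consistent with the b_sum / c_mean columns below would be:

# assumed metrics function: returning a *named* Series names the column index
def compute_metrics(x):
    result = {'b_sum': x['b'].sum(), 'c_mean': x['c'].mean()}
    return pd.Series(result, name='metrics')

result = df.groupby('a').apply(compute_metrics)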
In [218]: result
Out[218]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [219]: result.stack()
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
17 Merge, Join, and Concatenate
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various
kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
The concat() function (in the main pandas namespace) does all of the heavy lifting of performing concatenation
operations along an axis while performing optional set logic (union or intersection) of the indexes (if any) on the other
axes. Note that I say “if any” because there is only a single possible axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is a simple example:
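A simplified sketch of such an example (the frame contents are illustrative; the full example uses three pieces and four columns):

df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3']},
                   index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
                    'B': ['B4', 'B5', 'B6', 'B7']},
                   index=[4, 5, 6, 7])

result = pd.concat([df1, df2])
# labelling each piece builds a hierarchical index, used in result.loc['y'] below
result = pd.concat([df1, df2], keys=['x', 'y'])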
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat takes a list or dict of
homogeneously-typed objects and concatenates them with some configurable handling of “what to do with the other
axes”:
• objs : a sequence or mapping of Series, DataFrame, or Panel objects. If a dict is passed, the sorted keys will be
used as the keys argument, unless it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a ValueError will be raised.
• axis : {0, 1, . . . }, default 0. The axis to concatenate along.
• join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on other axis(es). Outer for union and inner
for intersection.
• ignore_index : boolean, default False. If True, do not use the index values on the concatenation axis. The
resulting axis will be labeled 0, . . . , n - 1. This is useful if you are concatenating objects where the concatenation
axis does not have meaningful indexing information. Note the index values on the other axes are still respected
in the join.
• join_axes : list of Index objects. Specific indexes to use for the other n - 1 axes instead of performing
inner/outer set logic.
• keys : sequence, default None. Construct hierarchical index using the passed keys as the outermost level. If
multiple levels passed, should contain tuples.
• levels : list of sequences, default None. Specific levels (unique values) to use for constructing a MultiIndex.
Otherwise they will be inferred from the keys.
• names : list, default None. Names for the levels in the resulting hierarchical index.
• verify_integrity : boolean, default False. Check whether the new concatenated axis contains duplicates.
This can be very expensive relative to the actual data concatenation.
As you can see (if you've read the rest of the documentation), the resulting object has a hierarchical index.
This means that we can now select out each chunk by key:
In [7]: result.loc['y']
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this functionality below.
Note: It is worth noting that concat() (and therefore append()) makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need to use the operation over several datasets,
use a list comprehension.
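For instance, build the list of pieces first and concatenate once (process_your_file and files are placeholders for whatever produces each piece):

# placeholders: each call returns one DataFrame to be concatenated
frames = [process_your_file(f) for f in files]
result = pd.concat(frames)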
When gluing together multiple DataFrames, you have a choice of how to handle the other axes (other than the one
being concatenated). This can be done in the following three ways:
• Take the union of them all, join='outer'. This is the default option as it results in zero information loss.
• Take the intersection, join='inner'.
• Use a specific index, as passed to the join_axes argument.
Here is an example of each of these methods. First, the default join='outer' behavior:
Lastly, suppose we just wanted to reuse the exact index from the original DataFrame:
A useful shortcut to concat() are the append() instance methods on Series and DataFrame. These methods
actually predated concat. They concatenate along axis=0, namely the index:
In the case of DataFrame, the indexes must be disjoint but the columns do not need to be:
Note: Unlike the list.append() method, which appends to the original list and returns None, append() here does
not modify df1 and returns its copy with df2 appended.
For DataFrame s which don’t have a meaningful index, you may wish to append them and ignore the fact that they
may have overlapping indexes. To do this, use the ignore_index argument:
You can concatenate a mix of Series and DataFrame s. The Series will be transformed to DataFrame with
the column name as the name of the Series.
Note: Since we’re concatenating a Series to a DataFrame, we could have achieved the same result with
DataFrame.assign(). To concatenate an arbitrary number of pandas objects (DataFrame or Series), use
concat.
A fairly common use of the keys argument is to override the column names when creating a new DataFrame based
on existing Series. Notice how the default behaviour is to let the resulting DataFrame inherit the parent
Series' name, when one exists.
Through the keys argument we can override the existing column names.
You can also pass a dict to concat in which case the dict keys will be used for the keys argument (unless other keys
are specified):
The MultiIndex created has levels that are constructed from the passed keys and the index of the DataFrame pieces:
In [31]: result.index.levels
Out[31]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can do so using the levels argument:
In [33]: result.index.levels
Out[33]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things like GroupBy where the order of a categorical
variable is meaningful.
While not especially efficient (since a new object must be created), you can append a single row to a DataFrame by
passing a Series or dict to append, which returns a new DataFrame as above.
You should use ignore_index with this method to instruct DataFrame to discard its index. If you wish to preserve
the index, you should construct an appropriately-indexed DataFrame and append or concatenate those objects.
You can also pass a list of dicts or Series:
pandas has full-featured, high performance in-memory join operations idiomatically very similar to relational
databases like SQL. These methods perform significantly better (in some cases well over an order of magnitude better)
than other open source implementations (like base::merge.data.frame in R). The reason for this is careful
algorithmic design and the internal layout of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a comparison with SQL.
pandas provides a single function, merge(), as the entry point for all standard database join operations between
DataFrame objects:
Note: Support for specifying index levels as the on, left_on, and right_on parameters was added in version
0.23.0.
The return type will be the same as left. If left is a DataFrame and right is a subclass of DataFrame, the
return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a DataFrame instance method merge(),
with the calling DataFrame being implicitly considered the left object in the join.
The related join() method uses merge internally for the index-on-index (by default) and column(s)-on-index join.
If you are joining on index only, you may wish to use DataFrame.join to save yourself some typing.
Experienced users of relational databases like SQL will be familiar with the terminology used to describe join oper-
ations between two SQL-table like structures (DataFrame objects). There are several cases to consider which are
very important to understand:
• one-to-one joins: for example when joining two DataFrame objects on their indexes (which must contain
unique values).
• many-to-one joins: for example when joining an index (unique) to one or more columns in a different
DataFrame.
• many-to-many joins: joining columns on columns.
Note: When joining columns on columns (potentially a many-to-many join), any indexes on the passed DataFrame
objects will be discarded.
It is worth spending some time understanding the result of the many-to-many join case. In SQL / standard relational
algebra, if a key combination appears more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique key combination:
Here is a more complicated example with multiple join keys. Only the keys appearing in left and right are present
(the intersection), since how='inner' by default.
The how argument to merge specifies how to determine which keys are to be included in the resulting table. If a
key combination does not appear in either the left or right tables, the values in the joined table will be NA. Here is a
summary of the how options and their SQL equivalent names:
Warning: Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row
dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in
keys before joining large DataFrames.
If the user is aware of the duplicates in the right DataFrame but wants to ensure there are no duplicates in the left
DataFrame, one can use the validate='one_to_many' argument instead, which will not raise an exception.
merge() accepts the argument indicator. If True, a Categorical-type column called _merge will be added to
the output object that takes on the value left_only, right_only, or both, indicating whether each row's merge key was
found only in the left frame, only in the right frame, or in both.
The indicator argument will also accept string arguments, in which case the indicator function will use the value
of the passed string as the name for the indicator column.
In [59]: left
Out[59]:
key v1
0 1 10
In [61]: right
Out[61]:
key v1
0 1 20
1 2 30
Of course if you have missing values that are introduced, then the resulting dtype will be upcast.
In [70]: left
Out[70]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [71]: left.dtypes
X category
Y object
dtype: object
In [73]: right
Out[73]:
X Z
0 foo 1
1 bar 2
In [74]: right.dtypes
Out[74]:
X category
Z int64
dtype: object
In [76]: result
Out[76]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [77]: result.dtypes
X category
Y object
Z int64
dtype: object
Note: The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to object dtype.
Note: Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
DataFrame.join() is a convenient method for combining the columns of two potentially differently-indexed
DataFrames into a single result DataFrame. Here is a very basic example:
The data alignment here is on the indexes (row labels). This same behavior can be achieved using merge plus
additional arguments instructing it to use the indexes:
join() takes an optional on argument which may be a column or multiple column names, which specifies that the
passed DataFrame is to be aligned on that column in the DataFrame. These two function calls are completely
equivalent:
left.join(right, on=key_or_keys)
pd.merge(left, right, left_on=key_or_keys, right_index=True,
how='left', sort=False)
Obviously you can choose whichever form you find more convenient. For many-to-one joins (where one of the
DataFrames is already indexed by the join key), using join may be more convenient. Here is a simple example:
Now this can be joined by passing the two key column names:
In [92]: result = left.join(right, on=['key1', 'key2'])
The default for DataFrame.join is to perform a left join (essentially a “VLOOKUP” operation, for Excel users), which
uses only the keys found in the calling DataFrame. Other join types, for example inner join, can be just as easily
performed:
As you can see, this drops any rows where there was no match.
You can join a singly-indexed DataFrame with a level of a multi-indexed DataFrame. The level will match on the
name of the index of the singly-indexed frame against a level name of the multi-indexed frame.
This is equivalent but less verbose and more memory efficient / faster than this.
This is not implemented via join at the moment; however, it can be done using the following code.
Note: When DataFrames are merged on a string that matches an index level in both frames, the index level is
preserved as an index level in the resulting DataFrame.
Note: If a string matches both a column name and an index level name, then a warning is issued and the column takes
precedence. This will result in an ambiguity error in a future version.
The merge suffixes argument takes a tuple or list of strings to append to overlapping column names in the input
DataFrames to disambiguate the result columns:
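A minimal sketch (the frames here are illustrative):
import pandas as pd

left = pd.DataFrame({'k': ['K0', 'K1', 'K2'], 'v': [1, 2, 3]})
right = pd.DataFrame({'k': ['K0', 'K0', 'K3'], 'v': [4, 5, 6]})

# the overlapping 'v' columns come out as 'v_l' and 'v_r'
pd.merge(left, right, on='k', suffixes=['_l', '_r'])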
A list or tuple of DataFrames can also be passed to join() to join them together on their indexes.
Another fairly common situation is to have two like-indexed (or similarly indexed) Series or DataFrame objects
and wanting to “patch” values in one object from values for matching indices in the other. Here is an example:
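A minimal sketch of such patching using combine_first() (the frames here are illustrative):
import numpy as np
import pandas as pd

df1 = pd.DataFrame([[np.nan, 3.0, 5.0],
                    [-4.6, np.nan, np.nan],
                    [np.nan, 7.0, np.nan]])
df2 = pd.DataFrame([[-42.6, np.nan, -8.2],
                    [-5.0, 1.6, 4.0]], index=[1, 2])

# take values from df2 only where df1 is missing
df1.combine_first(df2)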
Note that this method only takes values from the right DataFrame if they are missing in the left DataFrame. A
related method, update(), alters non-NA values inplace:
In [119]: df1.update(df2)
A merge_ordered() function allows combining time series and other ordered data. In particular it has an optional
fill_method keyword to fill/interpolate missing data:
In [125]: trades
Out[125]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [126]: quotes
We only asof within 2ms between the quote time and the trade time.
We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time. Note
that though we exclude the exact matches (of the quotes), prior quotes do propagate to that point in time.
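A sketch of both asof merges just described, using small stand-in trades/quotes frames (the data here is illustrative; the frames shown above would be used in the same way):
import pandas as pd

trades = pd.DataFrame({
    'time': pd.to_datetime(['2016-05-25 13:30:00.023',
                            '2016-05-25 13:30:00.048']),
    'ticker': ['MSFT', 'GOOG'],
    'price': [51.95, 720.77]})
quotes = pd.DataFrame({
    'time': pd.to_datetime(['2016-05-25 13:30:00.023',
                            '2016-05-25 13:30:00.047']),
    'ticker': ['MSFT', 'GOOG'],
    'bid': [51.95, 720.50]})

# only match quotes within 2ms of the trade time
pd.merge_asof(trades, quotes, on='time', by='ticker',
              tolerance=pd.Timedelta('2ms'))

# within 10ms, excluding exact matches on time; earlier quotes still propagate forward
pd.merge_asof(trades, quotes, on='time', by='ticker',
              tolerance=pd.Timedelta('10ms'), allow_exact_matches=False)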
CHAPTER EIGHTEEN: RESHAPING AND PIVOT TABLES
Data is often stored in CSV files or databases in so-called “stacked” or “record” format:
In [1]: df
Out[1]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804
For the curious, here is one way such a DataFrame can be created:
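A minimal sketch (the values are generated randomly, so they will differ from those shown above):
import numpy as np
import pandas as pd

dates = pd.date_range('2000-01-03', periods=3)
df = pd.DataFrame({'date': np.tile(np.asarray(dates), 4),
                   'variable': np.repeat(list('ABCD'), 3),
                   'value': np.random.randn(12)},
                  columns=['date', 'variable', 'value'])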
But suppose we wish to do time series operations with the variables. A better representation would be where the
columns are the unique variables and an index of dates identifies individual observations. To reshape the data into
this form, we use the DataFrame.pivot() method (also implemented as a top level function pivot()):
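A sketch of the call, assuming a frame df with the date, variable and value columns shown above:
# one column per variable, indexed by date
pivoted = df.pivot(index='date', columns='variable', values='value')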
If the values argument is omitted, and the input DataFrame has more than one column of values which are not
used as column or index inputs to pivot, then the resulting “pivoted” DataFrame will have hierarchical columns
whose topmost level indicates the respective value column:
In [6]: pivoted
Out[6]:
value value2
˓→
variable A B C D A B C
˓→ D
date
˓→
In [7]: pivoted['value2']
Out[7]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138
2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608
Note that this returns a view on the underlying data in the case where the data are homogeneously-typed.
Closely related to the pivot() method are the related stack() and unstack() methods available on Series
and DataFrame. These methods are designed to work together with MultiIndex objects (see the section on
hierarchical indexing). Here are essentially what these methods do:
• stack: “pivot” a level of the (possibly hierarchical) column labels, returning a DataFrame with an index
with a new inner-most level of row labels.
• unstack: (inverse operation of stack) “pivot” a level of the (possibly hierarchical) row index to the column
axis, producing a reshaped DataFrame with a new inner-most level of column labels.
The clearest way to explain is by example. Let’s take a prior example data set from the hierarchical indexing section:
In [12]: df2
Out[12]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
The stack function “compresses” a level in the DataFrame’s columns to produce either:
• A Series, in the case of a simple column Index.
• A DataFrame, in the case of a MultiIndex in the columns.
If the columns have a MultiIndex, you can choose which level to stack. The stacked level becomes the new lowest
level in a MultiIndex on the columns:
In [14]: stacked
Out[14]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack
is unstack, which by default unstacks the last level:
In [15]: stacked.unstack()
Out[15]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
In [16]: stacked.unstack(1)
Out[16]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
In [17]: stacked.unstack(0)
Out[17]:
first bar baz
second
one A 0.721555 -0.424972
B -0.706771 0.567020
two A -1.039575 0.276232
B 0.271860 -1.087401
If the indexes have names, you can use the level names instead of specifying the level numbers:
In [18]: stacked.unstack('second')
Out[18]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
Notice that the stack and unstack methods implicitly sort the index levels involved. Hence a call to stack and
then unstack, or vice versa, will result in a sorted copy of the original DataFrame or Series:
In [21]: df
Out[21]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885
The above code will raise a TypeError if the call to sort_index is removed.
You may also stack or unstack more than one level at a time by passing a list of levels, in which case the end result is
as if each level in the list were processed individually.
In [25]: df
Out[25]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
The list of levels can contain either level names or level numbers (but not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
# from above is equivalent to:
In [27]: df.stack(level=[1, 2])
Out[27]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
These functions are intelligent about handling missing data and do not expect each subgroup within the hierarchical
index to have the same set of labels. They also can handle the index being unsorted (but you can make it sorted by
calling sort_index, of course). Here is a more complex example:
In [28]: columns = pd.MultiIndex.from_tuples([('A', 'cat'), ('B', 'dog'),
....: ('B', 'cat'), ('A', 'dog')],
....: names=['exp', 'animal'])
In [32]: df2
Out[32]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707
As mentioned above, stack can be called with a level argument to select which level in the columns to stack:
In [33]: df2.stack('exp')
Out[33]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804
In [34]: df2.stack('animal')
Out[34]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
two cat 0.875906 0.974466
dog -2.006747 -2.211372
Unstacking can result in missing values if subgroups do not have the same set of labels. By default, missing values
will be replaced with the default fill value for that data type: NaN for float, NaT for datetimelike, etc. For integer types,
the data will by default be converted to float and missing values will be set to NaN.
In [36]: df3
Out[36]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247
In [37]: df3.unstack()
Out[37]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247
In [38]: df3.unstack(fill_value=-1e9)
Out[38]:
exp B
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00
Unstacking when the columns are a MultiIndex is also careful about doing the right thing:
In [39]: df[:3].unstack(0)
Out[39]:
exp A B A
animal cat dog cat dog
first bar baz bar baz bar baz bar baz
second
one 0.895717 0.410835 0.805244 0.81385 -1.206412 0.132003 2.565646 -0.827317
In [40]: df2.unstack(1)
Out[40]:
exp A B A
animal cat dog cat dog
second one two one two one two one two
first
bar 0.895717 1.431256 0.805244 1.340309 -1.206412 -1.170299 2.565646 -0.226169
baz 0.410835 NaN 0.813850 NaN 0.132003 NaN -0.827317 NaN
foo -1.413681 0.875906 1.607920 -2.211372 1.024180 0.974466 0.569605 -2.006747
qux NaN -1.226825 NaN 0.769804 NaN -1.281247 NaN -0.727707
The top-level melt() function and the corresponding DataFrame.melt() are useful to massage a DataFrame
into a format where one or more columns are identifier variables, while all other columns, considered measured
variables, are “unpivoted” to the row axis, leaving just two non-identifier columns, “variable” and “value”. The names
of those columns can be customized by supplying the var_name and value_name parameters.
For instance,
In [41]: cheese = pd.DataFrame({'first' : ['John', 'Mary'],
....: 'last' : ['Doe', 'Bo'],
....: 'height' : [5.5, 6.0],
....: 'weight' : [130, 150]})
....:
In [42]: cheese
Out[42]:
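A sketch of melting the cheese frame just constructed (var_name is shown as an optional variation):
# 'first' and 'last' stay as identifier columns; height and weight are unpivoted
cheese.melt(id_vars=['first', 'last'])

# customize the name of the unpivoted "variable" column
cheese.melt(id_vars=['first', 'last'], var_name='quantity')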
Another way to transform is to use the wide_to_long() panel data convenience function. It is less flexible than
melt(), but more user-friendly.
In [47]: dft
Out[47]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
2 c f 0.7 0.1 0.695775 2
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1
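A sketch of the call that produces a long-format result like the one above from the dft frame: 'A' and 'B' are the stub names, 'id' identifies each row, and 'year' names the new sub-observation level:
pd.wide_to_long(dft, ['A', 'B'], i='id', j='year')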
It should be no shock that combining pivot / stack / unstack with GroupBy and the basic Series and DataFrame
statistical functions can produce some very expressive and fast data manipulations.
In [49]: df
Out[49]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707
In [50]: df.stack().mean(1).unstack()
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
˓→
In [52]: df.stack().groupby(level=1).mean()
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
˓→
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486
In [53]: df.mean().unstack(0)
Out[53]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430
While pivot() provides general purpose pivoting with various data types (strings, numerics, etc.), pandas also
provides pivot_table() for pivoting with aggregation of numeric data.
The function pivot_table() can be used to create spreadsheet-style pivot tables. See the cookbook for some
advanced strategies.
It takes a number of arguments:
• data: a DataFrame object.
• values: a column or a list of columns to aggregate.
• index: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on the
pivot table index. If an array is passed, it is used in the same manner as the column values.
• columns: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on
the pivot table column. If an array is passed, it is used in the same manner as the column values.
• aggfunc: function to use for aggregation, defaulting to numpy.mean.
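A minimal sketch of a pivot table (the frame and column names here are illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
                   'B': ['A', 'B', 'C'] * 4,
                   'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D': np.random.randn(12)})

# mean of D for each (A, B) row group and each C column group
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])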
Consider a data set like this:
In [54]: import datetime
....:
In [56]: df
Out[56]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
5 one C bar -0.732339 -2.182937 2013-06-01
6 two A foo 0.687738 0.380396 2013-07-01
.. ... .. ... ... ... ...
17 one C bar -0.345352 0.206053 2013-06-15
18 two A foo 1.314232 -0.251905 2013-07-15
The result object is a DataFrame having potentially hierarchical indexes on the rows and columns. If the values
column name is not given, the pivot table will include all of the data that can be aggregated in an additional level of
hierarchy in the columns:
Also, you can use Grouper for index and columns keywords. For detail of Grouper, see Grouping with a
Grouper specification.
Out[61]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759
2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN
You can render a nice output of the table omitting the missing values by calling to_string if you wish:
In [63]: print(table.to_string(na_rep=''))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
Note that pivot_table is also available as an instance method on DataFrame, i.e. DataFrame.pivot_table().
If you pass margins=True to pivot_table, special All columns and rows will be added with partial group
aggregates across the categories on the rows and columns:
In [64]: df.pivot_table(index=['A', 'B'], columns='C', margins=True, aggfunc=np.std)
Out[64]:
D E
C bar foo All bar foo All
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389
Use crosstab() to compute a cross-tabulation of two (or more) factors. By default crosstab computes a fre-
quency table of the factors unless an array of values and an aggregation function are passed.
It takes a number of arguments
• index: array-like, values to group by in the rows.
• columns: array-like, values to group by in the columns.
• values: array-like, optional, array of values to aggregate according to the factors.
• aggfunc: function, optional, If no values array is passed, computes a frequency table.
• rownames: sequence, default None, must match number of row arrays passed.
• colnames: sequence, default None, if passed, must match number of column arrays passed.
• margins: boolean, default False, Add row/column margins (subtotals)
• normalize: boolean, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False. Normalize by dividing all values
by the sum of values.
Any Series passed will have their name attributes used unless row or column names for the cross-tabulation are
specified.
For example:
In [65]: foo, bar, dull, shiny, one, two = 'foo', 'bar', 'dull', 'shiny', 'one', 'two'
In [71]: df
Out[71]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0
B 3 4
A
1 1 0
2 1 3
Any input passed containing Categorical data will have all of its categories included in the cross-tabulation, even
if the actual data does not contain any instances of a particular category.
In [73]: foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])
18.6.1 Normalization
normalize can also normalize values within each row or within each column:
crosstab can also be passed a third Series and an aggregation function (aggfunc) that will be applied to the
values of the third Series within each group defined by the first two Series:
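A minimal sketch (the arrays here are illustrative):
import numpy as np
import pandas as pd

a = np.array(['foo', 'foo', 'bar', 'bar', 'foo'], dtype=object)
b = np.array(['one', 'two', 'one', 'two', 'one'], dtype=object)
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# frequency table of a by b
pd.crosstab(a, b, rownames=['a'], colnames=['b'])

# mean of c within each (a, b) cell
pd.crosstab(a, b, values=c, aggfunc=np.mean)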
18.7 Tiling
The cut() function computes groupings for the values of the input array and is often used to transform continuous
variables to discrete or categorical variables:
In [80]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
Categories (3, interval[float64]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
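The categories shown above are consistent with an equal-width cut into three bins; a sketch of that call, using the ages array defined above:
pd.cut(ages, bins=3)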
If the bins keyword is an integer, then equal-width bins are formed. Alternatively we can specify custom bin-edges:
In [83]: c
Out[83]:
To convert a categorical variable into a "dummy" or "indicator" DataFrame, for example a column in a DataFrame
(a Series) which has k distinct values, you can derive a DataFrame containing k columns of 1s and 0s using
get_dummies():
In [85]: pd.get_dummies(df['key'])
Out[85]:
a b c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
Sometimes it’s useful to prefix the column names, for example when merging the result with the original DataFrame:
In [87]: dummies
Out[87]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [88]: df[['data1']].join(dummies)
This function is often used along with discretization functions like cut:
In [90]: values
Out[90]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])
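A sketch combining the two (the values and bin edges here are illustrative, so the resulting column labels will differ from the array above):
import numpy as np
import pandas as pd

values = np.random.randn(10)
bins = [0, 0.2, 0.4, 0.6, 0.8, 1]

# one indicator column per bin; values outside the bins get all zeros
pd.get_dummies(pd.cut(values, bins))
get_dummies() also accepts a DataFrame, as the next example shows; by default every object or categorical column is encoded.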
In [94]: pd.get_dummies(df)
Out[94]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
All non-object columns are included untouched in the output. You can control the columns that are encoded with the
columns keyword.
In [95]: pd.get_dummies(df, columns=['A'])
Out[95]:
B C A_a A_b
0 c 1 1 0
1 c 2 0 1
2 b 3 1 0
Notice that the B column is still included in the output; it just hasn't been encoded. You can drop B before calling
get_dummies if you don't want to include it in the output.
As with the Series version, you can pass values for the prefix and prefix_sep. By default the column name
is used as the prefix, and ‘_’ as the prefix separator. You can specify prefix and prefix_sep in 3 ways:
• string: Use the same value for prefix or prefix_sep for each column to be encoded.
• list: Must be the same length as the number of columns being encoded.
• dict: Mapping column name to prefix.
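Sketches of the three forms, matching the simple, from_list and from_dict results shown below (df is assumed to have the object columns A and B and the numeric column C):
simple = pd.get_dummies(df, prefix='new_prefix')
from_list = pd.get_dummies(df, prefix=['from_A', 'from_B'])
from_dict = pd.get_dummies(df, prefix={'B': 'from_B', 'A': 'from_A'})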
In [97]: simple
Out[97]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [99]: from_list
Out[99]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [101]: from_dict
Out[101]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [103]: pd.get_dummies(s)
Out[103]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
get_dummies() also accepts drop_first=True, which keeps only k - 1 of the k levels (useful to avoid collinearity):
In [104]: pd.get_dummies(s, drop_first=True)
Out[104]:
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0
When a column contains only one level, it will be omitted in the result.
In [105]: df = pd.DataFrame({'A':list('aaaaa'),'B':list('ababc')})
In [106]: pd.get_dummies(df)
Out[106]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
In [107]: pd.get_dummies(df, drop_first=True)
Out[107]:
B_b B_c
0 0 0
1 1 0
2 0 0
3 1 0
4 0 1
By default new columns will have np.uint8 dtype. To choose another dtype, use the dtype argument:
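A minimal sketch (the frame here is illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': [1.0, 2.0, 3.0]})

# encode with boolean dummy columns instead of np.uint8
pd.get_dummies(df, dtype=bool).dtypes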
In [111]: x
Out[111]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object
In [113]: labels
Out[113]: array([ 0, 0, -1, 1, 2, 3])
In [114]: uniques
Note that factorize is similar to numpy.unique, but differs in its handling of NaN:
Note: The following numpy.unique will fail under Python 3 with a TypeError because of an ordering bug. See
also here.
Note: If you just want to handle one column as a categorical variable (like R’s factor), you can use df["cat_col"]
= pd.Categorical(df["col"]) or df["cat_col"] = df["col"].astype("category"). For
full docs on Categorical, see the Categorical introduction and the API documentation.
CHAPTER NINETEEN: TIME SERIES / DATE FUNCTIONALITY
pandas has proven very successful as a tool for working with time series data, especially in the financial data anal-
ysis space. Using the NumPy datetime64 and timedelta64 dtypes, we have consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created a tremendous amount of new
functionality for manipulating time series data.
In working with time series data, we will frequently seek to:
• generate sequences of fixed-frequency dates and time spans
• conform or convert time series to a particular frequency
• compute “relative” dates based on various non-standard time increments (e.g. 5 business days before the last
business day of the year), or “roll” dates forward or backward
pandas provides a relatively compact and self-contained set of tools for performing the above tasks.
Create a range of dates:
# 72 hours starting with midnight Jan 1st, 2011
In [1]: rng = pd.date_range('1/1/2011', periods=72, freq='H')
In [2]: rng[:5]
Out[2]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 01:00:00',
'2011-01-01 02:00:00', '2011-01-01 03:00:00',
'2011-01-01 04:00:00'],
dtype='datetime64[ns]', freq='H')
In [4]: ts.head()
Out[4]:
2011-01-01 00:00:00 0.469112
2011-01-01 01:00:00 -0.282863
2011-01-01 02:00:00 -1.509059
2011-01-01 03:00:00 -1.135632
2011-01-01 04:00:00 1.212112
Freq: H, dtype: float64
# Daily means
In [7]: ts.resample('D').mean()
Out[7]:
2011-01-01 -0.319569
2011-01-02 -0.337703
2011-01-03 0.117258
Freq: D, dtype: float64
19.1 Overview
The following table shows the type of time-related classes pandas can handle and how to create them.
Timestamped data is the most basic type of time series data that associates values with points in time. For pandas
objects it means using the points in time.
In [9]: pd.Timestamp('2012-05-01')
Out[9]: Timestamp('2012-05-01 00:00:00')
In [10]: pd.Timestamp(2012, 5, 1)
Out[10]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change variables with a time span instead. The span
represented by Period can be specified explicitly, or inferred from datetime string format.
For example:
In [11]: pd.Period('2011-01')
Out[11]: Period('2011-01', 'M')
Timestamp and Period can serve as an index. Lists of Timestamp and Period are automatically coerced to
DatetimeIndex and PeriodIndex respectively.
In [15]: type(ts.index)
Out[15]: pandas.core.indexes.datetimes.DatetimeIndex
In [16]: ts.index
Out[16]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [17]: ts
Out[17]:
2012-05-01 -0.410001
2012-05-02 -0.078638
2012-05-03 0.545952
dtype: float64
In [20]: type(ts.index)
Out[20]: pandas.core.indexes.period.PeriodIndex
In [21]: ts.index
Out[21]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]', freq='M')
In [22]: ts
Out[22]:
2012-01 -1.219217
2012-02 -1.226825
2012-03 0.769804
Freq: M, dtype: float64
pandas allows you to capture both representations and convert between them. Under the hood, pandas represents
timestamps using instances of Timestamp and sequences of timestamps using instances of DatetimeIndex. For
regular time spans, pandas uses Period objects for scalar values and PeriodIndex for sequences of spans. Better
support for irregular intervals with arbitrary start and end points is forthcoming in future releases.
To convert a Series or list-like object of date-like objects e.g. strings, epochs, or a mixture, you can use the
to_datetime function. When passed a Series, this returns a Series (with the same index), while a list-like is
converted to a DatetimeIndex:
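A minimal sketch (the dates here are illustrative):
import pandas as pd

# a Series in, a Series out (same index)
pd.to_datetime(pd.Series(['Jul 31, 2009', '2010-01-10', None]))

# a list in, a DatetimeIndex out
pd.to_datetime(['2005/11/23', '2010.12.31'])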
If you use dates which start with the day first (i.e. European style), you can pass the dayfirst flag:
Warning: You see in the above example that dayfirst isn’t strict, so if a date can’t be parsed with the day
being first it will be parsed as if dayfirst were False.
If you pass a single string to to_datetime, it returns a single Timestamp. Timestamp can also accept string
input, but it doesn’t accept string parsing options like dayfirst or format, so use to_datetime if these are
required.
In [27]: pd.to_datetime('2010/11/12')
Out[27]: Timestamp('2010-11-12 00:00:00')
In [28]: pd.Timestamp('2010/11/12')
Out[28]: Timestamp('2010-11-12 00:00:00')
In addition to the required datetime string, a format argument can be passed to ensure specific parsing. This could
also potentially speed up the conversion considerably.
For more information on the choices available when specifying the format option, see the Python datetime documentation.
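A minimal sketch of the format argument (the strings here are illustrative):
pd.to_datetime('2010/11/12', format='%Y/%m/%d')

pd.to_datetime('12-11-2010 00:00', format='%d-%m-%Y %H:%M')
The pd.to_datetime(df) call shown next assembles a datetime from a DataFrame of component columns, which is described just below.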
In [32]: pd.to_datetime(df)
Out[32]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
• required: year, month, day
• optional: hour, minute, second, millisecond, microsecond, nanosecond
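A sketch of a frame like the one used in the example above, together with the subset-column variant:
import pandas as pd

df = pd.DataFrame({'year': [2015, 2016],
                   'month': [2, 3],
                   'day': [4, 5],
                   'hour': [2, 3]})

pd.to_datetime(df)

# pass only the columns that are needed
pd.to_datetime(df[['year', 'month', 'day']])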
pandas supports converting integer or float epoch times to Timestamp and DatetimeIndex. The default unit is
nanoseconds, since that is how Timestamp objects are stored internally. However, epochs are often stored in another
unit which can be specified. These are computed from the starting point specified by the origin parameter.
Warning: Conversion of float epoch times can lead to inaccurate and unexpected results. Python floats have
about 15 decimal digits of precision. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use a fixed-width type (e.g. an int64).
In [38]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit='s')
Out[38]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
See also:
Using the origin Parameter
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [40]: stamps = pd.date_range('2012-10-08 18:15:05', periods=4, freq='D')
In [41]: stamps
Out[41]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the “unit” (1 second).
In [42]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
Out[42]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00. Commonly called ‘unix
epoch’ or POSIX time.
In [44]: pd.to_datetime([1, 2, 3], unit='D')
Out[44]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
To generate an index with timestamps, you can use either the DatetimeIndex or Index constructor and pass in a
list of datetime objects:
In [45]: dates = [datetime(2012, 5, 1), datetime(2012, 5, 2), datetime(2012, 5, 3)]
In [47]: index
Out[47]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [49]: index
Out[49]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long index with a large number of timestamps.
If we need timestamps on a regular frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a calendar day while the default for
bdate_range is a business day:
In [50]: start = datetime(2011, 1, 1)
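A sketch of the calls whose results are shown below, following the same datetime import convention as the line above; the exact end date used originally is an assumption:
end = datetime(2012, 1, 1)

index = pd.date_range(start, end)    # calendar-day frequency
index = pd.bdate_range(start, end)   # business-day frequency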
In [53]: index
Out[53]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
In [55]: index
Out[55]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a variety of frequency aliases:
date_range and bdate_range make it easy to generate a range of dates using various combinations of parame-
ters like start, end, periods, and freq. The start and end dates are strictly inclusive, so dates outside of those
specified will not be generated:
Warning: This functionality was originally exclusive to cdate_range, which is deprecated as of version 0.21.0
in favor of bdate_range. Note that cdate_range only utilizes the weekmask and holidays parameters
when custom business day, ‘C’, is passed as the frequency string. Support has been expanded with bdate_range
to work with any custom frequency string.
See also:
Custom Business Days
Since pandas represents timestamps in nanosecond resolution, the time span that can be represented using a 64-bit
integer is limited to approximately 584 years:
In [68]: pd.Timestamp.min
Out[68]: Timestamp('1677-09-21 00:12:43.145225')
In [69]: pd.Timestamp.max
Out[69]: Timestamp('2262-04-11 23:47:16.854775807')
See also:
Representing Out-of-Bounds Spans
19.6 Indexing
One of the main uses for DatetimeIndex is as an index for pandas objects. The DatetimeIndex class contains
many time series related optimizations:
• A large range of dates for various offsets are pre-computed and cached under the hood in order to make gener-
ating subsequent date ranges very fast (just have to grab a slice).
• Fast shifting using the shift and tshift method on pandas objects.
• Unioning of overlapping DatetimeIndex objects with the same frequency is very fast (important for fast
data alignment).
• Quick access to date fields via properties such as year, month, etc.
• Regularization functions like snap and very fast asof logic.
DatetimeIndex objects have all the basic functionality of regular Index objects, and a smorgasbord of advanced
time series specific methods for easy frequency processing.
See also:
Reindexing methods
Note: While pandas does not force you to have a sorted date index, some of these methods may have unexpected or
incorrect behavior if the dates are unsorted.
DatetimeIndex can be used like a regular index and offers all of its intelligent functionality like selection, slicing,
etc.
In [70]: rng = pd.date_range(start, end, freq='BM')
In [72]: ts.index
Out[72]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [73]: ts[:5].index
In [74]: ts[::2].index
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [75]: ts['1/31/2011']
Out[75]: -1.2812473076599531
In [77]: ts['10/31/2011':'12/31/2011']
Out[77]:
2011-10-31 0.149748
2011-11-30 -0.732339
2011-12-30 0.687738
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in the year or year and month as strings:
In [78]: ts['2011']
Out[78]:
2011-01-31 -1.281247
2011-02-28 -0.727707
2011-03-31 -0.121306
2011-04-29 -0.097883
2011-05-31 0.695775
2011-06-30 0.341734
2011-07-29 0.959726
2011-08-31 -1.110336
2011-09-30 -0.619976
2011-10-31 0.149748
2011-11-30 -0.732339
2011-12-30 0.687738
Freq: BM, dtype: float64
In [79]: ts['2011-6']
Out[79]:
2011-06-30 0.341734
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the partial string selection
is a form of label slicing, the endpoints will be included. This would include matching times on an included date:
In [80]: dft = pd.DataFrame(randn(100000,1),
....: columns=['A'],
....: index=pd.date_range('20130101',periods=100000,freq='T'))
....:
In [81]: dft
Out[81]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-03-11 10:33:00 -0.293083
2013-03-11 10:34:00 -0.059881
2013-03-11 10:35:00 1.252450
2013-03-11 10:36:00 0.046611
2013-03-11 10:37:00 0.059478
2013-03-11 10:38:00 -0.286539
2013-03-11 10:39:00 0.841669
In [82]: dft['2013']
Out[82]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-03-11 10:33:00 -0.293083
2013-03-11 10:34:00 -0.059881
2013-03-11 10:35:00 1.252450
2013-03-11 10:36:00 0.046611
2013-03-11 10:37:00 0.059478
2013-03-11 10:38:00 -0.286539
2013-03-11 10:39:00 0.841669
This starts on the very first time in the month, and includes the last date and time for the month:
In [83]: dft['2013-1':'2013-2']
Out[83]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
This specifies a stop time that includes all of the times on the last day:
In [84]: dft['2013-1':'2013-2-28']
Out[84]:
A
2013-01-01 00:00:00 0.176444
2013-01-01 00:01:00 0.403310
2013-01-01 00:02:00 -0.154951
2013-01-01 00:03:00 0.301624
2013-01-01 00:04:00 -2.179861
2013-01-01 00:05:00 -1.369849
2013-01-01 00:06:00 -0.954208
... ...
2013-02-28 23:53:00 0.103114
2013-02-28 23:54:00 -1.303422
2013-02-28 23:55:00 0.451943
2013-02-28 23:56:00 0.220534
2013-02-28 23:57:00 -1.624220
2013-02-28 23:58:00 0.093915
2013-02-28 23:59:00 -1.087454
This specifies an exact stop time (and is not the same as the above):
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [87]: dft2 = pd.DataFrame(np.random.randn(20, 1),
....: columns=['A'],
....: index=pd.MultiIndex.from_product([pd.date_range('20130101',
....: periods=10,
....: freq='12H'),
....: ['a', 'b']]))
In [88]: dft2
Out[88]:
A
2013-01-01 00:00:00 a -0.659574
b 1.494522
2013-01-01 12:00:00 a -0.778425
b -0.253355
2013-01-02 00:00:00 a -2.816159
b -1.210929
2013-01-02 12:00:00 a 0.144669
... ...
2013-01-04 00:00:00 b -1.624463
2013-01-04 12:00:00 a 0.056912
b 0.149867
2013-01-05 00:00:00 a -1.256173
b 2.324544
2013-01-05 12:00:00 a -1.067396
b -0.660996
In [89]: dft2.loc['2013-01-05']
Out[89]:
A
2013-01-05 00:00:00 a -1.256173
b 2.324544
2013-01-05 12:00:00 a -1.067396
b -0.660996
In [94]: series_minute.index.resolution
Out[94]: 'minute'
A timestamp string with minute resolution (or more precise) gives a scalar instead, i.e. it is not cast to a slice.
In [99]: series_second.index.resolution
Out[99]: 'second'
If the timestamp string is treated as a slice, it can be used to index DataFrame with [] as well.
Warning: However, if the string is treated as an exact match, the selection in DataFrame's [] will be column-
wise and not row-wise, see Indexing Basics. For example dft_minute['2011-12-31 23:59'] will raise
KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such
name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [103]: dft_minute.loc['2011-12-31 23:59']
Out[103]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [105]: series_monthly.index.resolution
Out[105]: 'day'
As discussed in previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the
period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with
Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics
of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were
not explicitly specified (they are 0).
With no defaults.
A truncate() convenience function is provided that is similar to slicing. Note that truncate assumes a 0 value
for any unspecified date component in a DatetimeIndex in contrast to slicing which returns any partially matching
dates:
In [112]: ts2['2011-11':'2011-12']
Out[112]:
2011-11-06 -0.773743
2011-11-13 0.247216
2011-11-20 0.591308
2011-11-27 2.228500
2011-12-04 0.838769
2011-12-11 0.658538
2011-12-18 0.567353
2011-12-25 -1.076735
Freq: W-SUN, dtype: float64
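A sketch of the equivalent truncate call on the same weekly series:
ts2.truncate(before='2011-11', after='2011-12')
Here '2011-12' is interpreted as 2011-12-01 00:00:00, so dates later in December are dropped, unlike the slice above which keeps every date that partially matches '2011-12'.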
Even complicated fancy indexing that breaks the DatetimeIndex frequency regularity will result in a
DatetimeIndex, although frequency is lost:
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a
DatetimeIndex.
Property Description
year The year of the datetime
month The month of the datetime
day The days of the datetime
hour The hour of the datetime
minute The minutes of the datetime
second The seconds of the datetime
microsecond The microseconds of the datetime
nanosecond The nanoseconds of the datetime
date Returns datetime.date (does not contain timezone information)
time Returns datetime.time (does not contain timezone information)
dayofyear The ordinal day of year
weekofyear The week ordinal of the year
week The week ordinal of the year
dayofweek The number of the day of the week with Monday=0, Sunday=6
weekday The number of the day of the week with Monday=0, Sunday=6
weekday_name The name of the day in a week (ex: Friday)
quarter Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month The number of days in the month of the datetime
is_month_start Logical indicating if first day of month (defined by frequency)
is_month_end Logical indicating if last day of month (defined by frequency)
is_quarter_start Logical indicating if first day of quarter (defined by frequency)
is_quarter_end Logical indicating if last day of quarter (defined by frequency)
is_year_start Logical indicating if first day of year (defined by frequency)
is_year_end Logical indicating if last day of year (defined by frequency)
is_leap_year Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can access these properties via the .dt
accessor, as detailed in the section on .dt accessors.
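A minimal sketch of the accessor (the series here is illustrative):
import pandas as pd

s = pd.Series(pd.date_range('2013-01-01 09:10:12', periods=4))

s.dt.hour
s.dt.day
s.dt.dayofweek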
In the preceding examples, we created DatetimeIndex objects at various frequencies by passing in frequency
strings like ‘M’, ‘W’, and ‘BM’ to the freq keyword. Under the hood, these frequency strings are being translated
into an instance of DateOffset, which represents a regular frequency increment. Specific offset logic like “month”,
“business day”, or “one hour” is represented in its various subclasses.
The basic DateOffset takes the same arguments as dateutil.relativedelta, which works as follows:
class BDay(DateOffset):
"""DateOffset increments between business days"""
def apply(self, other):
...
In [118]: d - 5 * BDay()
Out[118]: Timestamp('2008-08-11 09:00:00')
In [119]: d + BMonthEnd()
Out[119]: Timestamp('2008-08-29 09:00:00')
The rollforward and rollback methods do exactly what you would expect:
In [120]: d
Out[120]: datetime.datetime(2008, 8, 18, 9, 0)
In [122]: offset.rollforward(d)
Out[122]: Timestamp('2008-08-29 09:00:00')
In [123]: offset.rollback(d)
Out[123]: Timestamp('2008-07-31 09:00:00')
It’s definitely worth exploring the pandas.tseries.offsets module and the various docstrings for the classes.
These operations (apply, rollforward and rollback) preserve time (hour, minute, etc) information by de-
fault. To reset time, use normalize=True when creating the offset instance. If normalize=True, the result is
normalized after the function is applied.
In [124]: day = Day()
Some of the offsets can be “parameterized” when created to result in different behaviors. For example, the Week
offset for generating weekly data accepts a weekday parameter which results in the generated dates always lying on
a particular day of the week:
In [133]: d
Out[133]: datetime.datetime(2008, 8, 18, 9, 0)
In [134]: d + Week()
Out[134]: Timestamp('2008-08-25 09:00:00')
In [135]: d + Week(weekday=4)
Out[135]: Timestamp('2008-08-22 09:00:00')
In [136]: (d + Week(weekday=4)).weekday()
Out[136]: 4
In [137]: d - Week()
Out[137]: Timestamp('2008-08-11 09:00:00')
In [139]: d - Week(normalize=True)
Out[139]: Timestamp('2008-08-11 00:00:00')
In [141]: d + YearEnd(month=6)
Out[141]: Timestamp('2009-06-30 09:00:00')
Offsets can be used with either a Series or DatetimeIndex to apply the offset to each element.
In [142]: rng = pd.date_range('2012-01-01', '2012-01-03')
In [143]: s = pd.Series(rng)
In [144]: rng
Out[144]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [146]: s + DateOffset(months=2)
Out[146]:
0 2012-03-01
1 2012-03-02
In [147]: s - DateOffset(months=2)
Out[147]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour, Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the Timedelta section for more examples.
In [148]: s - Day(2)
Out[148]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [150]: td
Out[150]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [151]: td + Minute(15)
Out[151]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a vectorized implementation. They can still be used but
may calculate significantly slower and will show a PerformanceWarning.
In [152]: rng + BQuarterEnd()
Out[152]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq='D')
The CDay or CustomBusinessDay class provides a parametric BusinessDay class which can be used to create
customized business day calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.
In [153]: from pandas.tseries.offsets import CustomBusinessDay
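A sketch of the Egyptian business day used in the next examples, built with the CustomBusinessDay imported above (the holiday list here is illustrative; the weekmask marks Sunday through Thursday as business days):
from datetime import datetime

weekmask_egypt = 'Sun Mon Tue Wed Thu'
holidays = ['2012-05-01', datetime(2013, 5, 1)]
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)

dt = datetime(2013, 4, 30)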
In [158]: dt + 2 * bday_egypt
Out[158]: Timestamp('2013-05-05 00:00:00')
Out[160]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the holiday calendar section for more information.
In [161]: from pandas.tseries.holiday import USFederalHolidayCalendar
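A sketch of a custom business day built from the imported calendar:
bday_us = CustomBusinessDay(calendar=USFederalHolidayCalendar())

# Friday before MLK Day
dt = pd.Timestamp('2014-01-17')

# the following Monday is a holiday, so this lands on the Tuesday after
dt + bday_us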
Monthly offsets that respect a certain holiday calendar can be defined in the usual way.
In [165]: from pandas.tseries.offsets import CustomBusinessMonthBegin
In [168]: dt + bmth_us
Out[168]: Timestamp('2014-01-02 00:00:00')
Note: The frequency string ‘C’ is used to indicate that a CustomBusinessDay DateOffset is used, it is important to
note that since CustomBusinessDay is a parameterised type, instances of CustomBusinessDay may differ and this is
not detectable from the ‘C’ frequency string. The user therefore needs to ensure that the ‘C’ frequency string is used
consistently within the user’s application.
The BusinessHour class provides a business hour representation on BusinessDay, allowing you to use specific
start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours. Adding BusinessHour will increment
Timestamp by hourly frequency. If target Timestamp is out of business hours, move to the next business hour
then increment it. If the result exceeds the business hours end, the remaining hours are added to the next business day.
In [170]: bh = BusinessHour()
In [171]: bh
Out[171]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [172]: pd.Timestamp('2014-08-01 10:00').weekday()
Out[172]: 4
# If the result is on the end time, move to the next business day
In [175]: pd.Timestamp('2014-08-01 16:00') + bh
Out[175]: Timestamp('2014-08-04 09:00:00')
You can also specify start and end time by keywords. The argument must be a str with an hour:minute
representation or a datetime.time instance. Specifying seconds, microseconds and nanoseconds as business hour
results in ValueError.
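A sketch of parameterizing the hours (the times here are illustrative and match the representation shown next):
import datetime
from pandas.tseries.offsets import BusinessHour

bh = BusinessHour(start='11:00', end=datetime.time(20, 0))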
In [180]: bh
Out[180]: <BusinessHour: BH=11:00-20:00>
Passing a start time later than end represents a midnight business hour. In this case, business hours exceed midnight
and overlap to the next day. Valid business hours are distinguished by whether they started from a valid BusinessDay.
In [185]: bh
Out[185]: <BusinessHour: BH=17:00-09:00>
Applying BusinessHour.rollforward and rollback to out of business hours results in the next business
hour start or previous day’s end. Different from other offsets, BusinessHour.rollforward may output different
results from apply by definition.
This is because one day's business hour end is equal to the next day's business hour start. For example, under the
default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and 2014-08-04
09:00.
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary holidays, you can use
CustomBusinessHour offset, as explained in the following subsection.
In [198]: dt + bhour_us
Out[198]: Timestamp('2014-01-17 16:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
# Monday is skipped because it's a holiday, business hour starts from 10:00
A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset
aliases.
Alias Description
B business day frequency
C custom business day frequency
D calendar day frequency
W weekly frequency
M month end frequency
SM semi-month end frequency (15th and end of month)
BM business month end frequency
CBM custom business month end frequency
MS month start frequency
SMS semi-month start frequency (1st and 15th)
BMS business month start frequency
CBMS custom business month start frequency
Q quarter end frequency
BQ business quarter end frequency
QS quarter start frequency
BQS business quarter start frequency
A, Y year end frequency
BA, BY business year end frequency
AS, YS year start frequency
BAS, BYS business year start frequency
BH business hour frequency
H hourly frequency
T, min minutely frequency
S secondly frequency
L, ms milliseconds
U, us microseconds
N nanoseconds
As we have seen previously, the alias and the offset instance are fungible in most functions:
In [202]: pd.date_range(start, periods=5, freq='B')
Out[202]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
Alias Description
W-SUN weekly frequency (Sundays). Same as ‘W’
W-MON weekly frequency (Mondays)
W-TUE weekly frequency (Tuesdays)
W-WED weekly frequency (Wednesdays)
W-THU weekly frequency (Thursdays)
W-FRI weekly frequency (Fridays)
W-SAT weekly frequency (Saturdays)
(B)Q(S)-DEC quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN quarterly frequency, year ends in January
(B)Q(S)-FEB quarterly frequency, year ends in February
(B)Q(S)-MAR quarterly frequency, year ends in March
(B)Q(S)-APR quarterly frequency, year ends in April
(B)Q(S)-MAY quarterly frequency, year ends in May
(B)Q(S)-JUN quarterly frequency, year ends in June
These can be used as arguments to date_range, bdate_range, constructors for DatetimeIndex, as well as
various other timeseries-related functions in pandas.
For those offsets that are anchored to the start or end of a specific frequency (MonthEnd, MonthBegin, WeekEnd,
etc), the following rules apply to rolling forward and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous) anchor point, and then
moved |n|-1 additional steps forward or backward.
In [206]: pd.Timestamp('2014-01-02') + MonthBegin(n=1)
Out[206]: Timestamp('2014-02-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards or backwards.
For the case when n=0, the date is not moved if on an anchor point, otherwise it is rolled forward to the next anchor
point.
Holidays and calendars provide a simple way to define holiday rules to be used with CustomBusinessDay or
in other analysis that requires a predefined set of holidays. The AbstractHolidayCalendar class provides all
the necessary methods to return a list of holidays and only rules need to be defined in a specific holiday calendar
class. Furthermore, the start_date and end_date class attributes determine over what date range holidays are
generated. These should be overwritten on the AbstractHolidayCalendar class to have the range apply to all
calendar subclasses. USFederalHolidayCalendar is the only calendar that exists and primarily serves as an
example for developing other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an observance rule determines when that
holiday is observed if it falls on a weekend or some other non-observed day. Defined observance rules are:
Rule Description
nearest_workday move Saturday to Friday and Sunday to Monday
sunday_to_monday move Sunday to following Monday
next_monday_or_tuesday move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday move Saturday and Sunday to previous Friday
next_monday move Saturday and Sunday to following Monday
Using this calendar, creating an index or doing offset arithmetic skips weekends and holidays (i.e., Memorial Day/July
4th). For example, the below defines a custom business day offset using the ExampleCalendar. Like any other
offset, it can be used to create a DatetimeIndex or added to datetime or Timestamp objects.
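A sketch of a calendar like the ExampleCalendar referred to above; the three rules here are an assumption consistent with the holidays and rules printed further below:
import pandas as pd
from pandas.tseries.holiday import (AbstractHolidayCalendar, Holiday,
                                    USMemorialDay, nearest_workday, MO)

class ExampleCalendar(AbstractHolidayCalendar):
    rules = [
        USMemorialDay,
        Holiday('July 4th', month=7, day=4, observance=nearest_workday),
        Holiday('Columbus Day', month=10, day=1,
                offset=pd.DateOffset(weekday=MO(2))),
    ]

cal = ExampleCalendar()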
In [226]: from pandas.tseries.offsets import CDay
Ranges are defined by the start_date and end_date class attributes of AbstractHolidayCalendar. The
defaults are shown below.
In [233]: AbstractHolidayCalendar.start_date
Out[233]: Timestamp('1970-01-01 00:00:00')
In [234]: AbstractHolidayCalendar.end_date
Out[234]: Timestamp('2030-12-31 00:00:00')
In [237]: cal.holidays(datetime(2012, 1, 1), datetime(2012, 12, 31))
Out[237]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function which returns a holiday class instance.
Any imported calendar class will automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars or calendars with additional rules.
In [240]: cal.rules
Out[240]:
[Holiday: MemorialDay (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f20d5b40f28>),
One may want to shift or lag the values in a time series back and forward in time. The method for this is shift(),
which is available on all of the pandas objects.
In [243]: ts = ts[:5]
In [244]: ts.shift(1)
Out[244]:
2011-01-31 NaN
2011-02-28 -1.281247
2011-03-31 -0.727707
2011-04-29 -0.121306
2011-05-31 -0.097883
Freq: BM, dtype: float64
The shift method accepts a freq argument which can be a DateOffset class, another timedelta-like
object, or an offset alias:
2011-06-30 -1.281247
2011-07-29 -0.727707
2011-08-31 -0.121306
2011-09-30 -0.097883
2011-10-31 0.695775
Freq: BM, dtype: float64
Rather than changing the alignment of the data and the index, DataFrame and Series objects also have a
tshift() convenience method that changes all the dates in the index by a specified number of offsets:
Note that with tshift, the leading entry is no longer NaN because the data is not being realigned.
The primary function for changing frequencies is the asfreq() method. For a DatetimeIndex, this is basically
just a thin, but convenient wrapper around reindex() which generates a date_range and calls reindex.
In [250]: ts
Out[250]:
2010-01-01 0.155932
2010-01-06 1.486218
2010-01-11 -2.148675
Freq: 3B, dtype: float64
In [251]: ts.asfreq(BDay())
Out[251]:
2010-01-01 0.155932
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 1.486218
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -2.148675
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation method for any gaps that may appear after
the frequency conversion.
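A sketch of the same conversion with forward filling ('backfill' is the other built-in choice):
from pandas.tseries.offsets import BDay

ts.asfreq(BDay(), method='pad')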
Related to asfreq and reindex is fillna(), which is documented in the missing data section.
DatetimeIndex can be converted to an array of Python native datetime.datetime objects using the
to_pydatetime method.
19.10 Resampling
Warning: The interface to .resample has changed in 0.18.0 to be more groupby-like and hence more flexible.
See the whatsnew docs for a comparison with prior versions.
pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency
conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method on each of its groups. See some cookbook
examples for some advanced strategies.
Starting in version 0.18.1, the resample() function can be used directly from DataFrameGroupBy objects, see
the groupby docs.
Note: .resample() is similar to using a rolling() operation with a time-based offset, see a discussion here.
19.10.1 Basics
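A sketch of a secondly series like the one resampled below (the values are random integers, so the exact sums and means differ from the outputs shown):
import numpy as np
import pandas as pd

rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)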
In [255]: ts.resample('5Min').sum()
Out[255]:
2012-01-01 25653
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many different parameters to control the frequency
conversion and resampling operation.
Any function available via dispatching is available as a method of the returned object, including sum, mean, std,
sem, max, min, median, first, last, ohlc:
In [256]: ts.resample('5Min').mean()
Out[256]:
2012-01-01 256.53
Freq: 5T, dtype: float64
In [257]: ts.resample('5Min').ohlc()
Out[257]:
open high low close
2012-01-01 296 496 6 449
In [258]: ts.resample('5Min').max()
Out[258]:
2012-01-01 496
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which end of the interval is closed:
2012-01-01 256.53
Freq: 5T, dtype: float64
Parameters like label and loffset are used to manipulate the resulting labels. label specifies whether the result
is labeled with the beginning or the end of the interval. loffset performs a time adjustment on the output labels.
Note: The default values for label and closed are 'left' for all frequency offsets except for 'M', 'A', 'Q', 'BM',
'BA', 'BQ', and 'W', which all have a default of 'right'.
2011-12-31 13
2012-01-15 29
2012-01-31 44
2012-02-15 58
2012-02-29 73
2012-03-15 89
2012-03-31 90
Freq: SM-15, dtype: int64
2012-01-15 14.0
2012-01-31 30.0
2012-02-15 45.0
2012-02-29 59.0
2012-03-15 74.0
2012-03-31 90.0
2012-04-15 NaN
Freq: SM-15, dtype: float64
The axis parameter can be set to 0 or 1 and allows you to resample the specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index to/from timestamp and time span represen-
tations. By default resample retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data (detail below). It specifies how low frequency
periods are converted to higher frequency periods.
19.10.2 Upsampling
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are
created:
# from secondly to every 250 milliseconds
In [269]: ts[:2].resample('250L').asfreq()
Out[269]:
2012-01-01 00:00:00.000 296.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 199.0
Freq: 250L, dtype: float64
In [270]: ts[:2].resample('250L').ffill()
Out[270]:
In [271]: ts[:2].resample('250L').ffill(limit=2)
Out[271]:
19.10.3 Sparse Resampling
Sparse time series are those with far fewer points than the span of time you are looking to
resample. Naively upsampling a sparse series can potentially generate lots of intermediate values. When you don't
want to use a method to fill these values, e.g. fill_method is None, then intermediate values will be filled with
NaN.
Since resample is a time-based groupby, the following is a method to efficiently resample only the groups that are
not all NaN.
In [272]: rng = pd.date_range('2014-1-1', periods=100, freq='D') + pd.Timedelta('1s')
We can instead only resample those groups where we have points as follows:
In [275]: from functools import partial
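The remainder of this example appears to have been lost in this extract. A sketch of the approach, grouping by timestamps rounded down to the target frequency instead of resampling the full range (the helper name round is an assumption):
from functools import partial
from pandas.tseries.frequencies import to_offset

rng = pd.date_range('2014-1-1', periods=100, freq='D') + pd.Timedelta('1s')
ts = pd.Series(range(100), index=rng)

def round(t, freq):
    # snap a timestamp down to the nearest multiple of freq
    freq = to_offset(freq)
    return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)

ts.groupby(partial(round, freq='3T')).sum()   # only the groups that actually contain data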
19.10.4 Aggregation
Similar to the aggregating API, groupby API, and the window functions API, a Resampler can be selectively resam-
pled.
Resampling a DataFrame, the default will be to act on all columns with the same function.
In [280]: r = df.resample('3T')
In [281]: r.mean()
Out[281]:
A B C
2012-01-01 00:00:00 -0.038580 -0.085117 -0.024750
2012-01-01 00:03:00 0.052387 -0.061477 0.029548
2012-01-01 00:06:00 0.121377 -0.010630 -0.043691
2012-01-01 00:09:00 -0.106814 -0.053819 0.097222
2012-01-01 00:12:00 0.032560 0.080543 0.167380
2012-01-01 00:15:00 0.060486 -0.057602 -0.106213
In [282]: r['A'].mean()
Out[282]:
2012-01-01 00:00:00 -0.038580
2012-01-01 00:03:00 0.052387
2012-01-01 00:06:00 0.121377
2012-01-01 00:09:00 -0.106814
2012-01-01 00:12:00 0.032560
2012-01-01 00:15:00 0.060486
In [283]: r[['A','B']].mean()
Out[283]:
A B
2012-01-01 00:00:00 -0.038580 -0.085117
2012-01-01 00:03:00 0.052387 -0.061477
2012-01-01 00:06:00 0.121377 -0.010630
2012-01-01 00:09:00 -0.106814 -0.053819
2012-01-01 00:12:00 0.032560 0.080543
2012-01-01 00:15:00 0.060486 -0.057602
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
On a resampled DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
The function names can also be strings. In order for a string to be valid it must be implemented on the resampled
object:
Furthermore, you can also specify multiple aggregation functions for each column separately.
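A sketch of these forms, using the resampler r and the columns A, B shown above (not from the original session):
r['A'].agg([np.sum, np.mean, np.std])                     # list of functions -> one output column per function
r.agg([np.sum, np.mean])                                  # applied to every column -> hierarchical columns
r.agg({'A': np.sum, 'B': lambda x: np.std(x, ddof=1)})    # a different aggregation per column
r.agg({'A': 'sum', 'B': 'std'})                           # string names of methods implemented on the resampler
r.agg({'A': ['sum', 'std'], 'B': ['mean', 'std']})        # several aggregations per column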
If a DataFrame does not have a datetimelike index, but instead you want to resample based on a datetimelike column
in the frame, it can be passed to the on keyword.
.....: names=['v','d']))
.....:
In [290]: df
Out[290]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike level of MultiIndex, its name or location can be passed
to the level keyword.
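A sketch of both keywords, using the frame df and the MultiIndex level 'd' from the example above:
df.resample('M', on='date').sum()    # resample on the 'date' column rather than the index
df.resample('M', level='d').sum()    # resample on the datetimelike MultiIndex level named 'd'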
19.11 Time Span Representation
Regular intervals of time are represented by Period objects in pandas while sequences of Period objects are
collected in a PeriodIndex, which can be created with the convenience function period_range.
19.11.1 Period
A Period represents a span of time (e.g., a day, a month, a quarter, etc). You can specify the span via freq keyword
using a frequency alias like below. Because freq represents a span of Period, it cannot be negative like “-3D”.
In [293]: pd.Period('2012', freq='A-DEC')
Out[293]: Period('2012', 'A-DEC')
Adding and subtracting integers from periods shifts the period by its own frequency. Arithmetic is not allowed between
Period with different freq (span).
In [297]: p = pd.Period('2012', freq='A-DEC')
In [298]: p + 1
Out[298]: Period('2013', 'A-DEC')
In [299]: p - 3
Out[299]: Period('2009', 'A-DEC')
In [301]: p + 2
Out[301]: Period('2012-05', '2M')
In [302]: p - 1
Out[302]: Period('2011-11', '2M')
If Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like can be added if the result can
have the same freq. Otherwise, ValueError will be raised.
In [304]: p = pd.Period('2014-07-01 09:00', freq='H')
In [305]: p + Hour(2)
Out[305]: Period('2014-07-01 11:00', 'H')
In [306]: p + timedelta(minutes=120)
Out[306]: Period('2014-07-01 11:00', 'H')
In [1]: p + Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If Period has other freqs, only the same offsets can be added. Otherwise, ValueError will be raised.
In [308]: p = pd.Period('2014-07', freq='M')
In [309]: p + MonthEnd(3)
Out[309]: Period('2014-10', 'M')
In [1]: p + MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will return the number of frequency units between
them:
In [310]: pd.Period('2012', freq='A-DEC') - pd.Period('2002', freq='A-DEC')
Out[310]: 10
Regular sequences of Period objects can be collected in a PeriodIndex, which can be constructed using the
period_range convenience function:
In [311]: prng = pd.period_range('1/1/2011', '1/1/2012', freq='M')
In [312]: prng
Passing a multiplied frequency outputs a sequence of Period objects which have a multiplied span.
If start or end are Period objects, they will be used as anchor endpoints for a PeriodIndex with frequency
matching that of the PeriodIndex constructor.
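For example (a sketch, not from the original session):
pd.period_range(start='2014-01', freq='3M', periods=4)       # each Period spans three months
pd.period_range(start=pd.Period('2017Q1', freq='Q'),
                end=pd.Period('2017Q2', freq='Q'), freq='M') # Period objects as anchor endpoints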
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas objects:
In [317]: ps
Out[317]:
2011-01 0.258318
2011-02 -2.503700
2011-03 -0.303053
2011-04 0.270509
2011-05 1.004841
2011-06 -0.129044
2011-07 -1.406335
2011-08 -1.310412
2011-09 0.769439
2011-10 -0.542325
2011-11 2.010541
2011-12 1.001558
2012-01 -0.087453
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [319]: idx
Out[319]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]', freq='H')
In [322]: idx
Out[322]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]', freq='M')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
In [325]: pi
Out[325]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]', freq='M')
In [326]: pi.dtype
Out[326]: period[M]
The period dtype can be used in .astype(...). It allows one to change the freq of a PeriodIndex like
.asfreq() and convert a DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [327]: pi.astype('period[D]')
Out[327]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]', freq='D')
# convert to DatetimeIndex
In [328]: pi.astype('datetime64[ns]')
Out[328]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [329]: dti = pd.date_range('2011-01-01', freq='M', periods=3)
In [330]: dti
Out[330]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [331]: dti.astype('period[M]')
Out[331]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]', freq='M')
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as
DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [332]: ps['2011-01']
Out[332]: 0.25831819727391592
In [334]: ps['10/31/2011':'12/31/2011']
Out[334]:
2011-10 -0.542325
2011-11 2.010541
2011-12 1.001558
Freq: M, dtype: float64
Passing a string representing a lower frequency than PeriodIndex returns partial sliced data.
In [335]: ps['2011']
Out[335]:
2011-01 0.258318
2011-02 -2.503700
2011-03 -0.303053
2011-04 0.270509
2011-05 1.004841
2011-06 -0.129044
2011-07 -1.406335
2011-08 -1.310412
2011-09 0.769439
2011-10 -0.542325
2011-11 2.010541
2011-12 1.001558
Freq: M, dtype: float64
.....:
A
2013-01-01 10:00 -0.148998
2013-01-01 10:01 2.154810
2013-01-01 10:02 -1.605646
2013-01-01 10:03 0.021024
2013-01-01 10:04 -0.623737
2013-01-01 10:05 1.451612
2013-01-01 10:06 1.062463
... ...
2013-01-01 10:53 0.273119
2013-01-01 10:54 -0.994071
2013-01-01 10:55 -1.222179
2013-01-01 10:56 -1.167118
2013-01-01 10:57 0.262822
2013-01-01 10:58 -0.283786
2013-01-01 10:59 1.190726
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from
10:00 to 11:59.
The frequency of Period and PeriodIndex can be converted via the asfreq method. Let’s start with the fiscal
year 2011, ending in December:
In [341]: p
Out[341]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can specify whether to return the starting or
ending month:
Converting to a “super-period” (e.g., annual frequency is a super-period of quarterly frequency) automatically returns
the super-period that includes the input period:
In [347]: p.asfreq('A-NOV')
Out[347]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in November, the monthly period of December
2011 is actually in the 2012 A-NOV period.
Period conversions with anchored frequencies are particularly useful for working with various quarterly data common
to economics, business, and other fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, first quarter of 2011 could start in 2010 or a few months into 2011. Via anchored
frequencies, pandas works for all quarterly frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
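For instance (a sketch of this kind of conversion, not from the original session):
p = pd.Period('2011Q1', freq='Q-DEC')
p.asfreq('D', 's')    # first day of the quarter: 2011-01-01
p.asfreq('D', 'e')    # last day of the quarter: 2011-03-31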
Timestamped data can be converted to PeriodIndex-ed data using to_period and vice-versa using
to_timestamp:
In [356]: ts
Out[356]:
2012-01-31 -0.898547
2012-02-29 -1.332247
2012-03-31 -0.741645
2012-04-30 0.094321
2012-05-31 -0.438813
Freq: M, dtype: float64
In [357]: ps = ts.to_period()
In [358]: ps
Out[358]:
2012-01 -0.898547
2012-02 -1.332247
2012-03 -0.741645
2012-04 0.094321
2012-05 -0.438813
Freq: M, dtype: float64
In [359]: ps.to_timestamp()
Out[359]:
2012-01-01 -0.898547
2012-02-01 -1.332247
2012-03-01 -0.741645
2012-04-01 0.094321
2012-05-01   -0.438813
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or end of the period:
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [364]: ts.head()
Out[364]:
1990-03-01 09:00 -0.564874
1990-06-01 09:00 -1.426510
1990-09-01 09:00 1.295437
1990-12-01 09:00 1.124017
1991-03-01 09:00 0.840428
Freq: H, dtype: float64
If you have data that is outside of the Timestamp bounds, see Timestamp limitations, then you can use a
PeriodIndex and/or Series of Periods to do computations.
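The construction of span appears to have been elided in this extract; it was presumably something like (a sketch consistent with the output below):
span = pd.period_range('1215-01-01', '1381-01-01', freq='D')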
In [366]: span
Out[366]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632, freq='D')
In [368]: s
Out[368]:
0 20121231
1 20141130
2 99991231
dtype: int64
.....:
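The helper applied below appears to have been elided. Converting an integer of the form YYYYMMDD to a daily Period could look like this (a sketch; the name conv matches the calls that follow):
def conv(x):
    # split a YYYYMMDD integer into year / month / day and build a daily Period
    return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq='D')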
In [370]: s.apply(conv)
Out[370]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: object
In [371]: s.apply(conv)[2]
Out[371]: Period('9999-12-31', 'D')
In [373]: span
Out[373]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]', freq='D')
19.12 Time Zone Handling
Pandas provides rich support for working with timestamps in different time zones using the pytz and dateutil
libraries. dateutil currently is only supported for fixed offset and tzfile zones. The default library is pytz. Support
for dateutil is provided for compatibility with other applications, e.g. if you use dateutil in other Python
packages.
To supply the time zone, you can use the tz keyword to date_range and other functions. Dateutil time zone strings
are distinguished from pytz time zones by starting with dateutil/.
• In pytz you can find a list of common (and less common) time zones using from pytz import
common_timezones, all_timezones.
• dateutil uses the OS timezones so there isn’t a fixed list available. For common zones, the names are the
same as pytz.
# pytz
In [376]: rng_pytz = pd.date_range('3/6/2012 00:00', periods=10, freq='D',
.....: tz='Europe/London')
.....:
In [377]: rng_pytz.tz
Out[377]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [378]: rng_dateutil = pd.date_range('3/6/2012 00:00', periods=10, freq='D',
.....: tz='dateutil/Europe/London')
.....:
In [379]: rng_dateutil.tz
Out[379]: tzfile('/usr/share/zoneinfo/Europe/London')
In [381]: rng_utc.tz
Out[381]: tzutc()
Note that the UTC timezone is a special case in dateutil and should be constructed explicitly as an instance of
dateutil.tz.tzutc. You can also construct other timezones explicitly first, which gives you more control over
which time zone is used:
# pytz
In [382]: tz_pytz = pytz.timezone('Europe/London')
# dateutil
In [385]: tz_dateutil = dateutil.tz.gettz('Europe/London')
Timestamps, like Python's datetime.datetime object, can be either time zone naive or time zone aware. Naive
time series and DatetimeIndex objects can be localized using tz_localize:
In [388]: ts = pd.Series(np.random.randn(len(rng)), rng)
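A sketch of the localization step whose result is used below (rng is assumed to be a naive daily range such as pd.date_range('3/6/2012 00:00', periods=15, freq='D'); the name ts_utc matches the later examples):
ts_utc = ts.tz_localize('UTC')   # attach a time zone to a naive series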
Again, you can explicitly construct the timezone object first. You can use the tz_convert method to convert
tz-aware pandas objects to another time zone:
In [391]: ts_utc.tz_convert('US/Eastern')
Out[391]:
2012-03-05 19:00:00-05:00 0.037206
2012-03-06 19:00:00-05:00 2.313998
2012-03-07 19:00:00-05:00 1.458296
2012-03-08 19:00:00-05:00 -0.620431
2012-03-09 19:00:00-05:00 -0.000111
2012-03-10 19:00:00-05:00 -0.342783
2012-03-11 20:00:00-04:00 -0.664322
2012-03-12 20:00:00-04:00 0.654814
2012-03-13 20:00:00-04:00 1.550680
2012-03-14 20:00:00-04:00 0.174511
2012-03-15 20:00:00-04:00 1.360491
2012-03-16 20:00:00-04:00 0.799737
2012-03-17 20:00:00-04:00 0.449149
2012-03-18 20:00:00-04:00 0.111346
2012-03-19 20:00:00-04:00 -0.435531
Freq: D, dtype: float64
Warning: Be wary of conversions between libraries. For some zones pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual timezones than for ‘standard’ zones like US/
Eastern.
Warning: Be aware that a timezone definition across versions of timezone libraries may not be considered equal.
This may cause problems when working with stored data that is localized using one version and operated on with
a different version. See here for how to handle such a situation.
Warning: It is incorrect to pass a timezone directly into the datetime.datetime constructor (e.g.,
datetime.datetime(2011, 1, 1, tz=timezone('US/Eastern'))). Instead, the datetime needs to be
localized using the localize method on the pytz time zone.
Under the hood, all timestamps are stored in UTC. Scalar values from a DatetimeIndex with a time zone will have
their fields (day, hour, minute) localized to the time zone. However, timestamps with the same UTC value are still
considered to be equal even if they are in different time zones:
In [392]: rng_eastern = rng_utc.tz_convert('US/Eastern')
In [394]: rng_eastern[5]
Out[394]: Timestamp('2012-03-10 19:00:00-0500', tz='US/Eastern', freq='D')
In [395]: rng_berlin[5]
Out[395]: Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [398]: rng_berlin[5]
Out[398]: Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [399]: rng_eastern[5].tz_convert('Europe/Berlin')
Out[399]: Timestamp('2012-03-11 01:00:00+0100', tz='Europe/Berlin')
In [401]: rng[5].tz_localize('Asia/Shanghai')
Out[401]: Timestamp('2012-03-11 00:00:00+0800', tz='Asia/Shanghai')
Operations between Series in different time zones will yield UTC Series, aligning the data on the UTC times-
tamps:
In [402]: eastern = ts_utc.tz_convert('US/Eastern')
In [405]: result
Out[405]:
2012-03-06 00:00:00+00:00 0.074412
In [406]: result.index
Out[406]:
In [408]: didx
Out[408]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00', '2014-08-01 12:00:00-04:00',
'2014-08-01 13:00:00-04:00', '2014-08-01 14:00:00-04:00',
'2014-08-01 15:00:00-04:00', '2014-08-01 16:00:00-04:00',
'2014-08-01 17:00:00-04:00', '2014-08-01 18:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [409]: didx.tz_localize(None)
Out[409]:
In [410]: didx.tz_convert(None)
Out[410]:
In some cases, localize cannot determine the DST and non-DST hours when there are duplicates. This often hap-
pens when reading files or database records that simply duplicate the hours. Passing ambiguous='infer' into
tz_localize will attempt to determine the right offset. Below the top example will fail as it contains ambiguous
times and the bottom will infer the right offset.
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
In [414]: rng_hourly_eastern.tolist()
Out[414]:
[Timestamp('2011-11-06 00:00:00-0400', tz='US/Eastern'),
Timestamp('2011-11-06 01:00:00-0400', tz='US/Eastern'),
Timestamp('2011-11-06 01:00:00-0500', tz='US/Eastern'),
Timestamp('2011-11-06 02:00:00-0500', tz='US/Eastern'),
Timestamp('2011-11-06 03:00:00-0500', tz='US/Eastern')]
In addition to ‘infer’, there are several other arguments supported. Passing an array-like of bools or 0s/1s where True
represents a DST hour and False a non-DST hour, allows for distinguishing more than one DST transition (e.g., if you
have multiple records in a database each with their own DST transition). Or passing ‘NaT’ will fill in transition times
with not-a-time values. These methods are available in the DatetimeIndex constructor as well as tz_localize.
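A sketch of these alternatives, applied to the hourly range used above (the explicit flag values are assumptions consistent with the timestamps shown):
rng_hourly.tz_localize('US/Eastern', ambiguous=[True, True, False, False, False])   # True marks a DST hour
rng_hourly.tz_localize('US/Eastern', ambiguous='NaT')                               # ambiguous times become NaT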
In [419]: didx
Out[419]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00', '2014-08-01 12:00:00-04:00',
'2014-08-01 13:00:00-04:00', '2014-08-01 14:00:00-04:00',
'2014-08-01 15:00:00-04:00', '2014-08-01 16:00:00-04:00',
'2014-08-01 17:00:00-04:00', '2014-08-01 18:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [420]: didx.tz_localize(None)
Out[420]:
In [421]: didx.tz_convert(None)
Out[421]:
Series/DatetimeIndex with a timezone naive value are represented with a dtype of datetime64[ns].
In [424]: s_naive
Out[424]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
Series/DatetimeIndex with a timezone aware value are represented with a dtype of datetime64[ns,
tz].
In [426]: s_aware
Out[426]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Both of these Series can be manipulated via the .dt accessor, see here.
For example, to localize and convert a naive stamp to timezone aware.
In [427]: s_naive.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[427]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Furthermore, you can .astype(...) timezone aware (and naive). This operation is effectively a localize AND
convert on a naive stamp, and a convert on an aware stamp.
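The outputs below presumably come from calls along these lines (a sketch, using the s_aware and s_naive series defined above):
s_aware.astype('datetime64[ns]')        # drop the time zone; the values are shown in UTC
s_naive.astype('datetime64[ns, CET]')   # localize to UTC, then convert to CET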
0 2013-01-01 05:00:00
1 2013-01-02 05:00:00
2 2013-01-03 05:00:00
dtype: datetime64[ns]
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note: Using the .values accessor on a Series returns a NumPy array of the data. These values are converted
to UTC, as NumPy does not currently support timezones (even though it is printing in the local timezone!).
In [431]: s_naive.values
Out[431]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [432]: s_aware.values
Out[432]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
Further note that once converted to a NumPy array these values lose the timezone information.
In [433]: pd.Series(s_aware.values)
Out[433]:
0 2013-01-01 05:00:00
1 2013-01-02 05:00:00
2 2013-01-03 05:00:00
dtype: datetime64[ns]
In [434]: pd.Series(s_aware.values).dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[434]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
TWENTY
TIME DELTAS
Timedeltas are differences in times, expressed in different units, e.g. days, hours, minutes, seconds. They can be
both positive and negative.
Timedelta is a subclass of datetime.timedelta, and behaves in a similar manner, but allows compatibility
with np.timedelta64 types as well as a host of custom representation, parsing, and attributes.
20.1 Parsing
# strings
In [1]: pd.Timedelta('1 days')
Out[1]: Timedelta('1 days 00:00:00')
# like datetime.timedelta
# note: these MUST be specified as keyword arguments
In [5]: pd.Timedelta(days=1, seconds=1)
Out[5]: Timedelta('1 days 00:00:01')
# from a datetime.timedelta/np.timedelta64
In [7]: pd.Timedelta(datetime.timedelta(days=1, seconds=1))
Out[7]: Timedelta('1 days 00:00:01')
# a NaT
In [10]: pd.Timedelta('nan')
Out[10]: NaT
In [11]: pd.Timedelta('nat')
Out[11]: NaT
In [13]: pd.Timedelta('P0DT0H0M0.000000123S')
Out[13]: Timedelta('0 days 00:00:00.000000')
New in version 0.23.0: Added constructor for ISO 8601 Duration strings
DateOffsets (Day, Hour, Minute, Second, Milli, Micro, Nano) can also be used in construction.
In [14]: pd.Timedelta(Second(2))
Out[14]: Timedelta('0 days 00:00:02')
20.1.1 to_timedelta
Using the top-level pd.to_timedelta, you can convert a scalar, array, list, or Series from a recognized timedelta
format / value into a Timedelta type. It will construct Series if the input is a Series, a scalar if the input is scalar-like,
otherwise it will output a TimedeltaIndex.
You can parse a single string to a Timedelta:
In [17]: pd.to_timedelta('15.5us')
Out[17]: Timedelta('0 days 00:00:00.000015')
or a list/array of strings:
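For example (a sketch; the strings are illustrative, not from the original session):
pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan'])   # -> TimedeltaIndex(..., dtype='timedelta64[ns]', freq=None)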
Pandas represents Timedeltas in nanosecond resolution using 64 bit integers. As such, the 64 bit integer limits
determine the Timedelta limits.
In [21]: pd.Timedelta.min
Out[21]: Timedelta('-106752 days +00:12:43.145224')
In [22]: pd.Timedelta.max
Out[22]: Timedelta('106751 days 23:47:16.854775')
20.2 Operations
You can operate on Series/DataFrames and construct timedelta64[ns] Series through subtraction operations on
datetime64[ns] Series, or Timestamps.
In [23]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
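The frame shown below was presumably built roughly as follows (a sketch consistent with the output):
td = pd.Series([pd.Timedelta(days=i) for i in range(3)])   # 0, 1, 2 days
df = pd.DataFrame(dict(A=s, B=td))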
In [26]: df
Out[26]:
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days
In [28]: df
Out[28]:
A B C
0 2012-01-01 0 days 2012-01-01
1 2012-01-02 1 days 2012-01-03
2 2012-01-03 2 days 2012-01-05
In [29]: df.dtypes
Out[29]:
A datetime64[ns]
B timedelta64[ns]
C datetime64[ns]
dtype: object
In [30]: s - s.max()
Out[30]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [31]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[31]:
In [32]: s + datetime.timedelta(minutes=5)
Out[32]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [33]: s + Minute(5)
Out[33]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
0 2012-01-01 00:05:00.005
1 2012-01-02 00:05:00.005
2 2012-01-03 00:05:00.005
dtype: datetime64[ns]
In [36]: y
Out[36]:
0 0 days
1 1 days
2 2 days
dtype: timedelta64[ns]
In [37]: y = s - s.shift()
In [38]: y
Out[38]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
In [40]: y
Out[40]:
0 NaT
1 NaT
2 1 days
dtype: timedelta64[ns]
Operands can also appear in a reversed order (a singular object operated with a Series):
In [41]: s.max() - s
Out[41]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [42]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[42]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]
In [43]: datetime.timedelta(minutes=5) + s
Out[43]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
min, max and the corresponding idxmin, idxmax operations are supported on frames:
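The frame used below was presumably constructed along these lines (a sketch; the exact offsets are assumptions chosen to match the displayed values):
A = s - pd.Timestamp('20120101') - pd.Timedelta('00:05:05')
B = s - pd.Series(pd.date_range('2012-1-2', periods=3, freq='D'))
df = pd.DataFrame(dict(A=A, B=B))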
In [47]: df
Out[47]:
A B
0 -1 days +23:54:55 -1 days
1 0 days 23:54:55 -1 days
2 1 days 23:54:55 -1 days
In [48]: df.min()
Out[48]:
A -1 days +23:54:55
B -1 days +00:00:00
dtype: timedelta64[ns]
In [49]: df.min(axis=1)
Out[49]:
0 -1 days
1 -1 days
2 -1 days
dtype: timedelta64[ns]
In [50]: df.idxmin()
Out[50]:
A 0
B 0
dtype: int64
In [51]: df.idxmax()
Out[51]:
A 2
B 0
dtype: int64
min, max, idxmin, idxmax operations are supported on Series as well. A scalar result will be a Timedelta.
In [52]: df.min().max()
Out[52]: Timedelta('-1 days +23:54:55')
In [53]: df.min(axis=1).min()
Out[53]: Timedelta('-1 days +00:00:00')
In [54]: df.min().idxmax()
Out[54]: 'A'
In [55]: df.min(axis=1).idxmin()
Out[55]: 0
You can fillna on timedeltas. Integers will be interpreted as seconds. You can pass a timedelta to get a particular value.
In [56]: y.fillna(0)
Out[56]:
0 0 days
1 0 days
2 1 days
dtype: timedelta64[ns]
In [57]: y.fillna(10)
Out[57]:
0 0 days 00:00:10
1 0 days 00:00:10
2 1 days 00:00:00
dtype: timedelta64[ns]
0 -1 days +00:00:05
1 -1 days +00:00:05
2 1 days 00:00:00
dtype: timedelta64[ns]
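The scalar used in the negation examples below was presumably created like this (a sketch consistent with the value shown):
td1 = pd.Timedelta('-1 days 2 hours 3 seconds')   # displays as '-2 days +21:59:57'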
In [60]: td1
Out[60]: Timedelta('-2 days +21:59:57')
In [61]: -1 * td1
Out[61]: Timedelta('1 days 02:00:03')
In [62]: - td1
Out[62]: Timedelta('1 days 02:00:03')
In [63]: abs(td1)
Out[63]: Timedelta('1 days 02:00:03')
20.3 Reductions
Numeric reduction operations for timedelta64[ns] will return Timedelta objects. As usual, NaT values are skipped
during evaluation.
In [64]: y2 = pd.Series(pd.to_timedelta(['-1 days +00:00:05', 'nat', '-1 days +00:00:05', '1 days']))
In [65]: y2
Out[65]:
0 -1 days +00:00:05
1 NaT
2 -1 days +00:00:05
3 1 days 00:00:00
dtype: timedelta64[ns]
In [67]: y2.median()
Out[67]: Timedelta('-1 days +00:00:05')
In [68]: y2.quantile(.1)
Out[68]: Timedelta('-1 days +00:00:05')
In [69]: y2.sum()
Out[69]: Timedelta('-1 days +00:00:10')
20.4 Frequency Conversion
Timedelta Series, TimedeltaIndex, and Timedelta scalars can be converted to other 'frequencies' by dividing
by another timedelta, or by astyping to a specific timedelta type. These operations yield Series and propagate NaT ->
nan. Note that division by the NumPy scalar is true division, while astyping is equivalent to floor division.
In [70]: td = pd.Series(pd.date_range('20130101', periods=4)) - \
....: pd.Series(pd.date_range('20121201', periods=4))
....:
In [73]: td
Out[73]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 NaT
dtype: timedelta64[ns]
# to days
In [74]: td / np.timedelta64(1, 'D')
Out[74]:
0 31.000000
1 31.000000
2 31.003507
3 NaN
dtype: float64
In [75]: td.astype('timedelta64[D]')
Out[75]:
0 31.0
1 31.0
2 31.0
3 NaN
dtype: float64
# to seconds
In [76]: td / np.timedelta64(1, 's')
Out[76]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
dtype: float64
In [77]: td.astype('timedelta64[s]')
Out[77]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
dtype: float64
0 1.018501
1 1.018501
2 1.018617
3 NaN
dtype: float64
In [79]: td * -1
Out[79]:
0 -31 days +00:00:00
1 -31 days +00:00:00
2 -32 days +23:54:57
3 NaT
dtype: timedelta64[ns]
0 31 days 00:00:00
1 62 days 00:00:00
2 93 days 00:15:09
3 NaT
dtype: timedelta64[ns]
Rounded division (floor-division) of a timedelta64[ns] Series by a scalar Timedelta gives a series of integers.
The mod (%) and divmod operations are defined for Timedelta when operating with another timedelta-like or with
a numeric argument.
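For example (a sketch, not from the original session):
pd.Series(pd.to_timedelta(np.arange(5), unit='d')) // pd.Timedelta(days=2)   # floor-division -> integer values
pd.Timedelta(hours=37) % pd.Timedelta(hours=2)                               # -> Timedelta('0 days 01:00:00')
divmod(pd.Timedelta(hours=25), pd.Timedelta(hours=24))                       # -> (quotient, remainder Timedelta)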
20.5 Attributes
You can access various components of the Timedelta or TimedeltaIndex directly using the attributes
days, seconds, microseconds, nanoseconds. These are identical to the values returned by
datetime.timedelta, in that, for example, the .seconds attribute represents the number of seconds >= 0 and < 1 day.
These are signed according to whether the Timedelta is signed.
These operations can also be directly accessed via the .dt property of the Series as well.
Note: The attributes are NOT the displayed values of the Timedelta. Use .components to retrieve the
displayed values.
For a Series:
In [86]: td.dt.days
Out[86]:
0 31.0
1 31.0
2 31.0
3 NaN
dtype: float64
In [87]: td.dt.seconds
You can access the value of the fields for a scalar Timedelta directly.
In [89]: tds.days
Out[89]: 31
In [90]: tds.seconds
Out[90]: 303
In [91]: (-tds).seconds
Out[91]: 86097
You can use the .components property to access a reduced form of the timedelta. This returns a DataFrame
indexed similarly to the Series. These are the displayed values of the Timedelta.
In [92]: td.dt.components
Out[92]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 31.0 0.0 0.0 0.0 0.0 0.0 0.0
1 31.0 0.0 0.0 0.0 0.0 0.0 0.0
2 31.0 0.0 5.0 3.0 0.0 0.0 0.0
3 NaN NaN NaN NaN NaN NaN NaN
In [93]: td.dt.components.seconds
Out[93]:
0 0.0
1 0.0
2 3.0
3 NaN
Name: seconds, dtype: float64
You can convert a Timedelta to an ISO 8601 Duration string with the .isoformat method.
New in version 0.20.0.
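For example (a sketch):
pd.Timedelta(days=6, minutes=50, seconds=3,
             milliseconds=10, microseconds=10, nanoseconds=12).isoformat()
# -> 'P6DT0H50M3.010010012S'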
20.6 TimedeltaIndex
To generate an index with time delta, you can use either the TimedeltaIndex or the timedelta_range()
constructor.
Using TimedeltaIndex you can pass string-like, Timedelta, timedelta, or np.timedelta64 objects.
Passing np.nan/pd.NaT/nat will represent missing values.
....:
Out[95]:
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00',
'2 days 00:00:02'],
dtype='timedelta64[ns]', freq=None)
Various combinations of start, end, and periods can be used with timedelta_range:
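For example (a sketch, not from the original session):
pd.timedelta_range(start='1 days', periods=5, freq='D')
pd.timedelta_range(start='1 days', end='2 days', freq='30T')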
Similarly to the other datetime-like indices, DatetimeIndex and PeriodIndex, you can use
TimedeltaIndex as the index of pandas objects.
In [103]: s = pd.Series(np.arange(100),
.....: index=pd.timedelta_range('1 days', periods=100, freq='h'))
.....:
In [104]: s
Out[104]:
1 days 00:00:00 0
1 days 01:00:00 1
1 days 02:00:00 2
1 days 03:00:00 3
1 days 04:00:00 4
1 days 05:00:00 5
1 days 06:00:00 6
..
4 days 21:00:00 93
4 days 22:00:00 94
4 days 23:00:00 95
5 days 00:00:00 96
5 days 01:00:00 97
5 days 02:00:00 98
5 days 03:00:00 99
Freq: H, Length: 100, dtype: int64
Furthermore, you can use partial string selection and the range will be inferred:
In [108]: s['1 day':'1 day 5 hours']
Out[108]:
1 days 00:00:00 0
1 days 01:00:00 1
1 days 02:00:00 2
1 days 03:00:00 3
1 days 04:00:00 4
1 days 05:00:00 5
Freq: H, dtype: int64
20.6.3 Operations
Finally, the combination of TimedeltaIndex with DatetimeIndex allow certain combination operations that
are NaT preserving:
In [109]: tdi = pd.TimedeltaIndex(['1 days', pd.NaT, '2 days'])
In [110]: tdi.tolist()
Out[110]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]
In [112]: dti.tolist()
Out[112]:
[Timestamp('2013-01-01 00:00:00', freq='D'),
Timestamp('2013-01-02 00:00:00', freq='D'),
Timestamp('2013-01-03 00:00:00', freq='D')]
20.6.4 Conversions
Similarly to frequency conversion on a Series above, you can convert these indices to yield another Index.
In [116]: tdi.astype('timedelta64[s]')
Out[116]: Float64Index([86400.0, nan, 172800.0], dtype='float64')
Scalar type ops work as well. These can potentially return a different type of index.
20.7 Resampling
In [122]: s.resample('D').mean()
Out[122]:
1 days 11.5
2 days 35.5
3 days 59.5
4 days 83.5
5 days 97.5
Freq: D, dtype: float64
TWENTYONE
CATEGORICAL DATA
This is an introduction to pandas categorical data type, including a short comparison with R’s factor.
Categoricals are a pandas data type corresponding to categorical variables in statistics. A categorical variable takes
on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class,
blood type, country affiliation, observation time or rating via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or
‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, . . . ) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical
order of the values. Internally, the data structure consists of a categories array and an integer array of codes which
point to the real value in the categories array.
The categorical data type is useful in the following cases:
• A string variable consisting of only a few different values. Converting such a string variable to a categorical
variable will save some memory, see here.
• The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”). By converting to a
categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of
the lexical order, see here.
• As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use
suitable statistical methods or plot types).
See also the API docs on categoricals.
In [2]: s
Out[2]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [3]: df = pd.DataFrame({"A":["a","b","c","a"]})
In [5]: df
Out[5]:
A B
0 a a
1 b b
2 c c
3 a a
By using special functions, such as cut(), which groups data into discrete bins. See the example on tiling in the
docs.
In [9]: df.head(10)
Out[9]:
value group
0 65 60 - 69
1 49 40 - 49
2 56 50 - 59
3 43 40 - 49
4 43 40 - 49
5 91 90 - 99
6 32 30 - 39
7 87 80 - 89
8 36 30 - 39
9 8 0 - 9
In [11]: s = pd.Series(raw_cat)
In [12]: s
Out[12]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b, c, d]
In [13]: df = pd.DataFrame({"A":["a","b","c","a"]})
In [15]: df
Out[15]:
A B
0 a NaN
1 b b
2 c c
3 a NaN
In [16]: df.dtypes
Out[16]:
A object
B category
dtype: object
Similar to the previous section where a single column was converted to categorical, all columns in a DataFrame can
be batch converted to categorical either during or after construction.
This can be done during construction by specifying dtype="category" in the DataFrame constructor:
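For example (a sketch consistent with the dtypes and categories shown below):
df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')}, dtype="category")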
In [18]: df.dtypes
Out[18]:
A category
B category
dtype: object
Note that the categories present in each column differ; the conversion is done column by column, so only labels present
in a given column are categories:
In [19]: df['A']
Out[19]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): [a, b, c]
In [20]: df['B']
Out[20]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): [b, c, d]
In [23]: df_cat.dtypes
Out[23]:
A category
B category
dtype: object
In [24]: df_cat['A']
Out[24]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): [a, b, c]
In [25]: df_cat['B']
Out[25]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): [b, c, d]
In the examples above where we passed dtype='category', we used the default behavior:
1. Categories are inferred from the data.
2. Categories are unordered.
To control those behaviors, instead of passing 'category', use an instance of CategoricalDtype.
In [30]: s_cat
Out[30]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b < c < d]
Similarly, a CategoricalDtype can be used with a DataFrame to ensure that categories are consistent among
all columns.
In [34]: df_cat['A']
Out[34]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (4, object): [a < b < c < d]
In [35]: df_cat['B']
Out[35]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (4, object): [a < b < c < d]
Note: To perform table-wise conversion, where all labels in the entire DataFrame are used as categories for each
column, the categories parameter can be determined programmatically by categories = pd.unique(df.
values.ravel()).
If you already have codes and categories, you can use the from_codes() constructor to save the factorize
step during normal constructor mode:
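For example (a sketch, not from the original session):
splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])
s = pd.Series(pd.Categorical.from_codes(splitter, categories=["train", "test"]))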
To get back to the original Series or NumPy array, use Series.astype(original_dtype) or np.
asarray(categorical):
In [38]: s = pd.Series(["a","b","c","a"])
In [39]: s
Out[39]:
0 a
1 b
2 c
3 a
dtype: object
In [40]: s2 = s.astype('category')
In [41]: s2
Out[41]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [42]: s2.astype(str)
Out[42]:
0 a
1 b
2 c
3 a
dtype: object
In [43]: np.asarray(s2)
Out[43]: array(['a', 'b', 'c', 'a'], dtype=object)
Note: In contrast to R's factor function, categorical data does not convert input values to strings; categories will end
up the same data type as the original values.
Note: In contrast to R’s factor function, there is currently no way to assign/change labels at creation time. Use
categories to change the categories after creation time.
21.2 CategoricalDtype
A categorical's type is fully described by
1. categories: a sequence of unique values and no missing values
2. ordered: a boolean
This information can be stored in a CategoricalDtype. The categories argument is optional, which implies
that the actual categories should be inferred from whatever is present in the data when the pandas.Categorical
is created. The categories are assumed to be unordered by default.
In [47]: CategoricalDtype()
Out[47]: CategoricalDtype(categories=None, ordered=None)
A CategoricalDtype can be used in any place pandas expects a dtype. For example pandas.read_csv(),
pandas.DataFrame.astype(), or in the Series constructor.
Note: As a convenience, you can use the string 'category' in place of a CategoricalDtype when you want
the default behavior of the categories being unordered, and equal to the set values present in the array. In other words,
dtype='category' is equivalent to dtype=CategoricalDtype().
Two instances of CategoricalDtype compare equal whenever they have the same categories and order. When
comparing two unordered categoricals, the order of the categories is not considered.
In [51]: c1 == 'category'
Out[51]: True
21.3 Description
Using describe() on categorical data will produce similar output to a Series or DataFrame of type string.
In [54]: df.describe()
Out[54]:
cat s
count 3 3
unique 2 2
top c c
freq 2 2
In [55]: df["cat"].describe()
Out[55]:
count 3
unique 2
top c
freq 2
Name: cat, dtype: object
21.4 Working with categories
Categorical data has a categories and an ordered property, which list their possible values and whether the ordering
matters or not. These properties are exposed as s.cat.categories and s.cat.ordered. If you don't manually
specify categories and ordering, they are inferred from the passed arguments.
In [57]: s.cat.categories
Out[57]: Index(['a', 'b', 'c'], dtype='object')
In [58]: s.cat.ordered
Out[58]: False
In [60]: s.cat.categories
Out[60]: Index(['c', 'b', 'a'], dtype='object')
In [61]: s.cat.ordered
Out[61]: False
Note: New categorical data are not automatically ordered. You must explicitly pass ordered=True to indicate an
ordered Categorical.
Note: The result of unique() is not always the same as Series.cat.categories, because Series.
unique() has a couple of guarantees, namely that it returns categories in the order of appearance, and it only
includes values that are actually present.
In [62]: s = pd.Series(list('babc')).astype(CategoricalDtype(list('abcd')))
In [63]: s
Out[63]:
0 b
1 a
2 b
3 c
dtype: category
Categories (4, object): [a, b, c, d]
# categories
In [64]: s.cat.categories
Out[64]: Index(['a', 'b', 'c', 'd'], dtype='object')
# uniques
In [65]: s.unique()
Out[65]:
[b, a, c]
Categories (3, object): [b, a, c]
Renaming categories is done by assigning new values to the Series.cat.categories property or by using the
rename_categories() method:
In [66]: s = pd.Series(["a","b","c","a"], dtype="category")
In [67]: s
Out[67]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [69]: s
Out[69]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [1, 2, 3]
In [71]: s
Out[71]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
In [73]: s
Out[73]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
Note: In contrast to R’s factor, categorical data can have categories of other types than string.
Note: Be aware that assigning new categories is an inplace operation, while most other operations under
Series.cat by default return a new Series of dtype category.
In [75]: try:
....: s.cat.categories = [1,2,np.nan]
....: except ValueError as e:
....: print("ValueError: " + str(e))
....:
ValueError: Categorial categories cannot be null
In [76]: s = s.cat.add_categories([4])
In [77]: s.cat.categories
Out[77]: Index(['Group a', 'Group b', 'Group c', 4], dtype='object')
In [78]: s
Out[78]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (4, object): [Group a, Group b, Group c, 4]
Removing categories can be done by using the remove_categories() method. Values which are removed are
replaced by np.nan:
In [79]: s = s.cat.remove_categories([4])
In [80]: s
Out[80]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
In [83]: s.cat.remove_unused_categories()
Out[83]:
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]
If you want to remove and add new categories in one step (which has some speed advantage), or simply set the
categories to a predefined scale, use set_categories().
In [85]: s
Out[85]:
0 one
1 two
2 four
3 -
dtype: category
Categories (4, object): [-, four, one, two]
In [86]: s = s.cat.set_categories(["one","two","three","four"])
In [87]: s
Out[87]:
0 one
1 two
2 four
3 NaN
dtype: category
Categories (4, object): [one, two, three, four]
Note: Be aware that Categorical.set_categories() cannot know whether some category is omitted in-
tentionally or because it is misspelled or (under Python3) due to a type difference (e.g., NumPy S1 dtype and Python
strings). This can result in surprising behaviour!
21.5 Sorting and Order
If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and
certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.
In [88]: s = pd.Series(pd.Categorical(["a","b","c","a"], ordered=False))
In [89]: s.sort_values(inplace=True)
In [90]: s = pd.Series(["a","b","c","a"]).astype(
....: CategoricalDtype(ordered=True)
....: )
....:
In [91]: s.sort_values(inplace=True)
In [92]: s
Out[92]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]
You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered().
These will by default return a new object.
In [94]: s.cat.as_ordered()
Out[94]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]
In [95]: s.cat.as_unordered()
Out[95]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a, b, c]
Sorting will use the order defined by categories, not any lexical order present on the data type. This is even true for
strings and numeric data:
In [96]: s = pd.Series([1,2,3,1], dtype="category")
In [98]: s
Out[98]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [99]: s.sort_values(inplace=True)
In [100]: s
Out[100]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
21.5.1 Reordering
In [104]: s
Out[104]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [105]: s.sort_values(inplace=True)
In [106]: s
Out[106]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
Note: Note the difference between assigning new categories and reordering the categories: the first renames categories
and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still
be sorted last. Reordering means that the way values are sorted is different afterwards, but not that individual values
in the Series are changed.
Note: If the Categorical is not ordered, Series.min() and Series.max() will raise TypeError. Nu-
meric operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to
compute the mean between two values if the length of an array is even) do not work and raise a TypeError.
A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns. The ordering
of the categorical is determined by the categories of that column.
In [111]: dfs.sort_values(by=['A','B'])
Out[111]:
A B
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
2 e 1
3 e 2
21.6 Comparisons
Note: Any “non-equality” comparisons of categorical data with a Series, np.array, list or categorical data
with different categories or ordering will raise a TypeError because custom categories ordering could be interpreted
in two ways: one with taking into account the ordering and one without.
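The three objects compared below were presumably created along these lines (a sketch consistent with the outputs that follow):
from pandas.api.types import CategoricalDtype

cat = pd.Series([1, 2, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
cat_base = pd.Series([2, 2, 2]).astype(CategoricalDtype([3, 2, 1], ordered=True))
cat_base2 = pd.Series([2, 2, 2]).astype(CategoricalDtype(ordered=True))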
In [115]: cat
Out[115]:
0 1
1 2
2 3
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [116]: cat_base
Out[116]:
0 2
1 2
2 2
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [117]: cat_base2
Out[117]:
0 2
1 2
Comparing to a categorical with the same categories and ordering or to a scalar works:
Equality comparisons work with any list-like object of same length and scalars:
In [122]: cat == 2
Out[122]:
0 False
1 True
2 False
dtype: bool
This doesn’t work because the categories are not the same:
In [123]: try:
.....: cat > cat_base2
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: Categoricals can only be compared if 'categories' are the same. Categories are different lengths
If you want to do a “non-equality” comparison of a categorical series with a list-like object which is not categorical
data, you need to be explicit and convert the categorical data back to the original values:
In [125]: try:
.....: cat > base
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.
When you compare two unordered categoricals with the same categories, the order is not considered:
In [127]: c1 = pd.Categorical(['a', 'b'], categories=['a', 'b'], ordered=False)
In [129]: c1 == c2
Out[129]: array([ True, True], dtype=bool)
21.7 Operations
Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with
categorical data:
Series methods like Series.value_counts() will use all categories, even if some categories are not present
in the data:
In [130]: s = pd.Series(pd.Categorical(["a","b","c","c"], categories=["c","a","b","d"]))
In [131]: s.value_counts()
Out[131]:
c 2
b 1
a 1
d 0
dtype: int64
In [133]: df = pd.DataFrame({"cats":cats,"values":[1,2,2,2,3,4,5]})
In [134]: df.groupby("cats").mean()
Out[134]:
values
cats
a 1.0
In [137]: df2.groupby(["cats","B"]).mean()
Out[137]:
values
cats B
a c 1.0
d 2.0
b c 3.0
d 4.0
c c NaN
d NaN
Pivot tables:
In [138]: raw_cat = pd.Categorical(["a","a","b","b"], categories=["a","b","c"])
21.8 Data munging
The optimized pandas data access methods .loc, .iloc, .at, and .iat, work as normal. The only difference is
the return type (for getting) and that only values already in categories can be assigned.
21.8.1 Getting
If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.
In [141]: idx = pd.Index(["h","i","j","k","l","m","n",])
In [145]: df.iloc[2:4,:]
Out[145]:
In [146]: df.iloc[2:4,:].dtypes
Out[146]:
cats category
values int64
dtype: object
In [147]: df.loc["h":"j","cats"]
Out[147]:
h a
i b
j b
Name: cats, dtype: category
Categories (3, object): [a, b, c]
cats values
i b 2
j b 2
k b 2
An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype
object:
Returning a single item from categorical data will also return the value, not a categorical of length “1”.
In [150]: df.iat[0,0]
Out[150]: 'a'
Note: This is in contrast to R's factor function, where factor(c(1,2,3))[1] returns a single value factor.
To get a single value Series of type category, you pass in a list with a single value:
In [153]: df.loc[["h"],"cats"]
Out[153]:
h x
...
The accessors .dt and .str will work if the s.cat.categories are of an appropriate type:
In [154]: str_s = pd.Series(list('aabb'))
In [156]: str_cat
Out[156]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): [a, b]
In [157]: str_cat.str.contains("a")
Out[157]:
0 True
1 True
2 False
3 False
dtype: bool
In [160]: date_cat
Out[160]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]
In [161]: date_cat.dt.day
Out[161]:
0 1
1 2
2 3
3 4
4 5
dtype: int64
Note: The returned Series (or DataFrame) is of the same type as if you used the .str.<method> / .dt.<method> on a Series of that type (and not of type category!).
That means that the returned values from methods and properties on the accessors of a Series and the returned
values from methods and properties on the accessors of this Series transformed to one of type category will be
equal:
Note: The work is done on the categories and then a new Series is constructed. This has some performance
implication if you have a Series of type string, where lots of elements are repeated (i.e. the number of unique
elements in the Series is a lot smaller than the length of the Series). In this case it can be faster to convert the
original Series to one of type category and use .str.<method> or .dt.<property> on that.
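A minimal sketch of the conversion described in the note, using an illustrative Series of heavily repeated strings (the data here is hypothetical):

import pandas as pd

s = pd.Series(["apple", "banana"] * 100000)   # few unique values, many repeats
s_cat = s.astype("category")

# The string method runs once per category rather than once per element
upper_obj = s.str.upper()
upper_cat = s_cat.str.upper()
(upper_obj == upper_cat).all()   # same values either way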
21.8.3 Setting
Setting values in a categorical column (or Series) works as long as the value is included in the categories:
In [171]: df
Out[171]:
cats values
h a 1
i a 1
j b 2
k b 2
l a 1
m a 1
n a 1
In [172]: try:
   .....:     df.iloc[2:4,:] = [["c",3],["c",3]]
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....:
Setting values by assigning categorical data will also check that the categories match:
In [174]: df
Out[174]:
cats values
h a 1
i a 1
j a 2
k a 2
l a 1
m a 1
n a 1
In [175]: try:
   .....:     df.loc["j":"k","cats"] = pd.Categorical(["b","b"], categories=["a","b","c"])
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....:
Assigning a Categorical to parts of a column of other types will use the values:
In [179]: df
Out[179]:
a b
0 1 a
1 b a
2 b b
3 1 b
4 1 a
In [180]: df.dtypes
Out[180]:
a object
b object
dtype: object
21.8.4 Merging
You can concat two DataFrames containing categorical data together, but the categories of these categoricals need
to be the same:
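The construction of res shown below was dropped in extraction; a minimal sketch of the pattern (the frames here are illustrative and match the output that follows):

import pandas as pd

cat = pd.Series(["a", "b"], dtype="category")
vals = [1, 2]
df = pd.DataFrame({"cats": cat, "vals": vals})
df2 = pd.DataFrame({"cats": cat, "vals": vals})

res = pd.concat([df, df2])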
In [185]: res
Out[185]:
cats vals
0 a 1
1 b 2
0 a 1
1 b 2
In [186]: res.dtypes
Out[186]:
cats category
vals int64
dtype: object
In this case the categories are not the same, and therefore an error is raised:
In [189]: try:
   .....:     pd.concat([df,df_different])
   .....: except ValueError as e:
   .....:     print("ValueError: " + str(e))
   .....:
21.8.5 Unioning
By default, the resulting categories will be ordered as they appear in the data. If you want the categories to be lexsorted,
use the sort_categories=True argument.
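A minimal sketch of union_categoricals and the sort_categories option just described; the function lives in pandas.api.types (the example values are illustrative):

import pandas as pd
from pandas.api.types import union_categoricals

a = pd.Categorical(["b", "c"])
b = pd.Categorical(["a", "b"])

union_categoricals([a, b])                         # categories in order of appearance
union_categoricals([a, b], sort_categories=True)   # categories lexsorted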
union_categoricals also works with the "easy" case of combining two categoricals with the same categories and
order information (e.g. categoricals that you could also append to one another).
The below raises TypeError because the categories are ordered and not identical.
union_categoricals() also works with a CategoricalIndex, or Series containing categorical data, but
note that the resulting array will always be a plain Categorical:
Note: union_categoricals may recode the integer codes for categories when combining categoricals. This is
likely what you want, but if you are relying on the exact numbering of the categories, be aware.
In [206]: c1
Out[206]:
[b, c]
Categories (2, object): [b, c]
# "b" is coded to 0
In [207]: c1.codes
Out[207]: array([0, 1], dtype=int8)
In [208]: c2
Out[208]:
[a, b]
Categories (2, object): [a, b]
# "b" is coded to 1
In [209]: c2.codes
Out[209]: array([0, 1], dtype=int8)
In [211]: c
Out[211]:
[b, c, a, b]
Categories (3, object): [b, c, a]
21.8.6 Concatenation
This section describes concatenations specific to category dtype. See Concatenating objects for general description.
By default, Series or DataFrame concatenation which contains the same categories results in category dtype,
otherwise results in object dtype. Use .astype or union_categoricals to get category result.
# same categories
In [213]: s1 = pd.Series(['a', 'b'], dtype='category')
# different categories
In [216]: s3 = pd.Series(['b', 'c'], dtype='category')
[a, b, b, c]
Categories (3, object): [a, b, c]
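The concat calls themselves were dropped in extraction; a short sketch of the behavior described above, reusing s1 and s3 and assuming an s2 whose categories match s1's:

import pandas as pd
from pandas.api.types import union_categoricals

s1 = pd.Series(['a', 'b'], dtype='category')
s2 = pd.Series(['a', 'b', 'a'], dtype='category')   # same categories as s1
s3 = pd.Series(['b', 'c'], dtype='category')        # different categories

pd.concat([s1, s2]).dtype          # category: the categories match
pd.concat([s1, s3]).dtype          # object: the categories differ
union_categoricals([s1, s3])       # forces a (plain Categorical) category result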
You can write data that contains category dtypes to a HDFStore. See here for an example and caveats.
It is also possible to write data to and read data from Stata format files. See here for an example and caveats.
Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and
ordering). So if you read back the CSV file you have to convert the relevant columns back to category and assign the
right categories and category ordering.
In [220]: s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))
In [225]: df.to_csv(csv)
In [227]: df2.dtypes
Out[227]:
Unnamed: 0 int64
cats object
vals int64
dtype: object
In [228]: df2["cats"]
Out[228]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: object
In [229]: df2["cats"] = df2["cats"].astype("category")
In [230]: df2["cats"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"],
   .....:                                 inplace=True)
   .....:
In [231]: df2.dtypes
Out[231]:
Unnamed: 0 int64
cats category
vals int64
dtype: object
In [232]: df2["cats"]
Out[232]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See
the Missing Data section.
Missing values should not be included in the Categorical’s categories, only in the values. Instead, it is under-
stood that NaN is different, and is always a possibility. When working with the Categorical’s codes, missing values
will always have a code of -1.
In [233]: s = pd.Series(["a", "b", np.nan, "a"], dtype="category")
In [235]: s.cat.codes
Out[235]:
0 0
1 1
2 -1
3 0
dtype: int8
Methods for working with missing data, e.g. isna(), fillna(), dropna(), all work normally:
In [236]: s = pd.Series(["a", "b", np.nan], dtype="category")
In [237]: s
Out[237]:
0 a
1 b
2 NaN
dtype: category
Categories (2, object): [a, b]
In [238]: pd.isna(s)
Out[238]:
0 False
1 False
2 True
dtype: bool
In [239]: s.fillna("a")
Out[239]:
0    a
1    b
2    a
dtype: category
Categories (2, object): [a, b]
21.12 Gotchas
The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In
contrast, an object dtype is a constant times the length of the data.
In [240]: s = pd.Series(['foo','bar']*1000)
# object dtype
In [241]: s.nbytes
Out[241]: 16000
# category dtype
In [242]: s.astype('category').nbytes
Out[242]: 2016
Note: If the number of categories approaches the length of the data, the Categorical will use nearly the same or
more memory than an equivalent object dtype representation.
# object dtype
In [244]: s.nbytes
Out[244]: 16000
# category dtype
In [245]: s.astype('category').nbytes
Out[245]: 20000
Currently, categorical data and the underlying Categorical is implemented as a Python object and not as a low-
level NumPy array dtype. This leads to some problems.
NumPy itself doesn’t know about the new dtype:
In [246]: try:
   .....:     np.dtype("category")
   .....: except TypeError as e:
   .....:     print("TypeError: " + str(e))
   .....:
TypeError: data type "category" not understood
In [248]: try:
   .....:     np.dtype(dtype)
   .....: except TypeError as e:
   .....:     print("TypeError: " + str(e))
   .....:
TypeError: data type not understood
Using NumPy functions on a Series of type category should not work as Categoricals are not numeric data (even
in the case that .categories is numeric).
In [253]: s = pd.Series(pd.Categorical([1,2,3,4]))

In [254]: try:
   .....:     np.sum(s)
   .....: except TypeError as e:
   .....:     print("TypeError: " + str(e))
   .....:
TypeError: Categorical cannot perform the operation sum
Pandas currently does not preserve the dtype in apply functions: if you apply along rows you get a Series of object
dtype (the same as getting a row; getting one element will return a basic type) and applying along columns will also
convert to object.
In [255]: df = pd.DataFrame({"a":[1,2,3,4],
.....: "b":["a","b","c","d"],
.....: "cats":pd.Categorical([1,2,3,2])})
.....:
a int64
b object
cats category
dtype: object
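The apply calls that demonstrate this were dropped from the extraction; a short sketch of the pattern, reusing the df constructed just above:

# Along rows: each row becomes an object-dtype Series, so the categorical values
# come back as plain Python objects
df.apply(lambda row: type(row["cats"]), axis=1)

# Along columns: inspect what apply receives for each column
df.apply(lambda col: col.dtype, axis=0)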
CategoricalIndex is a type of index that is useful for supporting indexing with duplicates. This is a container
around a Categorical and allows efficient indexing and storage of an index with a large number of duplicated
elements. See the advanced indexing docs for a more detailed explanation.
Setting the index will create a CategoricalIndex:
In [262]: df.index
Out[262]: CategoricalIndex([1, 2, 3, 4], categories=[4, 2, 3, 1], ordered=False, dtype='category')
strings values
4 d 1
2 b 2
3 c 3
1 a 4
Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to
the Series will in most cases change the original Categorical:
In [266]: cat
Out[266]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [267]: s.iloc[0:2] = 10
In [268]: cat
Out[268]:
[10, 10, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [269]: df = pd.DataFrame(s)
In [271]: cat
Out[271]:
[5, 5, 3, 5]
Categories (5, int64): [1, 2, 3, 4, 5]
In [274]: cat
Out[274]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [275]: s.iloc[0:2] = 10
In [276]: cat
Out[276]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
Note: This also happens in some cases when you supply a NumPy array instead of a Categorical: using an
int array (e.g. np.array([1,2,3,4])) will exhibit the same behavior, while using a string array (e.g.
np.array(["a","b","c","a"])) will not.
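To avoid this aliasing altogether you can copy the data at construction time; a minimal sketch (the setup values here are assumptions mirroring the example above):

import pandas as pd

cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
s = pd.Series(cat, copy=True)   # copy the Categorical instead of sharing it

s.iloc[0:2] = 10
cat                             # unchanged: the original Categorical is not modified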
22 Visualization
We provide the basics in pandas to easily create decent looking plots. See the ecosystem section for visualization
libraries that go beyond the basics documented here.
We will demonstrate the basics, see the cookbook for some advanced strategies.
The plot method on Series and DataFrame is just a simple wrapper around plt.plot():
In [3]: ts = ts.cumsum()
In [4]: ts.plot()
Out[4]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d5690710>
If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [6]: df = df.cumsum()
You can plot one column versus another using the x and y keywords in plot():
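The example that followed this sentence was dropped; a minimal sketch of the call, with hypothetical data and column names:

import numpy as np
import pandas as pd

df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df3))))

df3.plot(x='A', y='B')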
Note: For more formatting and styling options, see formatting below.
Plotting methods allow for a handful of plot styles other than the default line plot. These methods can be provided as
the kind keyword argument to plot(), and include:
• ‘bar’ or ‘barh’ for bar plots
• ‘hist’ for histogram
• ‘box’ for boxplot
• ‘kde’ or ‘density’ for density plots
• ‘area’ for area plots
• ‘scatter’ for scatter plots
• ‘hexbin’ for hexagonal bin plots
• ‘pie’ for pie plots
For example, a bar plot can be created the following way:
In [11]: plt.figure();
In [12]: df.iloc[5].plot(kind='bar');
You can also create these other plots using the methods DataFrame.plot.<kind> instead of providing the kind
keyword argument. This makes it easier to discover plot methods and the specific arguments they use:
In [13]: df = pd.DataFrame()
In [14]: df.plot.<TAB>
df.plot.area     df.plot.bar      df.plot.barh     df.plot.box
df.plot.density  df.plot.hexbin   df.plot.hist     df.plot.kde
df.plot.line     df.plot.pie      df.plot.scatter
In addition to these kinds, there are the DataFrame.hist() and DataFrame.boxplot() methods, which use a separate
interface.
Finally, there are several plotting functions in pandas.plotting that take a Series or DataFrame as an argu-
ment. These include:
• Scatter Matrix
• Andrews Curves
• Parallel Coordinates
• Lag Plot
• Autocorrelation Plot
• Bootstrap Plot
• RadViz
Plots may also be adorned with errorbars or tables.
For labeled, non-time series data, you may wish to produce a bar plot:
In [15]: plt.figure();
In [18]: df2.plot.bar();
In [19]: df2.plot.bar(stacked=True);
In [20]: df2.plot.barh(stacked=True);
22.2.2 Histograms
In [22]: plt.figure();
In [23]: df4.plot.hist(alpha=0.5)
Out[23]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20cf918908>
A histogram can be stacked using stacked=True. Bin size can be changed using the bins keyword.
In [24]: plt.figure();
You can pass other keywords supported by matplotlib hist. For example, horizontal and cumulative histograms can
be drawn by orientation='horizontal' and cumulative=True.
In [26]: plt.figure();
See the hist method and the matplotlib hist documentation for more.
The existing interface DataFrame.hist to plot histograms can still be used.
In [28]: plt.figure();
In [29]: df['A'].diff().hist()
Out[29]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d550efd0>
In [30]: plt.figure()
Out[30]: <Figure size 640x480 with 0 Axes>
In [35]: df.plot.box()
Out[35]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20cf9400f0>
Boxplots can be colorized by passing the color keyword. You can pass a dict whose keys are boxes, whiskers,
medians and caps. If some keys are missing in the dict, default colors are used for the corresponding artists.
Also, boxplot has a sym keyword to specify the fliers' style.
When you pass other types of arguments via the color keyword, they will be passed directly to matplotlib for all the boxes,
whiskers, medians and caps colorization.
The colors are applied to every box to be drawn. If you want more complicated colorization, you can get each drawn
artist by passing return_type.
Also, you can pass other keywords supported by matplotlib boxplot. For example, horizontal and custom-positioned
boxplots can be drawn with the vert=False and positions keywords.
See the boxplot method and the matplotlib boxplot documentation for more.
The existing interface DataFrame.boxplot to plot boxplots can still be used.
In [39]: df = pd.DataFrame(np.random.rand(10,5))
In [40]: plt.figure();
In [41]: bp = df.boxplot()
You can create a stratified boxplot using the by keyword argument to create groupings. For instance,
In [44]: plt.figure();
In [45]: bp = df.boxplot(by='X')
You can also pass a subset of columns to plot, as well as group by multiple columns:
In [49]: plt.figure();
In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes",
"dict", "both", None}. Faceting, created by DataFrame.boxplot with the by keyword, will affect the
output type as well:
In [55]: bp = df_box.boxplot(by='g')
The subplots above are split by the numeric columns first, then the value of the g column. Below the subplots are first
split by the value of g, then by the numeric columns.
In [56]: bp = df_box.groupby('g').boxplot()
You can create area plots with Series.plot.area() and DataFrame.plot.area(). Area plots are stacked
by default. To produce a stacked area plot, each column must contain either all positive or all negative values.
When input data contains NaN, it will be automatically filled with 0. If you want to drop or fill by different values, use
dataframe.dropna() or dataframe.fillna() before calling plot.
In [58]: df.plot.area();
To produce an unstacked plot, pass stacked=False. Alpha value is set to 0.5 unless otherwise specified:
In [59]: df.plot.area(stacked=False);
Scatter plots can be drawn using the DataFrame.plot.scatter() method. A scatter plot requires numeric
columns for the x and y axes. These can be specified by the x and y keywords.
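A minimal sketch of the call described above, with hypothetical data and column names:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(50, 4), columns=['a', 'b', 'c', 'd'])
df.plot.scatter(x='a', y='b')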
To plot multiple column groups on a single axes, repeat the plot method specifying the target ax. It is recommended to
specify the color and label keywords to distinguish each group.
The keyword c may be given as the name of a column to provide colors for each point:
You can pass other keywords supported by matplotlib scatter. The example below shows a bubble chart using a
column of the DataFrame as the bubble size.
See the scatter method and the matplotlib scatter documentation for more.
You can create hexagonal bin plots with DataFrame.plot.hexbin(). Hexbin plots can be a useful alternative
to scatter plots if your data are too dense to plot each point individually.
A useful keyword argument is gridsize; it controls the number of hexagons in the x-direction, and defaults to 100.
A larger gridsize means more, smaller bins.
By default, a histogram of the counts around each (x, y) point is computed. You can specify alternative aggregations
by passing values to the C and reduce_C_function arguments. C specifies the value at each (x, y) point and
reduce_C_function is a function of one argument that reduces all the values in a bin to a single number (e.g.
mean, max, sum, std). In this example the positions are given by columns a and b, while the value is given by
column z. The bins are aggregated with NumPy’s max function.
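A minimal sketch matching the description above (positions in columns a and b, values in column z, aggregated with NumPy's max); the data itself is hypothetical:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])
df['b'] = df['b'] + np.arange(1000)
df['z'] = np.random.uniform(0, 3, 1000)

df.plot.hexbin(x='a', y='b', C='z', reduce_C_function=np.max, gridsize=25)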
See the hexbin method and the matplotlib hexbin documentation for more.
You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie(). If your data includes any
NaN, they will be automatically filled with 0. A ValueError will be raised if there are any negative values in your
data.
For pie plots it’s best to use square figures, i.e. a figure aspect ratio 1. You can create the figure with equal width and
height, or force the aspect ratio to be equal after plotting by calling ax.set_aspect('equal') on the returned
axes object.
Note that pie plots with DataFrame require that you either specify a target column by the y argument or
subplots=True. When y is specified, a pie plot of the selected column will be drawn. If subplots=True is spec-
ified, pie plots for each column are drawn as subplots. A legend will be drawn in each pie plot by default; specify
legend=False to hide it.
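A minimal sketch of the two forms described above (hypothetical data; y selects a single column, subplots=True draws one pie per column):

import numpy as np
import pandas as pd

df = pd.DataFrame(3 * np.random.rand(4, 2),
                  index=['a', 'b', 'c', 'd'], columns=['x', 'y'])

df.plot.pie(y='x', figsize=(5, 5))              # pie of the selected column
df.plot.pie(subplots=True, figsize=(8, 4))      # one pie per column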
You can use the labels and colors keywords to specify the labels and colors of each wedge.
Warning: Most pandas plots use the label and color arguments (note the lack of “s” on those). To be
consistent with matplotlib.pyplot.pie() you must use labels and colors.
If you want to hide wedge labels, specify labels=None. If fontsize is specified, the value will be applied to
wedge labels. Also, other keywords supported by matplotlib.pyplot.pie() can be used.
If you pass values whose sum total is less than 1.0, matplotlib draws a semicircle.
Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are
dropped, left out, or filled depending on the plot type.
If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled,
consider using fillna() or dropna() before plotting.
These functions can be imported from pandas.plotting and take a Series or DataFrame as an argument.
You can create a scatter plot matrix using the scatter_matrix method in pandas.plotting:
You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods.
In [84]: ser.plot.kde()
Out[84]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d5961a20>
Andrews curves allow one to plot multivariate data as a large number of curves that are created using the attributes
of samples as coefficients for Fourier series, see the Wikipedia entry for more information. By coloring these curves
differently for each class it is possible to visualize data clustering. Curves belonging to samples of the same class will
usually be closer together and form larger structures.
Note: The “Iris” dataset is available here.
In [87]: plt.figure()
Out[87]: <Figure size 640x480 with 0 Axes>
Parallel coordinates is a plotting technique for plotting multivariate data, see the Wikipedia entry for an introduction.
Parallel coordinates allows one to see clusters in data and to estimate other statistics visually. Using parallel coordinates
points are represented as connected line segments. Each vertical line represents one attribute. One set of connected
line segments represents one data point. Points that tend to cluster will appear closer together.
In [91]: plt.figure()
Out[91]: <Figure size 640x480 with 0 Axes>
Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the
lag plot. Non-random structure implies that the underlying data are not random. The lag argument may be passed,
and when lag=1 the plot is essentially data[:-1] vs. data[1:].
In [94]: plt.figure()
Out[94]: <Figure size 640x480 with 0 Axes>
In [96]: lag_plot(data)
Out[96]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20f78a5278>
Autocorrelation plots are often used for checking randomness in time series. This is done by computing autocorrela-
tions for data values at varying time lags. If the time series is random, such autocorrelations should be near zero for any
and all time-lag separations. If the time series is non-random then one or more of the autocorrelations will be significantly
non-zero. The horizontal lines displayed in the plot correspond to 95% and 99% confidence bands. The dashed line is
the 99% confidence band. See the Wikipedia entry for more about autocorrelation plots.
In [98]: plt.figure()
Out[98]: <Figure size 640x480 with 0 Axes>
In [100]: autocorrelation_plot(data)
Out[100]: <matplotlib.axes._subplots.AxesSubplot at 0x7f21345fc080>
Bootstrap plots are used to visually assess the uncertainty of a statistic, such as mean, median, midrange, etc. A
random subset of a specified size is selected from a data set, the statistic in question is computed for this subset and
the process is repeated a specified number of times. The resulting plots and histograms are what constitute the bootstrap
plot.
22.4.8 RadViz
RadViz is a way of visualizing multi-variate data. It is based on a simple spring tension minimization algorithm.
Basically you set up a bunch of points in a plane. In our case they are equally spaced on a unit circle. Each point
represents a single attribute. You then pretend that each sample in the data set is attached to each of these points
by a spring, the stiffness of which is proportional to the numerical value of that attribute (the values are normalized to
the unit interval). The point in the plane where our sample settles (where the forces acting on our sample are at an
equilibrium) is where a dot representing our sample will be drawn. Depending on which class the sample belongs to, it
will be colored differently. See the R package Radviz for more information.
Note: The “Iris” dataset is available here.
In [106]: plt.figure()
Out[106]: <Figure size 640x480 with 0 Axes>
From version 1.5 and up, matplotlib offers a range of preconfigured plotting styles. Setting the style can be
used to easily give plots the general look that you want. Setting the style is as easy as calling
matplotlib.style.use(my_plot_style) before creating your plot. For example you could write
matplotlib.style.use('ggplot') for ggplot-style plots.
You can see the various available style names at matplotlib.style.available and it’s very easy to try them
out.
Most plotting methods have a set of keyword arguments that control the layout and formatting of the returned plot:
For each kind of plot (e.g. line, bar, scatter) any additional keyword arguments are passed along to the corresponding
matplotlib function (ax.plot(), ax.bar(), ax.scatter()). These can be used to control additional styling,
beyond what pandas provides.
You may set the legend argument to False to hide the legend, which is shown by default.
In [110]: df = df.cumsum()
In [111]: df.plot(legend=False)
Out[111]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20cf7f5c88>
22.5.4 Scales
In [113]: ts = np.exp(ts.cumsum())
In [114]: ts.plot(logy=True)
Out[114]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d4b82438>
In [115]: df.A.plot()
Out[115]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d48e9c50>
To plot some columns of a DataFrame on the secondary y-axis, give the column names to the secondary_y keyword:
In [117]: plt.figure()
Out[117]: <Figure size 640x480 with 0 Axes>
Note that the columns plotted on the secondary y-axis are automatically marked with "(right)" in the legend. To turn off
the automatic marking, use the mark_right=False keyword:
In [121]: plt.figure()
Out[121]: <Figure size 640x480 with 0 Axes>
pandas includes automatic tick resolution adjustment for regular frequency time-series data. For limited cases where
pandas cannot infer the frequency information (e.g., in an externally created twinx), you can choose to suppress this
behavior for alignment purposes.
Here is the default behavior, notice how the x-axis tick labeling is performed:
In [123]: plt.figure()
Out[123]: <Figure size 640x480 with 0 Axes>
In [124]: df.A.plot()
Out[124]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20c09d86d8>
In [125]: plt.figure()
Out[125]: <Figure size 640x480 with 0 Axes>
In [126]: df.A.plot(x_compat=True)
Out[126]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20c0909e48>
If you have more than one plot that needs to be suppressed, the use method in pandas.plotting.plot_params
can be used in a with statement:
In [127]: plt.figure()
Out[127]: <Figure size 640x480 with 0 Axes>
22.5.8 Subplots
Each Series in a DataFrame can be plotted on a different axis with the subplots keyword:
The layout of subplots can be specified by the layout keyword. It can accept (rows, columns). The layout
keyword can be used in hist and boxplot also. If the input is invalid, a ValueError will be raised.
The number of axes which can be contained by rows x columns specified by layout must be larger than the number
of required subplots. If layout can contain more axes than required, blank axes are not drawn. Similar to a NumPy
array’s reshape method, you can use -1 for one dimension to automatically calculate the number of rows or columns
needed, given the other.
The required number of columns (3) is inferred from the number of series to plot and the given number of rows (2).
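The example that sentence refers to was dropped in extraction; a sketch consistent with the description, using hypothetical data (six series, two rows given, three columns inferred):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 6)).cumsum()

# 2 rows are given; -1 lets pandas infer the number of columns (3 here)
df.plot(subplots=True, layout=(2, -1), figsize=(6, 6))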
You can pass multiple axes created beforehand as a list-like via the ax keyword. This allows more complicated layouts.
The passed axes must be the same number as the subplots being drawn.
When multiple axes are passed via the ax keyword, the layout, sharex and sharey keywords don't affect the
output. You should explicitly pass sharex=False and sharey=False, otherwise you will see a warning.
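A minimal sketch of passing pre-created axes via ax, with hypothetical data (note the explicit sharex/sharey, as recommended above):

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4)).cumsum()

fig, axes = plt.subplots(2, 2, figsize=(6, 6))
df.plot(subplots=True, ax=list(axes.ravel()),
        sharex=False, sharey=False)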
# Group by index labels and take the means and standard deviations for each group
In [145]: gp3 = df3.groupby(level=('letter', 'word'))
In [148]: means
Out[148]:
data1 data2
letter word
a bar 3.5 6.0
foo 2.5 5.5
b bar 2.5 5.5
foo 3.0 4.5
In [149]: errors
Out[149]:
data1 data2
letter word
a bar 0.707107 1.414214
foo 0.707107 0.707107
b bar 0.707107 0.707107
foo 1.414214 0.707107
# Plot
In [150]: fig, ax = plt.subplots()
Plotting with matplotlib table is now supported in DataFrame.plot() and Series.plot() with a table
keyword. The table keyword can accept bool, DataFrame or Series. The simple way to draw a table is to
specify table=True. Data will be transposed to meet matplotlib’s default layout.
Also, you can pass a different DataFrame or Series to the table keyword. The data will be drawn as displayed
in print method (not transposed automatically). If required, it should be transposed manually as seen in the example
below.
There also exists a helper function pandas.plotting.table, which creates a table from DataFrame or
Series, and adds it to a matplotlib.Axes instance. This function can accept keywords that the matplotlib
table has.
Note: You can get table instances on the axes using axes.tables property for further decorations. See the mat-
plotlib table documentation for more.
22.5.12 Colormaps
A potential issue when plotting a large number of columns is that it can be difficult to distinguish some series due to
repetition in the default colors. To remedy this, DataFrame plotting supports the use of the colormap argument,
which accepts either a Matplotlib colormap or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available here.
As matplotlib does not directly support colormaps for line-based plots, the colors are selected based on an even spacing
determined by the number of columns in the DataFrame. There is no consideration made for background color, so
some colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass colormap='cubehelix'.
In [163]: df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
In [164]: df = df.cumsum()
In [165]: plt.figure()
Out[165]: <Figure size 640x480 with 0 Axes>
In [166]: df.plot(colormap='cubehelix')
Out[166]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20d573bd30>
In [168]: plt.figure()
Out[168]: <Figure size 640x480 with 0 Axes>
In [169]: df.plot(colormap=cm.cubehelix)
Out[169]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20c029e3c8>
Colormaps can also be used in other plot types, like bar charts:
In [171]: dd = dd.cumsum()
In [172]: plt.figure()
Out[172]: <Figure size 640x480 with 0 Axes>
In [173]: dd.plot.bar(colormap='Greens')
Out[173]: <matplotlib.axes._subplots.AxesSubplot at 0x7f20c02d2e10>
In [174]: plt.figure()
Out[174]: <Figure size 640x480 with 0 Axes>
In [176]: plt.figure()
Out[176]: <Figure size 640x480 with 0 Axes>
In some situations it may still be preferable or necessary to prepare plots directly with matplotlib, for instance when a
certain type of plot or customization is not (yet) supported by pandas. Series and DataFrame objects behave like
arrays and can therefore be passed directly to matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date indices, thereby extending date and
time support to practically all plot types available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster when plotting a large number of points.
In [178]: price = pd.Series(np.random.randn(150).cumsum(),
.....: index=pd.date_range('2000-1-1', periods=150, freq='B'))
.....:
In [179]: ma = price.rolling(20).mean()
In [181]: plt.figure()
Out[181]: <Figure size 640x480 with 0 Axes>
Warning: The rplot trellis plotting interface has been removed. Please use external packages like seaborn for
similar but more refined functionality and refer to our 0.18.1 documentation here for how to convert to using it.
23 Styling
np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[0, 2] = np.nan
In [3]: df.style
Out[3]: <pandas.io.formats.style.Styler at 0x7f05cb19e240>
Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_
method defined on it so it is rendered automatically. If you want the actual HTML back for further processing or
for writing to a file, call the .render() method, which returns a string.
The above output looks very similar to the standard DataFrame HTML representation. But we’ve done some work
behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.
In [4]: df.style.highlight_null().render().split('\n')[:10]
Out[4]: ['<style type="text/css" >',
' #T_4c0fa58e_98a7_11e8_9d82_ff3af6a67056row0_col2 {',
' background-color: red;',
' }</style> ',
'<table id="T_4c0fa58e_98a7_11e8_9d82_ff3af6a67056" > ',
'<thead> <tr> ',
' <th class="blank level0" ></th> ',
' <th class="col_heading level0 col0" >A</th> ',
' <th class="col_heading level0 col1" >B</th> ',
' <th class="col_heading level0 col2" >C</th> ']
The row0_col2 is the identifier for that particular cell. We’ve also prepended each row/column identifier with a
UUID unique to each DataFrame so that the style from one doesn’t collide with the styling from another within the
same notebook or page (you can set the uuid if you’d like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches
those up with the CSS classes that identify each cell.
Let’s write a simple style function that will color negative numbers red and positive numbers black.
In [5]: def color_negative_red(val):
    """
    Takes a scalar and returns a string with
    the css property `'color: red'` for negative
    strings, black otherwise.
    """
    color = 'red' if val < 0 else 'black'
    return 'color: %s' % color
In this case, the cell's style depends only on its own value. That means we should use the Styler.applymap
method which works elementwise.
In [6]: s = df.style.applymap(color_negative_red)
s
Out[6]: <pandas.io.formats.style.Styler at 0x7f05c3697cc0>
Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to
be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in
a <style> tag. This will be a common theme.
Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function
returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column. We can’t use .applymap anymore since
that operated elementwise. Instead, we’ll turn to .apply which operates columnwise (or rowwise using the axis
keyword). Later on we’ll see that something like highlight_max is already defined on Styler so you wouldn’t
need to write this yourself.
In this case the input is a Series, one column at a time. Notice that the output shape of highlight_max matches
the input shape, an array with len(s) items.
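The definition of highlight_max was dropped from this extraction; a minimal sketch of a columnwise style function matching the description (one CSS string per element of the input Series):

def highlight_max(s):
    '''
    Highlight the maximum in a Series yellow.
    '''
    is_max = s == s.max()
    return ['background-color: yellow' if v else '' for v in is_max]

df.style.apply(highlight_max)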
We encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.
In [9]: df.style.\
applymap(color_negative_red).\
apply(highlight_max)
Out[9]: <pandas.io.formats.style.Styler at 0x7f05c36c5320>
Above we used Styler.apply to pass in each column one at a time.
Debugging Tip: If you’re having trouble writing your style function, try just passing it into DataFrame.apply. Inter-
nally, Styler.apply uses DataFrame.apply so the result should be the same.
What if you wanted to highlight just the maximum value in the entire table? Use .apply(function,
axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let’s try that
next.
We’ll rewrite our highlight-max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from
.apply(axis=None)). We’ll also allow the color to be adjustable, to demonstrate that .apply, and .applymap
pass along keyword arguments.
In [10]: def highlight_max(data, color='yellow'):
    '''
    highlight the maximum in a Series or DataFrame
    '''
    attr = 'background-color: {}'.format(color)
    if data.ndim == 1:  # Series from .apply(axis=0) or axis=1
        is_max = data == data.max()
        return [attr if v else '' for v in is_max]
    else:  # from .apply(axis=None)
        is_max = data == data.max().max()
        return pd.DataFrame(np.where(is_max, attr, ''),
                            index=data.index, columns=data.columns)
When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index
and column labels.
In [11]: df.style.apply(highlight_max, color='darkorange', axis=None)
Out[11]: <pandas.io.formats.style.Styler at 0x7f05c3680eb8>
Style functions should return strings with one or more CSS attribute: value pairs, delimited by semicolons. Use
• Styler.applymap(func) for elementwise styles
• Styler.apply(func, axis=0) for columnwise styles
• Styler.apply(func, axis=1) for rowwise styles
Both Styler.apply, and Styler.applymap accept a subset keyword. This allows you to apply styles to
specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similarly to slicing a DataFrame.
• A scalar is treated as a column label
• A list (or series or numpy array)
• A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one.
In [12]: df.style.apply(highlight_max, subset=['B', 'C', 'D'])
Out[12]: <pandas.io.formats.style.Styler at 0x7f05c3680fd0>
For row and column slicing, any valid indexer to .loc will work.
In [13]: df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
Out[13]: <pandas.io.formats.style.Styler at 0x7f05c3680c50>
Only label-based slicing is supported right now, not positional.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a
functools.partial, partialing out that keyword.
We distinguish the display value from the actual value in Styler. To control the display value (the text printed in
each cell), use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a
single value and returns a string.
In [14]: df.style.format("{:.2%}")
Out[14]: <pandas.io.formats.style.Styler at 0x7f05cb19e0b8>
Finally, we expect certain styling functions to be common enough that we’ve included a few “built-in” to the Styler,
so you don’t have to write them yourself.
In [17]: df.style.highlight_null(null_color='red')
Out[17]: <pandas.io.formats.style.Styler at 0x7f05c36c5550>
You can create “heatmaps” with the background_gradient method. These require matplotlib, and we’ll use
Seaborn to get a nice colormap.
In [18]: import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
/opt/conda/envs/pandas/lib/python3.6/site-packages/matplotlib/colors.py:504: RuntimeWarning: invalid
xa[xa < 0] = -1
Out[18]: <pandas.io.formats.style.Styler at 0x7f05c3653160>
Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend
the range of your data by low and high percent so that when we convert the colors, the colormap’s entire range isn’t
used. This is useful so that you can actually read the text still.
In [19]: # Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')
/opt/conda/envs/pandas/lib/python3.6/site-packages/matplotlib/colors.py:504: RuntimeWarning: invalid
xa[xa < 0] = -1
Out[19]: <pandas.io.formats.style.Styler at 0x7f05c36c5390>
In [20]: # Compress the color range
(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
/opt/conda/envs/pandas/lib/python3.6/site-packages/matplotlib/colors.py:504: RuntimeWarning: invalid
xa[xa < 0] = -1
Out[20]: <pandas.io.formats.style.Styler at 0x7f05ba1fd320>
New in version 0.20.0 is the ability to customize further the bar chart: You can now have the df.style.bar be
centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of
the cell), and you can pass a list of [color_negative, color_positive].
Here’s how you can change the above with the new align='mid' option:
In [24]: df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
Out[24]: <pandas.io.formats.style.Styler at 0x7f05b9f57748>
The following example aims to give a highlight of the behavior of the new align options:
In [25]: import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
head = """
<table>
<thead>
<th>Align</th>
<th>All Negative</th>
<th>All Positive</th>
<th>Both Neg and Pos</th>
</thead>
<tbody>
"""
aligns = ['left','zero','mid']
for align in aligns:
    row = "<tr><th>{}</th>".format(align)
    for serie in [test1, test2, test3]:
        s = serie.copy()
        s.name = ''
        row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
                                                           color=['#d65f5f', '#5fba7d'],
                                                           width=100).render())
    row += '</tr>'
    head += row

head += """
</tbody>
</table>"""
HTML(head)
Out[25]: <IPython.core.display.HTML object>
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame.
Export the style with df1.style.export, and import it on the second DataFrame with df2.style.use:
In [26]: df2 = -df
style1 = df.style.applymap(color_negative_red)
style1
Out[26]: <pandas.io.formats.style.Styler at 0x7f05affda0f0>
In [27]: style2 = df2.style
style2.use(style1.export())
style2
Out[27]: <pandas.io.formats.style.Styler at 0x7f05ba1d09b0>
Notice that you’re able share the styles even though they’re data aware. The styles are re-evaluated on the new
DataFrame they’ve been used upon.
You’ve seen a few methods for data-driven styling. Styler also provides a few other options for styles that don’t
depend on the data.
• precision
• captions
• table-wide styles
• hiding the index or columns
Each of these can be specified in two ways:
• A keyword argument to Styler.__init__
• A call to one of the .set_ or .hide_ methods, e.g. .set_caption or .hide_columns
The best method to use depends on the context. Use the Styler constructor when building many styled DataFrames
that should all share the same properties. For interactive use, the .set_ and .hide_ methods are more convenient.
23.6.1 Precision
You can control the precision of floats using pandas’ regular display.precision option.
In [28]: with pd.option_context('display.precision', 2):
html = (df.style
.applymap(color_negative_red)
.apply(highlight_max))
html
Out[28]: <pandas.io.formats.style.Styler at 0x7f05affda940>
Or through a set_precision method.
In [29]: df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
23.6.2 Captions
The next option you have is "table styles". These are styles that apply to the table as a whole, but don't look at the
data. Certain stylings, including pseudo-selectors like :hover, can only be used this way.
In [31]: from IPython.display import HTML

def hover(hover_color="#ffff99"):
    return dict(selector="tr:hover",
                props=[("background-color", "%s" % hover_color)])

styles = [
    hover(),
    dict(selector="th", props=[("font-size", "150%"),
                               ("text-align", "center")]),
    dict(selector="caption", props=[("caption-side", "bottom")])
]

html = (df.style.set_table_styles(styles)
          .set_caption("Hover to highlight."))
html
Out[31]: <pandas.io.formats.style.Styler at 0x7f05affdae10>
table_styles should be a list of dictionaries. Each dictionary should have the selector and props keys.
The value for selector should be a valid CSS selector. Recall that all the styles are already attached to an id,
unique to each Styler. This selector is in addition to that id. The value for props should be a list of tuples of
('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand. We hope to collect some useful ones
either in pandas, or preferably in a new package that builds on top of the tools here.
The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering
by calling Styler.hide_columns and passing in the name of a column, or a slice of columns.
In [32]: df.style.hide_index()
Out[32]: <pandas.io.formats.style.Styler at 0x7f05afff1c18>
In [33]: df.style.hide_columns(['C','D'])
23.6.6 Limitations
23.6.7 Terms
• Style function: a function that’s passed into Styler.apply or Styler.applymap and returns values like
'css attribute: value'
• Builtin style functions: style functions that are methods on Styler
• table style: a dictionary with the two keys selector and props. selector is the CSS selector that props
will apply to. props is a list of (attribute, value) tuples. A list of table styles passed into Styler.
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify())
Out[36]: <pandas.io.formats.style.Styler at 0x7f05affdaeb8>
• vertical-align
• white-space: nowrap
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
In [37]: df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')
23.9 Extensibility
The core of pandas is, and will remain, its “high-performance, easy-to-use data structures”. With that in mind, we
hope that DataFrame.style accomplishes two goals
• Provide an API that is pleasing to use interactively and is “good enough” for many tasks
• Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we’ll link to it.
23.9.1 Subclassing
If the default template doesn’t quite suit your needs, you can subclass Styler and extend or override the template. We’ll
show an example of extending the default template to insert a custom header before each table.
In [38]: from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
This next cell writes the custom template. We extend the template html.tpl, which comes with pandas.
In [40]: %%file templates/myhtml.tpl
{% extends "html.tpl" %}
{% block table %}
<h1>{{ table_title|default("My Table") }}</h1>
{{ super() }}
{% endblock table %}
Overwriting templates/myhtml.tpl
Now that we’ve created a template, we need to set up a subclass of Styler that knows about it.
In [41]: class MyStyler(Styler):
    env = Environment(
        loader=ChoiceLoader([
            FileSystemLoader("templates"),  # contains ours
            Styler.loader,  # the default
        ])
    )
    template = env.get_template("myhtml.tpl")
Notice that we include the original loader in our environment’s loader. That’s because we extend the original template,
so the Jinja environment needs to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
In [42]: MyStyler(df)
Out[42]: <__main__.MyStyler at 0x7f05affda438>
Our custom template accepts a table_title keyword. We can provide the value in the .render method.
In [43]: HTML(MyStyler(df).render(table_title="Extending Example"))
Out[43]: <IPython.core.display.HTML object>
For convenience, we provide the Styler.from_custom_template method that does the same as the custom
subclass.
In [44]: EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
EasyStyler(df)
Out[44]: <pandas.io.formats.style.Styler.from_custom_template.<locals>.MyStyler at 0x7f05ac918f60>
HTML(structure)
Out[45]: <IPython.core.display.HTML object>
See the template in the GitHub repo for more details.
24 IO Tools (Text, CSV, HDF5, ...)
The pandas I/O API is a set of top level reader functions accessed like pandas.read_csv() that generally
return a pandas object. The corresponding writer functions are object methods that are accessed like DataFrame.
to_csv(). Below is a table containing available readers and writers.
Note: For examples that use the StringIO class, make sure you import it according to your Python version, i.e.
from StringIO import StringIO for Python 2 and from io import StringIO for Python 3.
The two workhorse functions for reading text files (a.k.a. flat files) are read_csv() and read_table(). They
both use the same parsing code to intelligently convert tabular data into a DataFrame object. See the cookbook for
some advanced strategies.
The functions read_csv() and read_table() accept the following common arguments:
24.1.1.1 Basic
header [int or list of ints, default 'infer'] Row number(s) to use as the column names, and the start of the data.
Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0
and column names are inferred from the first line of the file, if column names are passed explicitly then the
behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names.
The header can be a list of ints that specify row locations for a multi-index on the columns e.g. [0,1,3].
Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the
first line of data rather than the first line of the file.
names [array-like, default None] List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list will cause a UserWarning to be issued.
index_col [int or sequence or False, default None] Column to use as the row labels of the DataFrame. If a
sequence is given, a MultiIndex is used. If you have a malformed file with delimiters at the end of each line,
you might consider index_col=False to force pandas to not use the first column as the index (row names).
usecols [list-like or callable, default None] Return a subset of the columns. If list-like, all elements must either be
positional (i.e. integer indices into the document columns) or strings that correspond to column names provided
either by the user in names or inferred from the document header row(s). For example, a valid list-like usecols
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a
DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo',
'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data,
usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names, returning names where the callable
function evaluates to True:
In [2]: pd.read_csv(StringIO(data))
Out[2]:
col1 col2 col3
0    a    b     1
1    a    b     2
2    c    d     3
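The data definition and the callable-usecols call that produce the next output were dropped in extraction; a hedged sketch of what they likely looked like (Python 3 StringIO import shown):

from io import StringIO
import pandas as pd

data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'

# Keep only the columns whose upper-cased name appears in the list
pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['COL1', 'COL3'])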
Out[3]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage.
squeeze [boolean, default False] If the parsed data only contains one column then return a Series.
prefix [str, default None] Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, . . .
mangle_dupe_cols [boolean, default True] Duplicate columns will be specified as ‘X’, ‘X.1’. . . ’X.N’, rather than
‘X’. . . ’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype [Type name or dict of column -> type, default None] Data type for data or columns. E.g. {'a': np.
float64, 'b': np.int32} (unsupported with engine='python'). Use str or object together with
suitable na_values settings to preserve and not interpret dtype.
New in version 0.20.0: support for the Python parser.
engine [{'c', 'python'}] Parser engine to use. The C engine is faster while the Python engine is currently more
feature-complete.
converters [dict, default None] Dict of functions for converting values in certain columns. Keys can either be integers
or column labels.
true_values [list, default None] Values to consider as True.
false_values [list, default None] Values to consider as False.
skipinitialspace [boolean, default False] Skip spaces after delimiter.
skiprows [list-like or integer, default None] Line numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file.
If callable, the callable function will be evaluated against the row indices, returning True if the row should be
skipped and False otherwise:
In [4]: data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
In [5]: pd.read_csv(StringIO(data))
Out[5]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
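The callable form described above was dropped from this extraction; a minimal sketch reusing the data string from In [4] (the lambda shown is an assumption):

# Skip rows whose 0-indexed line number is odd; line 0 (the header) is kept
pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)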
skipfooter [int, default 0] Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrows [int, default None] Number of rows of file to read. Useful for reading pieces of large files.
low_memory [boolean, default True] Internally process the file in chunks, resulting in lower memory use while
parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with
the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize
or iterator parameter to return the data in chunks. (Only valid with C parser)
memory_map [boolean, default False] If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
na_values [scalar, str, list-like, or dict, default None] Additional strings to recognize as NA/NaN. If dict passed,
specific per-column NA values. See na values const below for a list of the values interpreted as NaN by default.
keep_default_na [boolean, default True] Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
• If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values
used for parsing.
• If keep_default_na is True, and na_values are not specified, only the default NaN values are used for
parsing.
• If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are
used for parsing.
• If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
na_filter [boolean, default True] Detect missing value markers (empty strings and the value of na_values). In data
without any NAs, passing na_filter=False can improve the performance of reading a large file.
verbose [boolean, default False] Indicate number of NA values placed in non-numeric columns.
skip_blank_lines [boolean, default True] If True, skip over blank lines rather than interpreting as NaN values.
parse_dates [boolean or list of ints or names or list of lists or dict, default False.]
• If True -> try parsing the index.
• If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
• If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
• If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’. A fast-path exists for iso8601-
formatted dates.
infer_datetime_format [boolean, default False] If True and parse_dates is enabled for a column, attempt to infer
the datetime format to speed up the processing.
keep_date_col [boolean, default False] If True and parse_dates specifies combining multiple columns then keep
the original columns.
date_parser [function, default None] Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to
call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined
by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst [boolean, default False] DD/MM format dates, international and European format.
24.1.1.6 Iteration
iterator [boolean, default False] Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize [int, default None] Return TextFileReader object for iteration. See iterating and chunking below.
compression [{'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'] For on-the-fly decompres-
sion of on-disk data. If ‘infer’, then use gzip, bz2, zip, or xz if filepath_or_buffer is a string ending in ‘.gz’,
‘.bz2’, ‘.zip’, or ‘.xz’, respectively, and no decompression otherwise. If using ‘zip’, the ZIP file must contain
only one data file to be read in. Set to None for no decompression.
New in version 0.18.1: support for ‘zip’ and ‘xz’ compression.
thousands [str, default None] Thousands separator.
decimal [str, default '.'] Character to recognize as decimal point. E.g. use ',' for European data.
float_precision [string, default None] Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the high-precision converter, and round_trip for
the round-trip converter.
lineterminator [str (length 1), default None] Character to break file into lines. Only valid with C parser.
quotechar [str (length 1)] The character used to denote the start and end of a quoted item. Quoted items can include
the delimiter and it will be ignored.
quoting [int or csv.QUOTE_* instance, default 0] Control field quoting behavior per csv.QUOTE_* constants.
Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote [boolean, default True] When quotechar is specified and quoting is not QUOTE_NONE, indi-
cate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar
element.
escapechar [str (length 1), default None] One-character string used to escape delimiter when quoting is
QUOTE_NONE.
comment [str, default None] Indicates remainder of line should not be parsed. If found at the beginning of a line,
the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long
as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with header=0 will result in ‘a,b,c’
being treated as the header.
encoding [str, default None] Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of Python standard
encodings.
dialect [str or csv.Dialect instance, default None] If provided, this parameter will override values (default or
not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for
more details.
tupleize_cols [boolean, default False] Leave a list of tuples on columns as is (default is to convert to a MultiIndex
on the columns).
Deprecated since version 0.21.0: this argument will be removed and will always convert to a MultiIndex.
error_bad_lines [boolean, default True] Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines”
will be dropped from the DataFrame that is returned. See bad lines below.
warn_bad_lines [boolean, default True] If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
You can indicate the data type for the whole DataFrame or individual columns:
In [7]: data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
In [8]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [9]: df = pd.read_csv(StringIO(data), dtype=object)
In [10]: df
Out[10]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
In [11]: df['a'][0]
Out[11]: '1'
In [12]: df = pd.read_csv(StringIO(data), dtype={'b': object, 'c': np.float64})
In [13]: df.dtypes
Out[13]:
a int64
b object
c float64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you’re
unfamiliar with these concepts, you can see here to learn more about dtypes, and here to learn more about object
conversion in pandas.
In [16]: df
Out[16]:
col_1
0 1
1 2
2 'A'
3 4.22
In [17]: df['col_1'].apply(type).value_counts()
Out[17]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the dtypes after reading in the data,
In [18]: df2 = pd.read_csv(StringIO(data))
In [19]: df2['col_1'] = pd.to_numeric(df2['col_1'], errors='coerce')
In [20]: df2
Out[20]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [21]: df2['col_1'].apply(type).value_counts()
Out[21]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case
above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if
you wanted for all the data to be coerced, no matter the type, then using the converters argument of read_csv()
would certainly be worth trying.
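A sketch of what that could look like for the col_1 example above (reusing the StringIO and data from the running
example; df3 is a hypothetical name, and forcing every value to str is just one possible converter):
df3 = pd.read_csv(StringIO(data), converters={'col_1': str})
df3['col_1'].apply(type).value_counts()  # every value is now <class 'str'>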
The dtype option is also supported by the 'python' engine (new in version 0.20.0).
Note: In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent
dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with
mixed dtypes. For example,
In [22]: df = pd.DataFrame({'col_1': list(range(500000)) + ['a', 'b'] +
˓→list(range(500000))})
In [23]: df.to_csv('foo.csv')
In [24]: mixed_df = pd.read_csv('foo.csv')
In [25]: mixed_df['col_1'].apply(type).value_counts()
Out[25]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [26]: mixed_df['col_1'].dtype
Out[26]: dtype('O')
will result in mixed_df containing an int dtype for certain chunks of the column, and str for others due to the
mixed dtypes from the data that was read in. It is important to note that the overall column will be marked with a
dtype of object, which is used for columns with mixed dtypes.
In [28]: pd.read_csv(StringIO(data))
Out[28]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [29]: pd.read_csv(StringIO(data)).dtypes
Out[29]:
col1 object
col2 object
col3 int64
dtype: object
Columns can also be parsed directly as Categorical by passing dtype='category':
In [30]: pd.read_csv(StringIO(data), dtype='category').dtypes
Out[30]:
col1 category
col2 category
col3 category
dtype: object
Note: With dtype='category', the resulting categories will always be parsed as strings (object dtype). If the
categories are numeric they can be converted using the to_numeric() function, or as appropriate, another converter
such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (all numeric, all datetimes, etc.), the
conversion is done automatically.
In [38]: df.dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
In [39]: df['col3']
Out[39]:
0 1
1 2
In [41]: df['col3']
Out[41]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
A file may or may not have a header row. pandas assumes the first row should be used as the column names:
In [42]: data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
In [43]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [44]: pd.read_csv(StringIO(data))
Out[44]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can indicate other names to use and whether or
not to throw away the header row (if any):
In [45]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
If the header is in a row other than the first, pass the row number to header. This will skip the preceding rows:
Note: Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0
and column names are inferred from the first non-blank line of the file; if column names are passed explicitly then the
behavior is identical to header=None.
If the file or header contains duplicate names, pandas will by default distinguish between them so as to prevent
overwriting data:
In [51]: pd.read_csv(StringIO(data))
Out[51]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of dupli-
cate columns ‘X’, . . . , ‘X’ to become ‘X’, ‘X.1’, . . . , ‘X.N’. If mangle_dupe_cols=False, duplicate data can
arise:
To prevent users from encountering this problem with duplicate data, a ValueError exception is raised if
mangle_dupe_cols != True:
The usecols argument allows you to select any subset of the columns in a file, either using the column names,
position numbers or a callable:
New in version 0.20.0: support for callable usecols arguments
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to use in the final result:
In this case, the callable is specifying that we exclude the “a” and “c” columns from the output.
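The calls behind the subsets shown above, and the exclusion just described, might look like this (a sketch, reusing the
data and StringIO from the running example; the exact prompts are not part of this excerpt):
pd.read_csv(StringIO(data), usecols=[0, 2, 3])                          # select columns by position
pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['A', 'C'])  # select columns with a callable
pd.read_csv(StringIO(data), usecols=lambda x: x not in ['a', 'c'])      # exclude columns with a callable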
If the comment parameter is specified, then completely commented lines will be ignored. By default, completely
blank lines will be ignored as well.
In [59]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
Warning: The presence of ignored lines might create ambiguities involving line numbers; the parameter header
uses row numbers (ignoring commented/empty lines), while skiprows uses line numbers (including com-
mented/empty lines):
In [63]: data = '#comment\na,b,c\nA,B,C\n1,2,3'
If both header and skiprows are specified, header will be relative to the end of skiprows. For example:
In [67]: data = ('# empty\n# second empty line\n# third emptyline\nX,Y,Z\n1,2,3\nA,B,C\n1,2.,4.\n5.,NaN,10.0')
In [68]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [69]: pd.read_csv(StringIO(data), comment='#', skiprows=4, header=1)
Out[69]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
24.1.6.2 Comments
In [70]: print(open('tmp.csv').read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
In [71]: df = pd.read_csv('tmp.csv')
In [72]: df
Out[72]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
In [73]: df = pd.read_csv('tmp.csv', comment='#')
In [74]: df
Out[74]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
The encoding argument should be used for encoded unicode data, which will result in byte strings being decoded
to unicode in the result:
In [77]: df
Out[77]:
word length
0 Träumen 7
1 Grüße 5
In [78]: df['word'][1]
Out[78]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t parse correctly at all without speci-
fying the encoding. Full list of Python standard encodings.
If a file has one more column of data than the number of column names, the first column will be used as the
DataFrame’s row names:
In [80]: pd.read_csv(StringIO(data))
Out[80]:
a b c
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing
the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False:
In [84]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the index_col specification is based on that subset,
not the original data.
In [87]: data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'
In [88]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
To better facilitate working with datetime data, read_csv() and read_table() use the keyword arguments
parse_dates and date_parser to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
# Use a column as an index, and parse it as dates.
In [91]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True)
In [92]: df
Out[92]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
In [93]: df.index
Out[93]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately, or store various date fields separately. The
parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date columns will be prepended to the output
(so as to not affect the existing column order) and the new column names will be the concatenation of the component
column names:
In [94]: print(open('tmp.csv').read())
KORD,19990127, 19:00:00, 18:56:00, 0.8100
KORD,19990127, 20:00:00, 19:56:00, 0.0100
KORD,19990127, 21:00:00, 20:56:00, -0.5900
KORD,19990127, 21:00:00, 21:18:00, -0.9900
KORD,19990127, 22:00:00, 21:56:00, -0.5900
KORD,19990127, 23:00:00, 22:56:00, -0.5900
In [96]: df
Out[96]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose to retain them via the
keep_date_col keyword:
In [98]: df
Out[98]:
1_2 1_3 0 1 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 19990127 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 19990127 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD 19990127 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD 19990127 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD 19990127 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD 19990127 23:00:00 22:56:00 -0.59
Note that if you wish to combine multiple columns into a single date column, a nested list must be used. In other
words, parse_dates=[1, 2] indicates that the second and third columns should each be parsed as separate date
columns while parse_dates=[[1, 2]] means the two columns should be parsed into a single column.
You can also use a dict to specify custom names for the resulting date columns:
In [101]: df
Out[101]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column
is prepended to the data. The index_col specification is based off of this new set of columns rather than the original
data columns:
In [104]: df
Out[104]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note: If a column or index contains an unparseable date, the entire column or index will be returned unaltered as an
object data type. For non-standard datetime parsing, use to_datetime() after pd.read_csv.
Note: read_csv has a fast_path for parsing datetime strings in iso8601 format, e.g “2000-01-01T00:01:02+00:00” and
similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly
faster, ~20x has been observed.
Note: When passing a dict as the parse_dates argument, the order of the columns prepended is not guaranteed,
because dict objects do not impose an ordering on their keys. On Python 2.7+ you may use collections.OrderedDict
instead of a regular dict if this matters to you. Because of this, when using a dict for ‘parse_dates’ in conjunction with
the index_col argument, it's best to specify index_col as a column label rather than as an index on the resulting frame.
Finally, the parser allows you to specify a custom date_parser function to take full advantage of the flexibility of
the date parsing API:
In [107]: df
Out[107]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is
tried:
1. date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g.,
date_parser(['2013', '2013'], ['1', '2'])).
2. If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g.,
date_parser(['2013 1', '2013 2'])).
3. If #2 fails, date_parser is called once for every row with one or more string arguments from
the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row,
date_parser('2013', '2') for the second, etc.).
Note that performance-wise, you should try these methods of parsing dates in order:
1. Try to infer the format using infer_datetime_format=True (see section below).
2. If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.to_datetime(x, format=...).
3. If you have a really non-standard format, use a custom date_parser function. For optimal performance, this
should be vectorized, i.e., it should accept arrays as arguments.
You can explore the date parsing functionality in date_converters.py and add your own. We would love to turn this
module into a community supported set of date/time parsers. To get you started, date_converters.py contains
functions to parse dual date and time columns, year/month/day columns, and year/month/day/hour/minute/second
columns. It also contains a generic_parser function so you can curry it with a function that deals with a single
date rather than the entire array.
If you have parse_dates enabled for some or all of your columns, and your datetime strings are all formatted the
same way, you may get a large speed up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means of parsing the strings. 5-10x parsing speeds
have been observed. pandas will fallback to the usual parsing if either the format cannot be guessed or the format that
was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should
not have any negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All representing December 30th, 2011 at 00:00:00):
• “20111230”
• “2011/12/30”
• “20111230 00:00:00”
• “12/30/2011 00:00:00”
• “30/Dec/2011 00:00:00”
• “30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With dayfirst=True, it will guess
“01/12/2011” to be December 1st. With dayfirst=False (default) it will guess “01/12/2011” to be January
12th.
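The frame displayed next could come from a call along these lines (a sketch; 'foo.csv' is the dated file used earlier in
this section, and the exact prompt is not part of this excerpt):
df = pd.read_csv('foo.csv', index_col=0, parse_dates=True, infer_datetime_format=True)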
In [109]: df
Out[109]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
While US date formats tend to be MM/DD/YYYY, many international formats use DD/MM/YYYY instead. For
convenience, a dayfirst keyword is provided:
In [110]: print(open('tmp.csv').read())
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
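A sketch of reading this file with day-first dates (parsing the first column as dates; the exact prompt is not shown in
this excerpt):
pd.read_csv('tmp.csv', parse_dates=[0], dayfirst=True)   # 1/6/2000 -> 2000-06-01
pd.read_csv('tmp.csv', parse_dates=[0], dayfirst=False)  # 1/6/2000 -> 2000-01-06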
The parameter float_precision can be specified in order to use a specific floating-point converter during parsing
with the C engine. The options are the ordinary converter, the high-precision converter, and the round-trip converter
(which is guaranteed to round-trip values after writing to a file). For example:
Out[115]: 1.1102230246251565e-16
Out[116]: 5.5511151231257827e-17
Out[117]: 0.0
For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string
of length 1 so that integers will be parsed correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [118]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [120]: df
Out[120]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [121]: df.level.dtype
Out[121]: dtype('O')
In [122]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
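The integer-typed result shown next could come from a call along these lines (a sketch; the exact prompt is not part of
this excerpt):
df = pd.read_csv('tmp.csv', sep='|', thousands=',')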
In [124]: df
Out[124]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [125]: df.level.dtype
Out[125]: dtype('int64')
24.1.12 NA Values
To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values.
If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a
float, like 5.0 or an integer like 5), the corresponding equivalent values will also imply a missing value (in this
case effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A
N/A', '#N/A', 'N/A', 'n/a', 'NA', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan',
'-nan', ''].
Let us consider some examples:
read_csv(path, na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in addition to the defaults. A string will first be interpreted
as a numerical 5, then as a NaN.
read_csv(path, na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as NaN.
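And a sketch combining the two options, so that only empty fields would be treated as NaN:
read_csv(path, keep_default_na=False, na_values=[""])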
24.1.13 Infinity
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). These will
ignore the case of the value, meaning Inf will also be parsed as np.inf.
Using the squeeze keyword, the parser will return output with a single column as a Series:
In [126]: print(open('tmp.csv').read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
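The Series shown below could be produced by a call along these lines (a sketch; because the header has one fewer
entry than the data, the first column is used as the index automatically):
output = pd.read_csv('tmp.csv', squeeze=True)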
In [128]: output
Out[128]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [129]: type(output)
Out[129]: pandas.core.series.Series
The common values True, False, TRUE, and FALSE are all recognized as boolean. Occasionally you might want to
recognize other values as being boolean. To do this, use the true_values and false_values options as follows:
In [131]: print(data)
a,b,c
1,Yes,2
3,No,4
In [132]: pd.read_csv(StringIO(data))
Out[132]:
a b c
0 1 Yes 2
1 3 No 4
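A sketch of mapping those markers to booleans (reusing data from above; column b then comes back as True/False):
pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])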
Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values
filled in the trailing fields. Lines with too many fields will raise an error by default:
In [28]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), error_bad_lines=False)
Out[29]:
a b c
0 1 2 3
1 8 9 10
You can also use the usecols parameter to eliminate extraneous column data that appear in some lines but not others:
In [30]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[30]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
24.1.17 Dialect
The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect but
you can specify either the dialect name or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [134]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as the quote character, which causes it to
fail when it finds a newline before it finds the closing double quote.
We can get around this using dialect:
In [135]: dia = csv.excel()
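Continuing the sketch: relax the quoting on the dialect instance created above and pass it to read_csv (a hedged
example; dia and data are the objects from this running example):
import csv
dia.quoting = csv.QUOTE_NONE
pd.read_csv(StringIO(data), dialect=dia)
# the unclosed quote is kept as part of the field instead of swallowing the following line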
Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:
In [141]: print(data)
a, b, c
1, 2, 3
4, 5, 6
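A sketch of handling the leading spaces; read_csv also accepts skipinitialspace directly as a keyword:
pd.read_csv(StringIO(data), skipinitialspace=True)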
The parsers make every attempt to “do the right thing” and not be fragile. Type inference is a pretty big deal. If a
column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric columns
will come through as object dtype as with the rest of pandas objects.
Quotes (and other escape characters) in embedded fields can be handled in any number of ways. One way is to use
backslashes; to properly parse this data, you should pass the escapechar option:
In [143]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [144]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
While read_csv() reads delimited data, the read_fwf() function works with data files that have known and fixed
column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters,
and a different usage of the delimiter parameter:
• colspecs: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals
(i.e., [from, to[ ). String value ‘infer’ can be used to instruct the parser to try detecting the column specifications
from the first 100 rows of the data. Default behavior, if not specified, is to infer.
• widths: A list of field widths which can be used instead of ‘colspecs’ if the intervals are contiguous.
• delimiter: Characters to consider as filler characters in the fixed-width file. Can be used to specify the filler
character of the fields if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [146]: print(open('bar.csv').read())
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf
function along with the file name:
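The column specifications and the call that produce the frame below might look like this (a sketch; the exact byte
extents are assumptions inferred from the layout above):
colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]  # half-open intervals (from, to)
df = pd.read_fwf('bar.csv', colspecs=colspecs, header=None, index_col=0)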
In [149]: df
Out[149]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically picks column names X.<column number> when header=None argument is spec-
ified. Alternatively, you can supply just the column widths for contiguous columns:
In [152]: df
Out[152]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns so it’s ok to have extra separation between the
columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the first 100 rows of the file. It can do it
only in cases when the columns are aligned and correctly separated by the provided delimiter (default delimiter is
whitespace).
In [154]: df
Out[154]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
24.1.20 Indexes
Consider a file with one less entry in the header than the number of data columns:
In [157]: print(open('foo.csv').read())
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In this special case, read_csv assumes that the first column is to be used as the index of the DataFrame:
In [158]: pd.read_csv('foo.csv')
Out[158]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need to do as before:
In [159]: df = pd.read_csv('foo.csv', parse_dates=True)
In [160]: df.index
Out[160]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype=
˓→'datetime64[ns]', freq=None)
The index_col argument to read_csv and read_table can take a list of column numbers to turn multiple
columns into a MultiIndex for the index of the returned object:
In [162]: df = pd.read_csv("data/mindex_ex.csv", index_col=[0,1])
In [163]: df
Out[163]:
zit xit
year indiv
1977 A 1.20 0.60
B 1.50 0.50
C 1.70 0.80
1978 A 0.20 0.06
B 0.70 0.20
C 0.80 0.30
D 0.90 0.50
E 1.40 0.90
1979 C 0.20 0.15
D 0.14 0.05
E 0.50 0.15
F 1.20 0.50
G 3.40 1.90
H 5.40 2.70
I 6.40 1.20
In [164]: df.loc[1978]
Out[164]:
zit xit
indiv
A 0.2 0.06
B 0.7 0.20
C 0.8 0.30
D 0.9 0.50
E 1.4 0.90
By specifying list of row locations for the header argument, you can read in a MultiIndex for the columns.
Specifying non-consecutive rows will skip the intervening rows.
In [165]: from pandas.util.testing import makeCustomDataframe as mkdf
In [167]: df.to_csv('mi.csv')
In [168]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [170]: print(open('mi2.csv').read())
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
Note: If an index_col is not specified (e.g. you don’t have an index, or wrote it with df.to_csv(...,
index=False)), then any names on the columns index will be lost.
read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [172]: print(open('tmp2.sv').read())
:0:1:2:3
0:0.4691122999071863:-0.2828633443286633:-1.5090585031735124:-1.1356323710171934
1:1.2121120250208506:-0.17321464905330858:0.11920871129693428:-1.0442359662799567
2:-0.8618489633477999:-2.1045692188948086:-0.4949292740687813:1.071803807037338
3:0.7215551622443669:-0.7067711336300845:-1.0395749851146963:0.27185988554282986
4:-0.42497232978883753:0.567020349793672:0.27623201927771873:-1.0874006912859915
5:-0.6736897080883706:0.1136484096888855:-1.4784265524372235:0.5249876671147047
6:0.4047052186802365:0.5770459859204836:-1.7150020161146375:-1.0392684835147725
7:-0.3706468582364464:-1.1578922506419993:-1.344311812731667:0.8448851414248841
8:1.0757697837155533:-0.10904997528022223:1.6435630703622064:-1.4693879595399115
9:0.35702056413309086:-0.6746001037299882:-1.776903716971867:-0.9689138124473498
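The parsed result below might come from a call like this (a sketch; sniffing the separator requires the Python engine):
pd.read_csv('tmp2.sv', sep=None, engine='python')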
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
It’s best to use concat() to combine multiple files. See the cookbook for an example.
Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory,
such as the following:
In [174]: print(open('tmp.sv').read())
|0|1|2|3
0|0.4691122999071863|-0.2828633443286633|-1.5090585031735124|-1.1356323710171934
1|1.2121120250208506|-0.17321464905330858|0.11920871129693428|-1.0442359662799567
2|-0.8618489633477999|-2.1045692188948086|-0.4949292740687813|1.071803807037338
3|0.7215551622443669|-0.7067711336300845|-1.0395749851146963|0.27185988554282986
4|-0.42497232978883753|0.567020349793672|0.27623201927771873|-1.0874006912859915
5|-0.6736897080883706|0.1136484096888855|-1.4784265524372235|0.5249876671147047
6|0.4047052186802365|0.5770459859204836|-1.7150020161146375|-1.0392684835147725
7|-0.3706468582364464|-1.1578922506419993|-1.344311812731667|0.8448851414248841
8|1.0757697837155533|-0.10904997528022223|1.6435630703622064|-1.4693879595399115
9|0.35702056413309086|-0.6746001037299882|-1.776903716971867|-0.9689138124473498
By specifying a chunksize to read_csv or read_table, the return value will be an iterable object of type
TextFileReader:
In [178]: reader
Out[178]: <pandas.io.parsers.TextFileReader at 0x7f212235a518>
In [181]: reader.get_chunk(5)
Out[181]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
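A sketch of iterating over the file in chunks (the chunk size of 4 is arbitrary; each chunk is itself a DataFrame):
reader = pd.read_csv('tmp.sv', sep='|', chunksize=4)
for chunk in reader:
    print(chunk)  # prints up to 4 rows at a time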
Under the hood pandas uses a fast and efficient parser implemented in C as well as a Python implementation which is
currently more feature-complete. Where possible pandas uses the C parser (specified as engine='c'), but may fall
back to Python if C-unsupported options are specified. Currently, C-unsupported options include:
• sep other than a single character (e.g. regex separators)
• skipfooter
• sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the python engine is selected explicitly
using engine='python'.
df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
sep='\t')
df = pd.read_csv('s3://pandas-test/tips.csv')
The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the
object as a comma-separated-values file. The function takes a number of arguments. Only the first is required.
• path_or_buf: A string path to the file to write or a StringIO
• sep : Field delimiter for the output file (default “,”)
• na_rep: A string representation of a missing value (default ‘’)
• float_format: Format string for floating point numbers
• cols: Columns to write (default None)
• header: Whether to write out the column names (default True)
• index: whether to write row (index) names (default True)
• index_label: Column label(s) for index column(s) if desired. If None (default), and header and index are
True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex).
• mode : Python write mode, default ‘w’
• encoding: a string representing the encoding to use if the contents are non-ASCII, for Python versions prior
to 3
• line_terminator: Character sequence denoting line end (default ‘\n’)
• quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set
a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-
numeric
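A short sketch pulling a few of these options together (the file name and formatting choices are hypothetical, and df
is any DataFrame):
df.to_csv('out.csv', sep=';', na_rep='NA', float_format='%.2f', index=False, header=True)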
The DataFrame object has an instance method to_string which allows control over the string representation of
the object. All arguments are optional:
• buf default None, for example a StringIO object
• columns default None, which columns to write
• col_space default None, minimum width of each column.
• na_rep default NaN, representation of NA value
• formatters default None, a dictionary (by column) of functions each of which takes a single argument and
returns a formatted string
• float_format default None, a function which takes a single (float) argument and returns a formatted string;
to be applied to floats in the DataFrame.
• sparsify default True, set to False for a DataFrame with a hierarchical index to print every multiindex key
at each row.
• index_names default True, will print the names of the indices
• index default True, will print the index (ie, row labels)
• header default True, will print the column labels
• justify default left, will print column headers left- or right-justified
The Series object also has a to_string method, but with only the buf, na_rep, float_format arguments.
There is also a length argument which, if set to True, will additionally output the length of the Series.
24.2 JSON
A Series or DataFrame can be converted to a valid JSON string. Use to_json with optional parameters:
• path_or_buf : the pathname or buffer to write the output. This can be None, in which case a JSON string is
returned.
• orient :
Series:
– default is index
– allowed values are {split, records, index}
DataFrame:
– default is columns
– allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, . . . , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
• date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
• double_precision : The number of decimal places to use when encoding floating point values, default 10.
• force_ascii : force encoded string to be ASCII, default True.
• date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or
‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
• default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for
JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
• lines : If records orient, then will write each record per line as json.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the
date_format and date_unit parameters.
In [182]: dfj = pd.DataFrame(randn(5, 2), columns=list('AB'))
In [183]: json = dfj.to_json()
In [184]: json
Out[184]: '{"A":{"0":-1.2945235903,"1":0.2766617129,"2":-0.0139597524,"3":-0.
˓→0061535699,"4":0.8957173022},"B":{"0":0.4137381054,"1":-0.472034511,"2":-0.
˓→3625429925,"3":-0.923060654,"4":0.8052440254}}'
There are a number of different options for the format of the resulting JSON file / string. Consider the following
DataFrame and Series:
In [185]: dfjo = pd.DataFrame(dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list('ABC'), index=list('xyz'))
.....:
In [186]: dfjo
Out[186]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [187]: sjo = pd.Series(dict(x=15, y=16, z=17), name='D')
In [188]: sjo
Out[188]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as nested JSON objects with column labels acting
as the primary index:
In [189]: dfjo.to_json(orient="columns")
Out[189]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
Index oriented (the default for Series) similar to column oriented but the index labels are now primary:
In [190]: dfjo.to_json(orient="index")
Out[190]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [191]: sjo.to_json(orient="index")
Out[191]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records, index labels are not included. This is
useful for passing DataFrame data to plotting libraries, for example the JavaScript library d3.js:
In [192]: dfjo.to_json(orient="records")
Out[192]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [193]: sjo.to_json(orient="records")
Out[193]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of values only, column and index labels
are not included:
In [194]: dfjo.to_json(orient="values")
Out[194]: '[[1,4,7],[2,5,8],[3,6,9]]'
Split oriented serializes to a JSON object containing separate entries for values, index and columns. Name is also
included for Series:
In [195]: dfjo.to_json(orient="split")
Out[195]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,
˓→6,9]]}'
In [196]: sjo.to_json(orient="split")
Out[196]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the preservation of metadata including but not
limited to dtypes and index names.
Note: Any orient option that encodes to a JSON object will not preserve the ordering of index and column labels
during round-trip serialization. If you wish to preserve label ordering use the split option as it uses ordered containers.
In [201]: json
Out[201]: '{"date":{"0":"2013-01-01T00:00:00.000Z","1":"2013-01-01T00:00:00.000Z","2":
˓→"2013-01-01T00:00:00.000Z","3":"2013-01-01T00:00:00.000Z","4":"2013-01-01T00:00:00.
˓→000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.8138502857,"4
˓→":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.1702987971,"3":0.
˓→4108345112,"4":0.1320031703}}'
In [203]: json
Out[203]: '{"date":{"0":"2013-01-01T00:00:00.000000Z","1":"2013-01-01T00:00:00.000000Z
˓→","2":"2013-01-01T00:00:00.000000Z","3":"2013-01-01T00:00:00.000000Z","4":"2013-01-
˓→01T00:00:00.000000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3
˓→":0.8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.
˓→1702987971,"3":0.4108345112,"4":0.1320031703}}'
In [205]: json
Out[205]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4
˓→":1356998400},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.
˓→8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.
˓→1702987971,"3":0.4108345112,"4":0.1320031703}}'
In [211]: dfj2.to_json('test.json')
In [212]: open('test.json').read()
Out[212]: '{"A":{"1356998400000":-1.2945235903,"1357084800000":0.2766617129,
˓→"1357171200000":-0.0139597524,"1357257600000":-0.0061535699,"1357344000000":0.
˓→8957173022},"B":{"1356998400000":0.4137381054,"1357084800000":-0.472034511,
˓→"1357171200000":-0.3625429925,"1357257600000":-0.923060654,"1357344000000":0.
˓→8052440254},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,
˓→"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000
˓→":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,
˓→"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000
˓→":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}'
If the JSON serializer cannot handle the container contents directly it will fall back in the following manner:
• if the dtype is unsupported (e.g. np.complex) then the default_handler, if provided, will be called for
each value, otherwise an exception is raised.
• if an object is unsupported it will attempt the following:
– check if the object has defined a toDict method and call it. A toDict method should return a dict
which will then be JSON serialized.
– invoke the default_handler if one was provided.
– convert the object to a dict by traversing its contents. However this will often fail with an
OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler. For example:
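A minimal sketch (complex values are not JSON-serializable, so each one is passed through str via the handler):
pd.DataFrame([1.0, 2.0, complex(1, 2)]).to_json(default_handler=str)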
Reading a JSON string to pandas object can take a number of parameters. The parser will try to parse a DataFrame
if typ is not supplied or is None. To explicitly force Series parsing, pass typ=series
• filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be a URL. Valid
URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be
file://localhost/path/to/table.json
• typ : type of object to recover (series or frame), default ‘frame’
• orient :
Series :
– default is index
– allowed values are {split, records, index}
DataFrame
– default is columns
– allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, . . . , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
table adhering to the JSON Table Schema
• dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at
all; default is True, and applies only to the data.
• convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
• convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default
is True.
• keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
• numpy : direct decoding to NumPy arrays. default is False; Supports numeric data only, although labels may
be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
• precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when
decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
• date_unit : string, the timestamp unit to detect if converting dates. Default None. By default the timestamp
precision will be detected, if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp
precision to seconds, milliseconds, microseconds or nanoseconds respectively.
• lines : reads file as one json object per line.
• encoding : The encoding to use to decode py3 bytes.
• chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize
lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same option here so that decoding
produces sensible results, see Orient Options for an overview.
The default of convert_axes=True, dtype=True, and convert_dates=True will try to parse the axes, and
all of the data into appropriate types, including dates. If you need to override specific dtypes, pass a dict to dtype.
convert_axes should only be set to False if you need to preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.
Note: Large integer values may be converted to dates if convert_dates=True and the data and / or column labels
appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label
meets one of the following criteria:
• it ends with '_at'
• it ends with '_time'
• it begins with 'timestamp'
• it is 'modified'
• it is 'date'
Warning: When reading JSON data, automatic coercing into dtypes has some quirks:
• an index can be reconstructed in a different order from serialization, that is, the returned order is not guaran-
teed to be the same as before serialization
• a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
• bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
In [214]: pd.read_json(json)
Out[214]:
date B A
0 2013-01-01 2.565646 -1.206412
1 2013-01-01 1.340309 1.431256
2 2013-01-01 -0.226169 -1.170299
3 2013-01-01 0.813850 0.410835
4 2013-01-01 -0.827317 0.132003
In [215]: pd.read_json('test.json')
Out[215]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [219]: si
Out[219]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [220]: si.index
Out[220]: Index(['0', '1', '2', '3'], dtype='object')
In [221]: si.columns
Out[221]: Int64Index([0, 1, 2, 3], dtype='int64')
In [224]: sij
Out[224]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [225]: sij.index
Out[225]: Index(['0', '1', '2', '3'], dtype='object')
In [226]: sij.columns
Out[226]: Index(['0', '1', '2', '3'], dtype='object')
In [229]: dfju
Out[229]:
A B date ints bools
1356998400000000000 -1.294524 0.413738 1356998400000000000 0 True
1357084800000000000 0.276662 -0.472035 1356998400000000000 1 True
1357171200000000000 -0.013960 -0.362543 1356998400000000000 2 True
1357257600000000000 -0.006154 -0.923061 1356998400000000000 3 True
1357344000000000000 0.895717 0.805244 1356998400000000000 4 True
In [231]: dfju
Out[231]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True
In [233]: dfju
Out[233]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True
Note: This supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff an appropriate dtype during deserialization
and to subsequently decode directly to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric data:
Warning: Direct NumPy decoding makes a number of assumptions and may fail or produce unexpected output if
these assumptions are not satisfied:
• data is numeric.
• data is uniform. The dtype is sniffed from the first value decoded. A ValueError may be raised, or
incorrect output may be produced if this condition is not satisfied.
• labels are ordered. Labels are only read from the first container, it is assumed that each subsequent row /
column has been encoded in the same order. This should be satisfied if the data was encoded using to_json
but may not be the case if the JSON is from another source.
24.2.3 Normalization
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data into a flat table.
In [243]: from pandas.io.json import json_normalize
In [245]: json_normalize(data)
Out[245]:
id name name.family name.first name.given name.last
0 1.0 NaN NaN Coleen NaN Volk
1 NaN NaN Regner NaN Mose NaN
2 2.0 Faye Raker NaN NaN NaN NaN
Out[247]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
In [250]: df
Out[250]:
a b
0 1 2
1 3 4
In [253]: reader
Out[253]: <pandas.io.json.json.JsonReader at 0x7f20fb0bc588>
In [255]: df = pd.DataFrame(
.....: {'A': [1, 2, 3],
.....: 'B': ['a', 'b', 'c'],
.....: 'C': pd.date_range('2016-01-01', freq='d', periods=3),
.....: }, index=pd.Index(range(3), name='idx'))
.....:
In [256]: df
Out[256]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [257]: df.to_json(orient='table', date_format='iso')
Out[257]: '{"schema": {"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"0.20.0"}, "data": [{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000Z"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000Z"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000Z"}]}'
The schema field contains the fields key, which itself contains a list of column name to type pairs, including the
Index or MultiIndex (see below for a list of types). The schema field also contains a primaryKey field if the
(Multi)index is unique.
The second field, data, contains the serialized data with the records orient. The index is included, and any
datetimes are ISO 8601 formatted, as required by the Table Schema spec.
The full list of types supported are described in the Table Schema spec. This table shows the mapping from pandas
types:
• int64 -> integer
• float64 -> number
• bool -> boolean
• datetime64[ns] -> datetime
• timedelta64[ns] -> duration
• categorical -> any
• object -> str
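The series s and the helper used below are not defined in this excerpt; a minimal setup might be (assuming a simple
datetime series, which matches the integer index / datetime values schema shown):
from pandas.io.json import build_table_schema
import pandas as pd
s = pd.Series(pd.date_range('2018-01-01', periods=4))  # hypothetical series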
In [260]: build_table_schema(s)
Out[260]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
• datetimes with a timezone (before serializing), include an additional field tz with the time zone name (e.g.
'US/Central').
In [262]: build_table_schema(s_tz)
Out[262]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
• Periods are converted to timestamps before serialization, and so have the same behavior of being converted to
UTC. In addition, periods will contain an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [264]: build_table_schema(s_per)
Out[264]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
• Categoricals use the any type and an enum constraint listing the set of possible values. Additionally, an
ordered field is included:
In [265]: s_cat = pd.Series(pd.Categorical(['a', 'b', 'a']))
In [266]: build_table_schema(s_cat)
Out[266]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
In [268]: build_table_schema(s_dupe)
Out[268]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '0.20.0'}
• The primaryKey behavior is the same with MultiIndexes, but in this case the primaryKey is an array:
In [269]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([('a', 'b'),
.....: (0, 1)]))
.....:
In [270]: build_table_schema(s_multi)
Out[270]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '0.20.0'}
In [272]: df
Out[272]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [273]: df.dtypes
Out[273]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
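The round trip that produces new_df below presumably uses the table orient on both sides; a sketch:
new_df = pd.read_json(df.to_json(orient='table'), orient='table')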
In [276]: new_df
Out[276]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [277]: new_df.dtypes
Out[277]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index is not round-trippable, nor are any names begin-
ning with 'level_' within a MultiIndex. These are used by default in DataFrame.to_json() to indicate
missing values and the subsequent read cannot distinguish the intent.
In [281]: print(new_df.index.name)
None
24.3 HTML
Warning: We highly encourage you to read the HTML Table Parsing gotchas below regarding the issues sur-
rounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML string/file/URL and will parse HTML tables into a list of
pandas DataFrames. Let’s look at a few examples.
Note: read_html returns a list of DataFrame objects, even if there is only a single table contained in the
HTML content.
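For example, reading the FDIC failed bank list (a sketch; the URL is assumed and the page requires network access):
url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'
dfs = pd.read_html(url)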
In [284]: dfs
Out[284]:
[ Bank Name City ..
˓→. Closing Date Updated Date
0 Washington Federal Bank for Savings Chicago ..
˓→. December 15, 2017 February 21, 2018
1 The Farmers and Merchants State Bank of Argonia Argonia ..
˓→. October 13, 2017 February 21, 2018
2 Fayette County Bank Saint Elmo ..
˓→. May 26, 2017 July 26, 2017
3 Guaranty Bank, (d/b/a BestBank in Georgia & Mi... Milwaukee ..
˓→. May 5, 2017 March 22, 2018
4 First NBC Bank New Orleans ..
˓→. April 28, 2017 December 5, 2017
5 Proficio Bank Cottonwood Heights ..
˓→. March 3, 2017 March 7, 2018
6 Seaway Bank and Trust Company Chicago ..
˓→. January 27, 2017 May 18, 2017
.. ... ... ..
˓→. ... ...
548 Hamilton Bank, NA En Espanol Miami ..
˓→. January 11, 2002 September 21, 2015
549 Sinclair National Bank Gravette ..
˓→. September 7, 2001 October 6, 2017
550 Superior Bank, FSB Hinsdale ..
˓→. July 27, 2001 August 19, 2014
551 Malta National Bank Malta ..
˓→. May 3, 2001 November 18, 2002
552 First Alliance Bank & Trust Co. Manchester ..
˓→. February 2, 2001 February 18, 2003
553 National State Bank of Metropolis Metropolis ..
˓→. December 14, 2000 March 17, 2005
554 Bank of Honolulu Honolulu ..
˓→. October 13, 2000 March 17, 2005
Note: The data from the above URL changes every Monday so the resulting data above and the data below may be
slightly different.
Read in the content of the file from the above URL and pass it to read_html as a string:
In [285]: with open(file_path, 'r') as f:
.....: dfs = pd.read_html(f.read())
.....:
In [286]: dfs
Out[286]:
[                                   Bank Name          City  ST   CERT                Acquiring Institution       Closing Date       Updated Date
 0    Banks of Wisconsin d/b/a Bank of Kenosha       Kenosha  WI  35386                North Shore Bank, FSB       May 31, 2013       May 31, 2013
 1                        Central Arizona Bank    Scottsdale  AZ  34527                   Western State Bank       May 14, 2013       May 20, 2013
 2                                Sunrise Bank      Valdosta  GA  58185                         Synovus Bank       May 10, 2013       May 21, 2013
 3                       Pisgah Community Bank     Asheville  NC  58701                   Capital Bank, N.A.       May 10, 2013       May 14, 2013
 4                         Douglas County Bank  Douglasville  GA  21649                  Hamilton State Bank     April 26, 2013       May 16, 2013
 5                                Parkway Bank        Lenoir  NC  57158     CertusBank, National Association     April 26, 2013       May 17, 2013
 6                      Chipola Community Bank      Marianna  FL  58034        First Federal Bank of Florida     April 19, 2013       May 16, 2013
 ..                                        ...           ...  ..    ...                                  ...                ...                ...
 498               Hamilton Bank, NAEn Espanol         Miami  FL  24382     Israel Discount Bank of New York   January 11, 2002       June 5, 2012
 499                    Sinclair National Bank      Gravette  AR  34248                   Delta Trust & Bank  September 7, 2001  February 10, 2004
 500                        Superior Bank, FSB      Hinsdale  IL  32646                Superior Federal, FSB      July 27, 2001       June 5, 2012
 501                       Malta National Bank         Malta  OH   6629                    North Valley Bank        May 3, 2001  November 18, 2002
 502           First Alliance Bank & Trust Co.    Manchester  NH  34264  Southern New Hampshire Bank & Trust   February 2, 2001  February 18, 2003
 503         National State Bank of Metropolis    Metropolis  IL   3815              Banterra Bank of Marion  December 14, 2000     March 17, 2005
 504                          Bank of Honolulu      Honolulu  HI  21029                   Bank of the Orient   October 13, 2000     March 17, 2005
In [289]: dfs
Out[289]:
[                                   Bank Name          City  ST   CERT                Acquiring Institution       Closing Date       Updated Date
 0    Banks of Wisconsin d/b/a Bank of Kenosha       Kenosha  WI  35386                North Shore Bank, FSB       May 31, 2013       May 31, 2013
 1                        Central Arizona Bank    Scottsdale  AZ  34527                   Western State Bank       May 14, 2013       May 20, 2013
 2                                Sunrise Bank      Valdosta  GA  58185                         Synovus Bank       May 10, 2013       May 21, 2013
 3                       Pisgah Community Bank     Asheville  NC  58701                   Capital Bank, N.A.       May 10, 2013       May 14, 2013
 4                         Douglas County Bank  Douglasville  GA  21649                  Hamilton State Bank     April 26, 2013       May 16, 2013
 5                                Parkway Bank        Lenoir  NC  57158     CertusBank, National Association     April 26, 2013       May 17, 2013
 6                      Chipola Community Bank      Marianna  FL  58034        First Federal Bank of Florida     April 19, 2013       May 16, 2013
 ..                                        ...           ...  ..    ...                                  ...                ...                ...
 498               Hamilton Bank, NAEn Espanol         Miami  FL  24382     Israel Discount Bank of New York   January 11, 2002       June 5, 2012
 499                    Sinclair National Bank      Gravette  AR  34248                   Delta Trust & Bank  September 7, 2001  February 10, 2004
 500                        Superior Bank, FSB      Hinsdale  IL  32646                Superior Federal, FSB      July 27, 2001       June 5, 2012
 501                       Malta National Bank         Malta  OH   6629                    North Valley Bank        May 3, 2001  November 18, 2002
 502           First Alliance Bank & Trust Co.    Manchester  NH  34264  Southern New Hampshire Bank & Trust   February 2, 2001  February 18, 2003
 503         National State Bank of Metropolis    Metropolis  IL   3815              Banterra Bank of Marion  December 14, 2000     March 17, 2005
 504                          Bank of Honolulu      Honolulu  HI  21029                   Bank of the Orient   October 13, 2000     March 17, 2005
Note: The following examples are not run by the IPython evaluator because so many network-accessing functions
slow down the documentation build. If you spot an error or an example that doesn't run, please do not hesitate to
report it on the pandas GitHub issues page.
Specify a header row (by default <th> or <td> elements located within a <thead> are used to form the column
index; if multiple rows are contained within <thead> then a MultiIndex is created). If specified, the header row is
taken from the data minus the parsed header elements (<th> elements).
Specify a number of rows to skip using a list (xrange (Python 2 only) works as well):
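A minimal sketch, reusing a url variable that points at a page containing tables:

dfs = pd.read_html(url, skiprows=0)
dfs = pd.read_html(url, skiprows=range(2))

Specify converters for columns; this is useful for numeric text data that has leading zeros (such as the MNC codes
below), which would otherwise be cast to integers: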
url_mcc = 'https://en.wikipedia.org/wiki/Mobile_country_code'
dfs = pd.read_html(url_mcc, match='Telekom Albania', header=0, converters={'MNC':
str})
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(randn(2, 2))
s = df.to_html(float_format='{0:.40g}'.format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only parser you provide. If you only have a single
parser you can provide just a string, but it is considered good practice to pass a list with one string if, for example, the
function expects a sequence of strings. You may use:
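For instance, a sketch passing only the lxml flavor (here url is assumed to name a page containing a table that
matches 'Metcalf Bank'):

dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml'])
# or equivalently, as a bare string
dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor='lxml')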
However, if you have bs4 and html5lib installed and pass None or ['lxml', 'bs4'] then the parse will most
likely succeed. Note that as soon as a parse succeeds, the function will return.
dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml', 'bs4'])
DataFrame objects have an instance method to_html which renders the contents of the DataFrame as an HTML
table. The function arguments are as in the method to_string described above.
Note: Not all of the possible options for DataFrame.to_html are shown here for brevity’s sake. See
to_html() for the full set of options.
In [291]: df
Out[291]:
0 1
0 -0.184744 0.496971
1 -0.856240 1.857977
In [292]: print(df.to_html())
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>
The columns argument will limit the columns shown:
In [293]: print(df.to_html(columns=[0]))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
float_format takes a Python callable to control the precision of floating point values:
In [294]: print(df.to_html(float_format='{0:.10f}'.format))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.1847438576</td>
<td>0.4969711327</td>
</tr>
<tr>
<th>1</th>
<td>-0.8562396763</td>
<td>1.8579766508</td>
</tr>
</tbody>
</table>
bold_rows will make the row labels bold by default, but you can turn that off:
In [295]: print(df.to_html(bold_rows=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
The classes argument provides the ability to give the resulting HTML table CSS classes. Note that these classes
are appended to the existing 'dataframe' class.
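For example, a sketch with arbitrary class names:

print(df.to_html(classes=['awesome_table_class', 'even_more_awesome_class']))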
Finally, the escape argument allows you to control whether the "<", ">" and "&" characters are escaped in the resulting
HTML (by default it is True). So to get the HTML without escaped characters pass escape=False.
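The two renderings below use a small frame containing the characters in question; a sketch of how such a frame
might be built:

df = pd.DataFrame({'a': list('&<>'), 'b': np.random.randn(3)})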
Escaped:
In [298]: print(df.to_html())
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
Not escaped:
In [299]: print(df.to_html(escape=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-0.400654</td>
</tr>
</tbody>
</table>
Note: Some browsers may not show a difference in the rendering of the previous two HTML tables.
There are some versioning issues surrounding the libraries that are used to parse HTML tables in the top-level pandas
io function read_html.
24.4 Excel files
The read_excel() method can read Excel 2003 (.xls) and Excel 2007+ (.xlsx) files using the xlrd Python
module. The to_excel() instance method is used for saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data. See the cookbook for some advanced strategies.
In the most basic use-case, read_excel takes a path to an Excel file, and the sheet_name indicating which sheet
to parse.
# Returns a DataFrame
read_excel('path_to_file.xls', sheet_name='Sheet1')
To facilitate working with multiple sheets from the same file, the ExcelFile class can be used to wrap the file and
can be passed into read_excel. There will be a performance benefit for reading multiple sheets as the file is read
into memory only once.
xlsx = pd.ExcelFile('path_to_file.xls')
df = pd.read_excel(xlsx, 'Sheet1')
The sheet_names property will generate a list of the sheet names in the file.
The primary use-case for an ExcelFile is parsing multiple sheets with different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile('path_to_file.xls') as xls:
data['Sheet1'] = pd.read_excel(xls, 'Sheet1', index_col=None, na_values=['NA'])
data['Sheet2'] = pd.read_excel(xls, 'Sheet2', index_col=1)
Note that if the same parsing parameters are used for all sheets, a list of sheet names can simply be passed to
read_excel with no loss in performance.
# Returns a DataFrame
read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
read_excel can read more than one sheet, by setting sheet_name to either a list of sheet names, a list of sheet
positions, or None to read all sheets. Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
read_excel can read a MultiIndex index, by passing a list of columns to index_col and a MultiIndex
column by passing a list of rows to header. If either the index or columns have serialized level names those will
be read in as well by specifying the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [300]: df = pd.DataFrame({'a':[1, 2, 3, 4], 'b':[5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([['a', 'b'],['c', 'd']]))
.....:
In [301]: df.to_excel('path_to_file.xlsx')
In [303]: df
Out[303]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same parameters.
In [304]: df.index = df.index.set_names(['lvl1', 'lvl2'])
In [305]: df.to_excel('path_to_file.xlsx')
In [307]: df
Out[307]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each should be passed to index_col
and header:
In [309]: df.to_excel('path_to_file.xlsx')
In [311]: df
Out[311]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
It is often the case that users will insert columns to do temporary computations in Excel and you may not want to read
in those columns. read_excel takes a usecols keyword to allow you to specify a subset of columns to parse.
If usecols is an integer, then it is assumed to indicate the last column to be parsed.
If usecols is a list of integers, then it is assumed to be the file column indices to be parsed.
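A couple of sketches (the file and column layout are assumed):

# parse columns 0 through 2 (usecols indicates the last column to parse)
read_excel('path_to_file.xls', 'Sheet1', usecols=2)

# parse only the listed column positions
read_excel('path_to_file.xls', 'Sheet1', usecols=[0, 2, 3])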
Datetime-like values are normally automatically converted to the appropriate dtype when reading the excel file. But
if you have a column of strings that look like dates (but are not actually formatted as dates in excel), you can use the
parse_dates keyword to parse those strings to datetimes:
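A sketch, assuming a column named date_strings holding date-like text:

read_excel('path_to_file.xls', 'Sheet1', parse_dates=['date_strings'])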
It is possible to transform the contents of Excel cells via the converters option. For instance, to convert a column
to boolean:
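A sketch, assuming a column named MyBools:

read_excel('path_to_file.xls', 'Sheet1', converters={'MyBools': bool})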
This option handles missing values and treats exceptions in the converters as missing data. Transformations are
applied cell by cell rather than to the column as a whole, so the array dtype is not guaranteed. For instance, a column
of integers with missing values cannot be transformed to an array with integer dtype, because NaN is strictly a float.
You can manually mask missing data to recover integer dtype:
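A sketch, assuming a column named MyInts and using -1 as the sentinel for missing cells:

cfun = lambda x: int(x) if x else -1
read_excel('path_to_file.xls', 'Sheet1', converters={'MyInts': cfun})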
To write a DataFrame object to a sheet of an Excel file, you can use the to_excel instance method. The arguments
are largely the same as to_csv described above, the first argument being the name of the excel file, and the optional
second argument the name of the sheet to which the DataFrame should be written. For example:
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')
Files with a .xls extension will be written using xlwt and those with a .xlsx extension will be written using
xlsxwriter (if available) or openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output. The index_label will be placed
in the second row instead of the first. You can place it in the first row by setting the merge_cells option in
to_excel() to False:
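A sketch (the index_label and file name are illustrative):

df.to_excel('path_to_file.xlsx', index_label='label', merge_cells=False)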
In order to write separate DataFrames to separate sheets in a single Excel file, one can pass an ExcelWriter.
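A sketch, assuming two frames df1 and df2:

with pd.ExcelWriter('path_to_file.xlsx') as writer:
    df1.to_excel(writer, sheet_name='Sheet1')
    df2.to_excel(writer, sheet_name='Sheet2')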
Note: Wringing a little more performance out of read_excel: internally, Excel stores all numeric data as floats.
Because this can produce unexpected behavior when reading in data, pandas defaults to trying to convert integers to
floats if it doesn't lose information (1.0 --> 1). You can pass convert_float=False to disable this behavior,
which may give a slight performance improvement.
Pandas supports writing Excel files to buffer-like objects such as StringIO or BytesIO (from Python's io module)
using ExcelWriter.
bio = BytesIO()
writer = pd.ExcelWriter(bio, engine='xlsxwriter')
df.to_excel(writer)
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note: engine is optional but recommended. Setting the engine determines the version of workbook produced.
Setting engine='xlwt' will produce an Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If omitted, an Excel 2007-formatted workbook
is produced.
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the
DataFrame’s to_excel method.
• float_format : Format string for floating point numbers (default None).
• freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze.
Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
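A sketch combining both parameters:

df.to_excel('path_to_file.xlsx', sheet_name='Sheet1',
            float_format='%.2f', freeze_panes=(1, 1))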
24.5 Clipboard
A handy way to grab data is to use the read_clipboard() method, which takes the contents of the clipboard
buffer and passes them to the read_table method. For instance, you can copy the following text to the clipboard
(CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
clipdf = pd.read_clipboard()
In [312]: clipdf
Out[312]:
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to the clipboard, after which
you can paste the clipboard contents into other applications (CTRL-V on many operating systems). Here we illustrate
writing a DataFrame into the clipboard and reading it back.
In [314]: df
Out[314]:
0 1 2
In [315]: df.to_clipboard()
---------------------------------------------------------------------------
PyperclipException:
Pyperclip could not find a copy/paste mechanism for your system.
For more information, please visit https://pyperclip.readthedocs.org
In [316]: pd.read_clipboard()
---------------------------------------------------------------------------
PyperclipException:
Pyperclip could not find a copy/paste mechanism for your system.
For more information, please visit https://pyperclip.readthedocs.org
When a clipboard mechanism is available, we get back the same content that we had earlier written to the clipboard.
Note: You may need to install xclip or xsel (with gtk, PyQt5, PyQt4 or qtpy) on Linux to use these methods.
24.6 Pickling
All pandas objects are equipped with to_pickle methods which use Python’s cPickle module to save data
structures to disk using the pickle format.
In [317]: df
Out[317]:
0 1 2
0 -0.288267 -0.084905 0.004772
1 1.382989 0.343635 -1.253994
2 -0.124925 0.212244 0.496654
3 0.525417 1.238640 -1.210543
4 -1.175743 -0.172372 -0.734129
In [318]: df.to_pickle('foo.pkl')
The read_pickle function in the pandas namespace can be used to load any pickled pandas object (or any other
pickled object) from file:
In [319]: pd.read_pickle('foo.pkl')
Out[319]:
0 1 2
0 -0.288267 -0.084905 0.004772
1 1.382989 0.343635 -1.253994
2 -0.124925 0.212244 0.496654
3 0.525417 1.238640 -1.210543
4 -1.175743 -0.172372 -0.734129
Warning: Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning: Several internal refactorings have been done while still preserving compatibility with pickles created
with older versions of pandas. However, for such cases, pickled DataFrames, Series etc, must be read with
pd.read_pickle, rather than pickle.load.
See here and here for some examples of compatibility-breaking changes. See this question for a detailed explana-
tion.
In [320]: df = pd.DataFrame({
.....: 'A': np.random.randn(1000),
.....: 'B': 'foo',
.....: 'C': pd.date_range('20130101', periods=1000, freq='s')})
.....:
In [321]: df
Out[321]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
999 0.632369 foo 2013-01-01 00:16:39
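The round-trip frames rt shown below come from writing df with compression and reading it back. A sketch of the
calls that produce them; the first file name is assumed for illustration, the others appear further down:

# explicit compression type
df.to_pickle('data.pkl.compress', compression='gzip')
rt = pd.read_pickle('data.pkl.compress', compression='gzip')

# compression inferred from the file extension
df.to_pickle('data.pkl.gz')
rt = pd.read_pickle('data.pkl.gz')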
In [324]: rt
Out[324]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
In [327]: rt
Out[327]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
999 0.632369 foo 2013-01-01 00:16:39
In [329]: rt = pd.read_pickle("data.pkl.gz")
In [330]: rt
Out[330]:
A B C
0 0.478412 foo 2013-01-01 00:00:00
1 -0.783748 foo 2013-01-01 00:00:01
2 1.403558 foo 2013-01-01 00:00:02
3 -0.539282 foo 2013-01-01 00:00:03
4 -1.651012 foo 2013-01-01 00:00:04
5 0.692072 foo 2013-01-01 00:00:05
6 1.022171 foo 2013-01-01 00:00:06
.. ... ... ...
993 -1.613932 foo 2013-01-01 00:16:33
994 1.088104 foo 2013-01-01 00:16:34
995 -0.632963 foo 2013-01-01 00:16:35
996 -0.585314 foo 2013-01-01 00:16:36
997 -0.275038 foo 2013-01-01 00:16:37
998 -0.937512 foo 2013-01-01 00:16:38
In [331]: df["A"].to_pickle("s1.pkl.bz2")
In [332]: rt = pd.read_pickle("s1.pkl.bz2")
In [333]: rt
Out[333]:
0 0.478412
1 -0.783748
2 1.403558
3 -0.539282
4 -1.651012
5 0.692072
6 1.022171
...
993 -1.613932
994 1.088104
995 -0.632963
996 -0.585314
997 -0.275038
998 -0.937512
999 0.632369
Name: A, Length: 1000, dtype: float64
24.7 msgpack
pandas supports the msgpack format for object serialization. This is a lightweight portable binary format, similar
to binary JSON, that is highly space efficient and provides good performance for both writing (serialization) and
reading (deserialization).
Warning: This is a very new feature of pandas. We intend to provide certain optimizations in the io of the
msgpack data. Since this is marked as an EXPERIMENTAL LIBRARY, the storage format may not be stable
until a future release.
In [335]: df.to_msgpack('foo.msg')
In [336]: pd.read_msgpack('foo.msg')
Out[336]:
A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359
You can pass a list of objects and you will receive them back on deserialization.
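A sketch of packing several objects at once; s is assumed to be a small date-indexed Series such as
s = pd.Series(np.random.rand(5), index=pd.date_range('20130101', periods=5)):

pd.to_msgpack('foo.msg', df, 'foo', np.array([1, 2, 3]), s)
pd.read_msgpack('foo.msg')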
In [339]: pd.read_msgpack('foo.msg')
Out[339]:
[ A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359, 'foo', array([1, 2, 3]), 2013-01-01 0.548134
2013-01-02 0.503447
2013-01-03 0.348438
2013-01-04 0.707267
2013-01-05 0.261656
Freq: D, dtype: float64]
In [342]: pd.read_msgpack('foo.msg')
Out[342]:
[ A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359, 'foo', array([1, 2, 3]), 2013-01-01 0.548134
2013-01-02 0.503447
2013-01-03 0.348438
2013-01-04 0.707267
2013-01-05 0.261656
Freq: D, dtype: float64, A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
Unlike other io methods, to_msgpack is available both on a per-object basis, df.to_msgpack(), and as the
top-level pd.to_msgpack(...), where you can pack arbitrary collections of Python lists, dicts and scalars,
intermixed with pandas objects.
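A sketch producing the foo2.msg contents read back below (df and s as before):

pd.to_msgpack('foo2.msg', {'dict': [{'df': df}, {'string': 'foo'},
                                    {'scalar': 1.0}, {'s': s}]})
pd.read_msgpack('foo2.msg')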
In [344]: pd.read_msgpack('foo2.msg')
Out[344]:
{'dict': ({'df': A B
0 0.170801 0.895366
1 0.838238 0.052592
2 0.664140 0.289750
3 0.449593 0.872087
4 0.983618 0.744359},
{'string': 'foo'},
{'scalar': 1.0},
{'s': 2013-01-01 0.548134
2013-01-02 0.503447
2013-01-03 0.348438
2013-01-04 0.707267
2013-01-05 0.261656
Freq: D, dtype: float64})}
In [345]: df.to_msgpack()
Out[345]: b'\x84\xa3typ\xadblock_
˓→manager\xa5klass\xa9DataFrame\xa4axes\x92\x86\xa3typ\xa5index\xa5klass\xa5Index\xa4name\xc0\xa5dtyp
˓→index\xa5klass\xaaRangeIndex\xa4name\xc0\xa5start\x00\xa4stop\x05\xa4step\x01\xa6blocks\x91\x86\xa4
˓→<\xfd\xd2f\xcf\xdc\xc5?0\x15\xebN\xd9\xd2\xea?,\x9c\x16A\xa2@\xe5?\xd8/\xdd\xf4
˓→"\xc6\xdc?\x11\x1e\x97\x1b\xcdy\xef?&\x1e<\xee\xd6\xa6\xec?p\xd3;\xb2N\xed\xaa?
˓→h\xcb\xb1\xbdB\x8b\xd2?\xaf4\x01r"\xe8\xeb?)G6\xd9\xc9\xd1\xe7?
˓→\xa5shape\x92\x02\x05\xa5dtype\xa7float64\xa5klass\xaaFloatBlock\xa8compress\xc0'
Furthermore you can concatenate the strings to produce a list of the original objects.
24.8 HDF5 (PyTables)
HDFStore is a dict-like object which reads and writes pandas using the high performance HDF5 format using the
excellent PyTables library. See the cookbook for some advanced strategies.
Warning: pandas requires PyTables >= 3.0.0. There is an indexing bug in PyTables < 3.2 which may appear
when querying stores using an index. If you see a subset of results being returned, upgrade to PyTables >= 3.2.
Stores created previously will need to be rewritten using the updated version.
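A store is created by instantiating HDFStore with a file path, for example:

store = pd.HDFStore('store.h5')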
In [348]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a dict:
In [349]: np.random.seed(1234)
In [355]: store['df'] = df
In [356]: store['wp'] = wp
In [358]: store
Out[358]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
A B C
2000-01-01 0.887163 0.859588 -0.636524
2000-01-02 0.015696 -2.242685 1.150036
2000-01-03 0.991946 0.953324 -2.021255
2000-01-04 -0.334077 0.002118 0.405453
2000-01-05 0.289092 1.321158 -1.546906
2000-01-06 -0.202646 -0.655969 0.193421
2000-01-07 0.553439 1.318152 -0.469305
2000-01-08 0.675554 -1.817027 -0.183109
In [362]: store
Out[362]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [363]: store.close()
In [364]: store
Out[364]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [365]: store.is_open
Out[365]: False
# Working with, and automatically closing the store using a context manager
In [366]: with pd.HDFStore('store.h5') as store:
.....: store.keys()
.....:
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing, similar to how
read_csv and to_csv work.
In [367]: df_tl = pd.DataFrame(dict(A=list(range(5)), B=list(range(5))))
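A sketch of round-tripping df_tl through the top-level functions (the file name is assumed):

df_tl.to_hdf('store_tl.h5', 'table', append=True)
pd.read_hdf('store_tl.h5', 'table', where=['index > 2'])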
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [370]: df_with_missing = pd.DataFrame({'col1': [0, np.nan, 2],
.....: 'col2': [1, np.nan, np.nan]})
.....:
In [371]: df_with_missing
Out[371]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [377]: panel_with_major_axis_all_missing = pd.Panel(matrix,
   .....:                                              items=['Item1', 'Item2', 'Item3'],
   .....:                                              major_axis=[1, 2],
   .....:                                              minor_axis=['A', 'B', 'C'])
   .....:
In [378]: panel_with_major_axis_all_missing
Out[378]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 2 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item3
Major_axis axis: 1 to 2
Minor_axis axis: A to C
In [381]: reloaded
Out[381]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 1 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item3
Major_axis axis: 2 to 2
Minor_axis axis: A to C
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply remove them and
rewrite). Nor are they queryable; they must be retrieved in their entirety. They also do not support DataFrames with
non-unique column names. The fixed format stores offer very fast writing and slightly faster reading than table
stores. This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
Warning: A fixed format will raise a TypeError if you try to retrieve using a where:
pd.DataFrame(randn(10, 2)).to_hdf('test_fixed.h5', 'df')
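Attempting a queried read on such a store, as sketched below, will fail:

pd.read_hdf('test_fixed.h5', 'df', where='index > 5')
# raises TypeError because fixed format stores cannot be queried with a where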
HDFStore supports another PyTables format on disk, the table format. Conceptually a table is shaped very
much like a DataFrame, with rows and columns. A table may be appended to in the same or other sessions.
In addition, delete and query type operations are supported. This format is specified by format='table' or
format='t' to append or put or to_hdf.
In [387]: store
Out[387]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Note: You can also create a table by passing format='table' or format='t' to a put operation.
Keys to a store can be specified as a string. These can be in a hierarchical path-name like format (e.g. foo/bar/
bah), which will generate a hierarchy of sub-stores (or Groups in PyTables parlance). Keys can be specified without
the leading '/' and are always absolute (e.g. 'foo' refers to '/foo'). Removal operations can remove everything in
the sub-store and below, so be careful.
In [393]: store
Out[393]:
In [396]: store
Out[396]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Warning: Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored
under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node by using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array),
˓→'axis1' (Array)]
Storing mixed-dtype data is supported. Strings are stored as fixed-width using the maximum size of the appended
column. Subsequent attempts at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append will set a larger minimum for the string
columns. Storing of floats, strings, ints, bools, datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default nan representation on disk (which con-
verts to/from np.nan); this defaults to nan.
In [402]: df_mixed1
Out[402]:
A B C string int bool datetime64
0 0.704721 -1.152659 -0.430096 string 1 True 2001-01-02
1 -0.785435 0.631979 0.767369 string 1 True 2001-01-02
2 0.462060 0.039513 0.984920 string 1 True 2001-01-02
3 NaN NaN 0.270836 NaN 1 True NaT
4 NaN NaN 1.391986 NaN 1 True NaT
5 -0.926254 1.321106 0.079842 string 1 True 2001-01-02
6 2.007843 0.152631 -0.399965 string 1 True 2001-01-02
7 0.226963 0.164530 -1.027851 string 1 True 2001-01-02
In [403]: df_mixed1.get_dtype_counts()
Out[403]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
Storing multi-index DataFrames as tables is very similar to storing/selecting from homogeneous index
DataFrames.
In [407]: df_mi
Out[407]:
A B C
foo bar
foo one -0.584718 0.816594 -0.081947
two -0.344766 0.528288 -1.068989
three -0.511881 0.291205 0.566534
bar one 0.503592 0.285296 0.484288
two 1.363482 -0.781105 -0.468018
baz two 1.224574 -1.281108 0.875476
three -1.710715 -0.450765 0.749164
qux one -0.203933 -0.182175 0.680656
two -1.818499 0.047072 0.394844
three -0.248432 -0.617707 -0.682884
In [409]: store.select('df_mi')
Out[409]:
A B C
foo bar
foo one -0.584718 0.816594 -0.081947
two -0.344766 0.528288 -1.068989
three -0.511881 0.291205 0.566534
bar one 0.503592 0.285296 0.484288
two 1.363482 -0.781105 -0.468018
baz two 1.224574 -1.281108 0.875476
three -1.710715 -0.450765 0.749164
qux one -0.203933 -0.182175 0.680656
two -1.818499 0.047072 0.394844
three -0.248432 -0.617707 -0.682884
A B C
foo bar
bar one 0.503592 0.285296 0.484288
two 1.363482 -0.781105 -0.468018
24.8.6 Querying
select and delete operations have an optional criterion that can be specified to select/delete only a subset of the
data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
A query is specified using the Term class under the hood, as a boolean expression.
• index and columns are supported indexers of a DataFrame.
• major_axis, minor_axis, and items are supported indexers of the Panel.
• if data_columns are specified, these can be used as additional indexers.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
• | : or
• & : and
• ( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note:
• = will be automatically expanded to the comparison operator ==
• ~ is the not operator, but can only be used in very limited circumstances
• If a list/tuple of expressions is passed they will be combined via &
Note: Passing a string to a query by interpolating it into the query expression is not recommended. Simply assign the
string of interest to a variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select('df', 'index == string')
instead of this
string = "HolyMoly'"
store.select('df', 'index == %s' % string)
The latter will not work and will raise a SyntaxError. Note that there's a single quote followed by a double quote
in the string variable.
If you must interpolate, use the '%r' format specifier
store.select('df', 'index == %r' % string)
In [416]: store
Out[416]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Out[417]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to B
The columns keyword can be supplied to select a list of columns to be returned, this is equivalent to passing a
'columns=list_of_columns_to_filter':
start and stop parameters can be specified to limit the total search space. These are in terms of the total number
of rows in a table.
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 1 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-03 00:00:00
Minor_axis axis: A to B
Note: select will raise a ValueError if the query expression has an unknown variable reference. Usually this
means that you are trying to select on a column that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
You can store and query using the timedelta64[ns] type. Terms can be specified in the format:
<float>(<unit>), where float may be signed (and fractional), and unit can be D,s,ms,us,ns for the timedelta.
Here’s an example:
In [424]: dftd
Out[424]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
24.8.6.3 Indexing
You can create/modify an index for a table with create_table_index after data is already in the table (after an
append/put operation). Creating a table index is highly encouraged. This will speed your queries a great deal
when you use a select with the indexed dimension as the where.
Note: Indexes are automagically created on the indexables and any data columns you specify. This behavior can be
turned off by passing index=False to append.
In [430]: i = store.root.df.table.cols.index.index
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append,
then recreate at the end.
In [432]: df_1 = pd.DataFrame(randn(10, 2), columns=list('AB'))
In [437]: st.get_storer('df').table
Out[437]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
In [439]: st.get_storer('df').table
Out[439]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, full, shuffle, zlib(1)).is_csi=True}
In [440]: st.close()
You can designate (and index) certain columns that you want to be able to perform queries on (other than the indexable
columns, which you can always query). For instance say you want to perform this common operation, on-disk, and
return just the frame that matches this query. You can specify data_columns=True to force all columns to be
data_columns.
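For example, a sketch of appending the frame df_dc shown below with several designated data columns and then
querying on one of them:

store.append('df_dc', df_dc, data_columns=['B', 'C', 'string', 'string2'])
store.select('df_dc', where='B > 0')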
In [447]: df_dc
Out[447]:
A B C string string2
2000-01-01 0.887163 0.859588 -0.636524 foo cool
2000-01-02 0.015696 1.000000 1.000000 foo cool
2000-01-03 0.991946 1.000000 1.000000 foo cool
2000-01-04 -0.334077 0.002118 0.405453 foo cool
2000-01-05 0.289092 1.321158 -1.546906 NaN cool
2000-01-06 -0.202646 -0.655969 0.193421 NaN cool
2000-01-07 0.553439 1.318152 -0.469305 foo cool
2000-01-08 0.675554 -1.817027 -0.183109 bar cool
# getting creative
In [450]: store.select('df_dc', 'B > 0 & C > 0 & string == foo')
Out[450]:
A B C string string2
2000-01-02 0.015696 1.000000 1.000000 foo cool
2000-01-03 0.991946 1.000000 1.000000 foo cool
2000-01-04 -0.334077 0.002118 0.405453 foo cool
A B C string string2
2000-01-02 0.015696 1.000000 1.000000 foo cool
2000-01-03 0.991946 1.000000 1.000000 foo cool
2000-01-04 -0.334077 0.002118 0.405453 foo cool
There is some performance degradation by making lots of columns into data columns, so it is up to the user to designate
these. In addition, you cannot change data columns (nor indexables) after the first append/put operation (Of course
you can simply read in the data and create a new table!).
24.8.6.5 Iterator
Note: You can also use the iterator with read_hdf which will open, then automatically close the store when finished
iterating.
for df in pd.read_hdf('store.h5', 'df', chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you are doing a query, then the chunksize will
subdivide the total rows in the table and the query applied, returning an iterator on potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return chunks.
In [454]: dfeq = pd.DataFrame({'number': np.arange(1, 11)})
In [455]: dfeq
Out[455]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
To retrieve a single indexable or data column, use the method select_column. This will, for example, enable you
to get the index very quickly. These return a Series of the result, indexed by the row number. These do not currently
accept the where selector.
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates
Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an Int64Index
of the resulting locations. These coordinates can also be passed to subsequent where operations.
In [466]: c
Out[466]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
0 1
2002-01-02 -0.178266 -0.064638
2002-01-03 -1.204956 -3.880898
2002-01-04 0.974470 0.415160
2002-01-05 1.751967 0.485011
2002-01-06 -0.170894 0.748870
2002-01-07 0.629793 0.811053
2002-01-08 2.133776 0.238459
... ... ...
2002-09-20 -0.181434 0.612399
2002-09-21 -0.763324 -0.354962
2002-09-22 -0.261776 0.812126
2002-09-23 0.482615 -0.886512
2002-09-24 -0.037757 -0.562953
2002-09-25 0.897706 0.383232
2002-09-26 -1.324806 1.139269
Sometimes your query can involve creating a list of rows to select. Usually this mask would be a resulting index
from an indexing operation. This example selects the months of a DatetimeIndex which are 5.
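A sketch of this pattern against a table of daily data (the names are illustrative):

df_mask = pd.DataFrame(np.random.randn(1000, 2),
                       index=pd.date_range('1/1/2000', periods=1000))
store.append('df_mask', df_mask)
c = store.select_column('df_mask', 'index')
where = c[pd.DatetimeIndex(c).month == 5].index
store.select('df_mask', where=where)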
Storer Object
If you want to inspect the stored object, retrieve via get_storer. You could use this programmatically to say get
the number of rows in an object.
In [473]: store.get_storer('df_dc').nrows
Out[473]: 8
The methods append_to_multiple and select_as_multiple can perform appending/selecting from mul-
tiple tables at once. The idea is to have one table (call it the selector table) that indexes most/all of the columns, and
to perform your queries on it. The other table(s) are data tables with an index matching the selector table's index. You
can then perform a very fast query on the selector table, yet get lots of data back. This method is similar to having a
very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame into multiple tables according to d, a dictio-
nary that maps the table names to a list of ‘columns’ you want in that table. If None is used in place of a list, that
table will have the remaining unspecified columns of the given DataFrame. The argument selector defines which
table is the selector table (which you can make queries from). The argument dropna will drop rows from the input
DataFrame to ensure tables are synchronized. This means that if a row for one of the tables being written to is
entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES. Remember that
entirely np.Nan rows are not written to the HDFStore, so if you choose to call dropna=False, some tables may
have more rows than others, and therefore select_as_multiple may not work or it may return unexpected
results.
In [474]: df_mt = pd.DataFrame(randn(8, 6), index=pd.date_range('1/1/2000', periods=8),
   .....:                      columns=['A', 'B', 'C', 'D', 'E', 'F'])
   .....:
In [478]: store
Out[478]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [480]: store.select('df2_mt')
Out[480]:
C D E F foo
2000-01-01 0.607460 0.790907 0.852225 0.096696 bar
2000-01-02 0.811031 -0.356817 1.047085 0.664705 bar
2000-01-03 -0.764381 -0.287229 -0.089351 -1.035115 bar
2000-01-04 -1.948100 -0.116556 0.800597 -0.796154 bar
2000-01-05 -0.717627 0.156995 -0.344718 -0.171208 bar
2000-01-06 1.541729 0.205256 1.998065 0.953591 bar
2000-01-07 1.391070 0.303013 1.093347 -0.101000 bar
2000-01-08 -1.507639 0.089575 0.658822 -1.037627 bar
# as a multiple
In [481]: store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
.....: selector = 'df1_mt')
.....:
Out[481]:
A B C D E F foo
2000-01-01 0.714697 0.318215 0.607460 0.790907 0.852225 0.096696 bar
2000-01-06 0.538116 0.226388 1.541729 0.205256 1.998065 0.953591 bar
You can delete from a table selectively by specifying a where. In deleting rows, it is important to understand that
PyTables deletes rows by erasing the rows, then moving the following data. Thus deleting can potentially be a very
expensive operation depending on the orientation of your data. This is especially true in higher dimensional objects
(Panel and Panel4D). To get optimal performance, it's worthwhile to have the dimension you are deleting be the
first of the indexables.
Data is ordered (on the disk) in terms of the indexables. Here’s a simple use case. You store panel-type data, with
dates in the major_axis and ids in the minor_axis. The data is then interleaved like this:
• date_1 - id_1 - id_2 - . - id_n
• date_2 - id_1 - . - id_n
It should be clear that a delete operation on the major_axis will be fairly quick, as one chunk is removed, then the
following data moved. On the other hand a delete operation on the minor_axis will be very expensive. In this case
it would almost certainly be faster to rewrite the table using a where that selects all but the missing data.
In [483]: store.select('wp')
Out[483]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 2 (major_axis) x 4 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-02 00:00:00
Minor_axis axis: A to D
Warning: Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files automatically. Thus, repeatedly
deleting (or removing nodes) and adding again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
24.8.8.1 Compression
PyTables allows the stored data to be compressed. This applies to all kinds of stores, not just tables. Two parameters
are used to control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed. complevel=0 and complevel=None disable
compression and 0<complevel<10 enables compression.
complib specifies which compression library to use. If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates or speed and the results will depend
on the type of data. Which type of compression to choose depends on your specific needs and data. The list of
supported compression libraries:
• zlib: The default compression library. A classic in terms of compression, achieves good compression rates
but is somewhat slow.
• lzo: Fast compression and decompression.
• bzip2: Good compression rates.
• blosc: Fast compression and decompression.
New in version 0.20.2: Support for alternative blosc compressors:
• blosc:blosclz This is the default compressor for blosc
• blosc:lz4: A compact, very popular and fast compressor.
• blosc:lz4hc: A tweaked version of LZ4, produces better compression ratios at the expense of
speed.
• blosc:snappy: A popular compressor used in many places.
• blosc:zlib: A classic; somewhat slower than the previous ones, but achieving better compression
ratios.
• blosc:zstd: An extremely well balanced codec; it provides the best compression ratios among
the others above, and at reasonably fast speed.
If complib is defined as something other than the listed libraries a ValueError exception is
issued.
Note: If the library specified with the complib option is missing on your platform, compression defaults to zlib
without further ado.
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
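Two sketches, one enabling compression for everything written to a new store and one compressing a single appended
table:

store_compressed = pd.HDFStore('store_compressed.h5', complevel=9,
                               complib='blosc:blosclz')
store.append('df', df, complib='zlib', complevel=5)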
24.8.8.2 ptrepack
PyTables offers better write performance when tables are compressed after they are written, as opposed to turning on
compression at the very beginning. You can use the supplied PyTables utility ptrepack. In addition, ptrepack
can change compression levels after the fact.
Furthermore ptrepack in.h5 out.h5 will repack the file to allow you to reuse previously deleted space. Alter-
natively, one can simply remove the file and write again, or use the copy method.
24.8.8.3 Caveats
Warning: HDFStore is not threadsafe for writing. The underlying PyTables only supports concurrent
reads (via threading or processes). If you need reading and writing at the same time, you need to serialize these
operations in a single thread in a single process. You will corrupt your data otherwise. See (GH2397) for more
information.
• If you use locks to manage write access between multiple processes, you may want to use fsync() before
releasing write locks. For convenience you can use store.flush(fsync=True) to do this for you.
• Once a table is created its items (Panel) / columns (DataFrame) are fixed; only exactly the same columns can
be appended
• Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not necessarily equal across time-
zone versions. So if data is localized to a specific timezone in the HDFStore using one version of a timezone
library and that data is updated with another version, the data will be converted to UTC since these timezones
are not considered equal. Either use the same version of timezone library or use tz_convert with the updated
timezone definition.
Warning: PyTables will show a NaturalNameWarning if a column name cannot be used as an attribute
selector. Natural identifiers contain only letters, numbers, and underscores, and may not begin with a number.
Other identifiers cannot be used in a where clause and are generally a bad idea.
24.8.9 DataTypes
HDFStore will map an object dtype to the PyTables underlying dtype. This means the following types are known
to work:
You can write data that contains category dtypes to a HDFStore. Queries work the same as if it was an object
array. However, the category dtyped data is stored in a more efficient manner.
In [484]: dfcat = pd.DataFrame({'A': pd.Series(list('aabbcdba')).astype('category'),
.....: 'B': np.random.randn(8) })
.....:
In [485]: dfcat
Out[485]:
A B
0 a 0.603273
1 a 0.262554
2 b -0.979586
3 b 2.132387
4 c 0.892485
5 d 1.996474
6 b 0.231425
7 a 0.980070
In [486]: dfcat.dtypes
Out[486]:
A category
B float64
dtype: object
In [490]: result
Out[490]:
A B
2 b -0.979586
3 b 2.132387
4 c 0.892485
6 b 0.231425
In [491]: result.dtypes
Out[491]:
A category
B float64
dtype: object
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns. A string
column itemsize is calculated as the maximum of the length of data (for that column) that is passed to the HDFStore
in the first append. Subsequent appends may introduce a string for a column larger than the column can hold; in that
case an Exception will be raised (otherwise you could have a silent truncation of these columns, leading to loss of
information). In the future we may relax this and allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key
to allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note: If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of
any string passed
In [493]: dfs
Out[493]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
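A sketch of appending dfs with a minimum string width; passing the dict form also turns column A into a data
column:

# all string columns get an itemsize of at least 30
store.append('dfs', dfs, min_itemsize=30)

# only column A gets the larger itemsize (stored under a second key here)
store.append('dfs2', dfs, min_itemsize={'A': 30})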
In [495]: store.get_storer('dfs').table
In [497]: store.get_storer('dfs2').table
Out[497]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"A": Index(6, medium, shuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to
the string value nan. You could inadvertently turn an actual nan value into a missing value.
In [498]: dfss = pd.DataFrame(dict(A=['foo', 'bar', 'nan']))
In [499]: dfss
Out[499]:
A
0 foo
1 bar
2 nan
In [501]: store.select('dfss')
Out[501]:
A
0 foo
1 bar
2 NaN
In [503]: store.select('dfss2')
Out[503]:
HDFStore writes table format objects in specific formats suitable for producing loss-less round trips to pandas
objects. For external compatibility, HDFStore can read native PyTables format tables.
It is possible to write an HDFStore object that can easily be imported into R using the rhdf5 library (Package
website). Create a table format store like this:
In [504]: np.random.seed(1)
In [506]: df_for_r.head()
Out[506]:
first second class
0 0.417022 0.326645 0
1 0.720324 0.527058 0
2 0.000114 0.885942 1
3 0.302333 0.357270 1
4 0.146756 0.908535 1
In [509]: store_export
Out[509]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5 library. The following example function reads
the corresponding column names and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
return(data)
}
Note: The R function lists the entire HDF5 file’s contents and assembles the data.frame object from all matching
nodes, so use this only as a starting point if you have stored multiple DataFrame objects to a single HDF5 file.
24.8.11 Performance
• The tables format comes with a writing performance penalty as compared to fixed stores. The benefit is the
ability to append/delete and query (potentially very large amounts of data). Write times are generally longer as
compared with regular stores. Query times can be quite fast, especially on an indexed axis.
• You can pass chunksize=<int> to append, specifying the write chunksize (default is 50000). This will
significantly lower your memory usage on writing.
• You can pass expectedrows=<int> to the first append, to set the TOTAL number of rows that
PyTables will expect. This will optimize read/write performance.
• Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus
a table is unique on major, minor pairs)
• A PerformanceWarning will be raised if you are attempting to store types that will be pickled by PyTables
(rather than stored as endemic types). See Here for more information and some solutions.
24.9 Feather
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data frames
efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including
extension dtypes such as categorical and datetime with tz.
Several caveats:
• This is a newer library, and the format, though stable, is not guaranteed to be backward compatible to the earlier
versions.
• The format will NOT write an Index, or MultiIndex for the DataFrame and will raise an error if a non-
default one is provided. You can .reset_index() to store the index or .reset_index(drop=True)
to ignore it.
• Duplicate column names and non-string column names are not supported.
• Unsupported types include Period and actual Python object types. These will raise a helpful error message
on an attempt at serialization.
See the Full Documentation.
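The frame used below can be constructed, for example, like this (a sketch that reproduces the dtypes shown in the
output):

df = pd.DataFrame({'a': list('abc'),
                   'b': list(range(1, 4)),
                   'c': np.arange(3, 6).astype('u1'),
                   'd': np.arange(4.0, 7.0, dtype='float64'),
                   'e': [True, False, True],
                   'f': pd.Categorical(list('abc')),
                   'g': pd.date_range('20130101', periods=3),
                   'h': pd.date_range('20130101', periods=3, tz='US/Eastern'),
                   'i': pd.date_range('20130101', periods=3, freq='ns')})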
In [511]: df
Out[511]:
a b c d e f g h
˓→ i
0 a 1 3 4.0 True a 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.
˓→000000000
In [512]: df.dtypes
Out[512]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
In [513]: df.to_feather('example.feather')
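Reading back is a single call:

result = pd.read_feather('example.feather')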
In [515]: result
Out[515]:
a b c d e f g h
˓→ i
0 a 1 3 4.0 True a 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.
˓→000000000
# we preserve dtypes
In [516]: result.dtypes
Out[516]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
24.10 Parquet
Note: The two supported engines, pyarrow and fastparquet, are very similar and should read/write nearly identical
parquet format files. Currently pyarrow does not support timedelta data, and fastparquet>=0.1.4 supports
timezone aware datetimes. These libraries differ by having different underlying dependencies (fastparquet uses
numba, while pyarrow uses a c-library).
In [517]: df = pd.DataFrame({'a': list('abc'),
   .....:                    'b': list(range(1, 4)),
   .....:                    'c': np.arange(3, 6).astype('u1'),
   .....:                    'd': np.arange(4.0, 7.0, dtype='float64'),
   .....:                    'e': [True, False, True],
   .....:                    'f': pd.date_range('20130101', periods=3),
   .....:                    'g': pd.date_range('20130101', periods=3, tz='US/Eastern')})
   .....:
In [518]: df
Out[518]:
a b c d e f g
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00
In [519]: df.dtypes
Out[519]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
dtype: object
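A sketch of writing with either engine and reading back (file names assumed); the column-subset read corresponds to
the two-column dtypes listing further below:

df.to_parquet('example_pa.parquet', engine='pyarrow')
df.to_parquet('example_fp.parquet', engine='fastparquet')

result = pd.read_parquet('example_pa.parquet', engine='pyarrow')

# only read a subset of the columns
result = pd.read_parquet('example_pa.parquet', engine='pyarrow', columns=['a', 'b'])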
In [524]: result.dtypes
Out[524]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
dtype: object
In [526]: result.dtypes
Out[526]:
a object
b int64
dtype: object
24.11 SQL Queries
The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce
dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will
need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for
MySQL. For SQLite this is included in Python's standard library by default. You can find an overview of supported
drivers for each SQL dialect in the SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and for mysql for backwards compatibility,
but this is deprecated and will be removed in a future version). This mode requires a Python database adapter which
respects the Python DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
24.11.1 pandas.read_sql_table
Name of SQL schema in database to query (if database flavor supports this). Uses
default schema if None (default).
index_col : string or list of strings, optional, default: None
Column(s) to set as index (MultiIndex).
coerce_float : boolean, default True
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal)
to floating point. Can result in loss of precision.
parse_dates : list or dict, default: None
• List of column names to parse as dates.
• Dict of {column_name: format string} where format string is strftime compatible in case of
parsing string times or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
• Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword
arguments of pandas.to_datetime(). Especially useful with databases without native
Datetime support, such as SQLite.
columns : list, default: None
List of column names to select from SQL table
chunksize : int, default None
If specified, returns an iterator where chunksize is the number of rows to include in each
chunk.
Returns
DataFrame
See also:
read_sql
Notes
Any datetime values with time zone information will be converted to UTC.
24.11.2 pandas.read_sql_query
read_sql
Notes
Any datetime values with time zone information parsed via the parse_dates parameter will be converted to UTC.
24.11.3 pandas.read_sql
24.11.4 pandas.DataFrame.to_sql
Databases supported by SQLAlchemy [R16] are supported. Tables can be newly created, appended to, or
overwritten.
Parameters name : string
Name of SQL table.
con : sqlalchemy.engine.Engine or sqlite3.Connection
Using SQLAlchemy makes it possible to use any DB supported by that library. Legacy
support is provided for sqlite3.Connection objects.
schema : string, optional
Specify the schema (if database flavor supports this). If None, use default schema.
if_exists : {‘fail’, ‘replace’, ‘append’}, default ‘fail’
How to behave if the table already exists.
• fail: Raise a ValueError.
• replace: Drop the table before inserting new values.
• append: Insert new values to the existing table.
index : boolean, default True
Write DataFrame index as a column. Uses index_label as the column name in the table.
index_label : string or sequence, default None
Column label for index column(s). If None is given (default) and index is True, then the
index names are used. A sequence should be given if the DataFrame uses MultiIndex.
chunksize : int, optional
Rows will be written in batches of this size at a time. By default, all rows will be written
at once.
dtype : dict, optional
Specifying the datatype for columns. The keys should be the column names and the
values should be the SQLAlchemy types or strings for the sqlite3 legacy mode.
Raises
ValueError
When the table already exists and if_exists is ‘fail’ (the default).
See also:
References
[R16], [R17]
Examples
Specify the dtype (especially useful for integers with missing values). Notice that while pandas is forced to store
the data as floating point, the database supports nullable integers. When fetching the data with Python, we get
back integer scalars.
In the following example, we use the SQLite SQL database engine. You can use a temporary SQLite database where
data are stored in “memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database
URI. You only need to create the engine once per database you are connecting to. For more information on
create_engine() and the URI formatting, see the examples below and the SQLAlchemy documentation.
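For example, a minimal engine for an in-memory SQLite database (the later sketches in this section assume an engine like this one; substitute your own database URI as needed):

from sqlalchemy import create_engine

# an in-memory SQLite engine
engine = create_engine('sqlite:///:memory:')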
If you want to manage your own connections you can pass one of those instead:
Assuming the following data is in a DataFrame data, we can insert it into the database using to_sql().
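A sketch of such an insert (the exact frame is an assumption here; the values are chosen to match the query results shown later in this section):

import pandas as pd

data = pd.DataFrame({'id': [26, 42, 63],
                     'Date': pd.to_datetime(['2010-10-18', '2010-10-19', '2010-10-20']),
                     'Col_1': ['X', 'Y', 'Z'],
                     'Col_2': [27.5, -12.5, 5.73],
                     'Col_3': [True, False, True]})
data.to_sql('data', engine)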
With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This
can be avoided by setting the chunksize parameter when calling to_sql. For example, the following writes data
to the database in batches of 1000 rows at a time:
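A sketch of such a chunked write (the table name is an assumption):

data.to_sql('data_chunked', engine, chunksize=1000)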
to_sql() will try to map your data to an appropriate SQL data type based on the dtype of the data. When you have
columns of dtype object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of any of the columns by using the
dtype argument. This argument needs a dictionary mapping column names to SQLAlchemy types (or strings for the
sqlite3 fallback mode). For example, specifying to use the sqlalchemy String type instead of the default Text type
for string columns:
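For instance (a sketch; the column name Col_1 is taken from the example frame above):

from sqlalchemy.types import String

data.to_sql('data_dtype', engine, dtype={'Col_1': String})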
Note: Due to the limited support for timedeltas in the different database flavors, columns with type timedelta64
will be written to the database as integer values (nanoseconds) and a warning will be raised.
Note: Columns of category dtype will be converted to the dense representation as you would get with
np.asarray(categorical) (e.g. for string categories this gives an array of strings). Because of this, reading the
database table back in does not generate a categorical.
read_sql_table() will read a database table given the table name and optionally a subset of columns to read.
Note: In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed.
You can also specify the name of the column as the DataFrame index, and specify a subset of columns to be read.
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
If needed you can explicitly specify a format string, or a dict of arguments to pass to pandas.to_datetime():
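Hypothetical calls along these lines illustrate the options (the column name Date is an assumption):

pd.read_sql_table('data', engine, parse_dates=['Date'])
pd.read_sql_table('data', engine, parse_dates={'Date': '%Y-%m-%d'})
pd.read_sql_table('data', engine,
                  parse_dates={'Date': {'format': '%Y-%m-%d %H:%M:%S'}})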
Reading from and writing to different schemas is supported through the schema keyword in the
read_sql_table() and to_sql() functions. Note however that this depends on the database flavor (sqlite
does not have schemas). For example:
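A sketch (the schema name is an assumption):

df.to_sql('table', engine, schema='other_schema')
pd.read_sql_table('table', engine, schema='other_schema')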
24.11.8 Querying
You can query using raw SQL in the read_sql_query() function. In this case you must use the SQL variant
appropriate for your database. When using SQLAlchemy, you can also pass SQLAlchemy Expression language
constructs, which are database-agnostic.
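A query along these lines (the exact SQL is an assumption) produces the result shown below:

pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)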
Out[538]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument. Specifying this will return an iterator through
chunks of the query result, along the lines of the following (the table name data_chunks is assumed):
for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
    print(chunk)
a b c
0 0.280665 -0.073113 1.160339
1 0.369493 1.904659 1.111057
2 0.659050 -1.627438 0.602319
3 0.420282 0.810952 1.044442
4 -0.400878 0.824006 -0.562305
a b c
0 1.954878 -1.331952 -1.760689
1 -1.650721 -0.890556 -1.119115
2 1.956079 -0.326499 -1.342676
3 1.114383 -0.586524 -1.236853
4 0.875839 0.623362 -0.434957
a b c
0 1.407540 0.129102 1.616950
1 0.502741 1.558806 0.109403
2 -1.219744 2.449369 -0.545774
3 -0.198838 -0.700399 -0.203394
4 0.242669 0.201830 0.661020
a b c
0 1.792158 -0.120465 -1.233121
1 -1.182318 -0.665755 -1.674196
2 0.825030 -0.498214 -0.310985
3 -0.001891 -1.396620 -0.861316
4 0.674712 0.618539 -0.443172
You can also run a plain query without creating a DataFrame with execute(). This is useful for queries that don’t
return values, such as INSERT. This is functionally equivalent to calling execute on the SQLAlchemy engine or db
connection object. Again, you must use the SQL syntax variant appropriate for your database.
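For example, using the execute() helper in pandas.io.sql (the statements are illustrative):

from pandas.io import sql

sql.execute('SELECT * FROM table_name', engine)
sql.execute('INSERT INTO table_name VALUES(?, ?, ?)', engine,
            params=[(1, 12.2, True)])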
To connect with SQLAlchemy you use the create_engine() function to create an engine object from database
URI. You only need to create the engine once per database you are connecting to.
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
engine = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')
engine = create_engine('oracle://scott:[email protected]:1521/sidname')
engine = create_engine('mssql+pyodbc://mydsn')
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine('sqlite:///foo.db')
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy
expressions
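A sketch of such a query, assuming the data table written earlier has been described to SQLAlchemy (the column types here are assumptions):

import sqlalchemy as sa

metadata = sa.MetaData()
data_table = sa.Table('data', metadata,
                      sa.Column('index', sa.Integer),
                      sa.Column('Date', sa.DateTime),
                      sa.Column('Col_1', sa.String),
                      sa.Column('Col_2', sa.Float),
                      sa.Column('Col_3', sa.Boolean))

pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 == True), engine)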
Out[546]:
index Date Col_1 Col_2 Col_3
0 0 2010-10-18 X 27.50 True
1 2 2010-10-20 Z 5.73 True
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam():
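For example (a sketch building on the data_table description above):

import datetime as dt

expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam('date'))
pd.read_sql(expr, engine, params={'date': dt.datetime(2010, 10, 18)})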
The use of sqlite is supported without using SQLAlchemy. This mode requires a Python database adapter which
respects the Python DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(':memory:')
data.to_sql('data', con)
pd.read_sql_query("SELECT * FROM data", con)
Warning: Starting in 0.20.0, pandas has split off Google BigQuery support into the separate package
pandas-gbq. You can pip install pandas-gbq to get it.
The method to_stata() will write a DataFrame into a .dta file. The format version of this file is always 115 (Stata
12).
In [550]: df = pd.DataFrame(randn(10, 2), columns=list('AB'))
In [551]: df.to_stata('stata.dta')
Stata data files have limited data type support; only strings with 244 or fewer characters, int8, int16, int32,
float32 and float64 can be stored in .dta files. Additionally, Stata reserves certain values to represent missing
data. Exporting a non-missing value that is outside of the permitted range in Stata for a particular data type will retype
the variable to the next larger size. For example, int8 values are restricted to lie between -127 and 100 in Stata, and
so variables with values above 100 will trigger a conversion to int16. nan values in floating points data types are
stored as the basic missing data type (. in Stata).
Note: It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64, bool, uint8, uint16, uint32 by casting
to the smallest supported type that can represent the data. For example, data with a type of uint8 will be cast to
int8 if all values are less than 100 (the upper bound for non-missing int8 data in Stata), or, if values are outside of
this range, the variable is cast to int16.
Warning: Conversion from int64 to float64 may result in a loss of precision if int64 values are larger than
2**53.
Warning: StataWriter and to_stata() only support fixed width strings containing up to 244 characters,
a limitation imposed by the version 115 dta file format. Attempting to write Stata dta files with strings longer than
244 characters raises a ValueError.
The top-level function read_stata will read a dta file and return either a DataFrame or a StataReader that
can be used to read the file incrementally.
In [552]: pd.read_stata('stata.dta')
Out[552]:
index A B
0 0 1.810535 -1.305727
1 1 -0.344987 -0.230840
2 2 -2.793085 1.937529
3 3 0.366332 -1.044589
4 4 2.051173 0.585662
5 5 0.429526 -0.606998
Specifying a chunksize yields a StataReader instance that can be used to read chunksize lines from the file
at a time. The StataReader object can be used as an iterator.
In [553]: reader = pd.read_stata('stata.dta', chunksize=3)
For more fine-grained control, use iterator=True and specify chunksize with each call to read().
In [555]: reader = pd.read_stata('stata.dta', iterator=True)
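For example, a sketch of reading a few rows at a time from the iterator created above:

chunk1 = reader.read(5)   # first 5 rows
chunk2 = reader.read(5)   # next 5 rows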
Note: read_stata() and StataReader support .dta formats 113-115 (Stata 10-12), 117 (Stata 13), and 118
(Stata 14).
Note: Setting preserve_dtypes=False will upcast to the standard pandas data types: int64 for all integer
types and float64 for floating point data. By default, the Stata data types are preserved when importing.
Categorical data can be exported to Stata data files as value labeled data. The exported data consists of the
underlying category codes as integer data values and the categories as value labels. Stata does not have an explicit
equivalent to a Categorical and information about whether the variable is ordered is lost when exporting.
Warning: Stata only supports string value labels, and so str is called on the categories when exporting data.
Exporting Categorical variables with non-string categories produces a warning, and can result in a loss of infor-
mation if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical variables using the keyword argu-
ment convert_categoricals (True by default). The keyword argument order_categoricals (True by
default) determines whether imported Categorical variables are ordered.
Note: When importing categorical data, the values of the variables in the Stata data file are not preserved
since Categorical variables always use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required, these can be imported by setting
convert_categoricals=False, which will import original data (but not the variable labels). The original
values can be matched to the imported categorical data since there is a simple mapping between the original Stata
data values and the category codes of imported Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned 1 and so on until the largest original value is
assigned the code n-1.
Note: Stata supports partially labeled series. These series have value labels for some but not all data values. Importing
a partially labeled series will produce a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
The top-level function read_sas() can read (but not write) SAS xport (.XPT) and (since v0.18.0) SAS7BDAT
(.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point values (usually 8 bytes but sometimes truncated).
For xport files, there is no automatic type conversion to integers, dates, or categoricals. For SAS7BDAT files, the
format codes may allow date variables to be automatically converted to dates. By default the whole file is read and
returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader objects (XportReader or SAS7BDATReader)
for incrementally reading the file. The reader objects also have attributes that contain additional information about the
file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas('sas_data.sas7bdat')
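A sketch of incremental reading (the file name and the do_something callback are hypothetical):

rdr = pd.read_sas('sas_data.sas7bdat', chunksize=100000)
for chunk in rdr:
    do_something(chunk)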
The specification for the xport file format is available from the SAS web site.
No official documentation is available for the SAS7BDAT format.
pandas itself only supports IO with a limited set of file formats that map cleanly to its tabular data model. For reading
and writing other file formats into and from pandas, we recommend these packages from the broader community.
24.15.1 netCDF
xarray provides data structures inspired by the pandas DataFrame for working with multi-dimensional datasets, with
a focus on the netCDF file format and easy conversion to and from pandas.
This is an informal comparison of various IO methods, using pandas 0.20.3. Timings are machine dependent and small
differences should be ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
When writing, the top three functions in terms of speed are test_pickle_write, test_feather_write
and test_hdf_fixed_write_compress.
When reading, the top three are test_feather_read, test_pickle_read and test_hdf_fixed_read.
import os
import pandas as pd
import sqlite3
from numpy.random import randn
from pandas.io import sql
sz = 1000000
df = pd.DataFrame({'A': randn(sz), 'B': [1] * sz})
def test_sql_write(df):
    if os.path.exists('test.sql'):
        os.remove('test.sql')
    sql_db = sqlite3.connect('test.sql')
    df.to_sql(name='test_table', con=sql_db)
    sql_db.close()

def test_hdf_fixed_write(df):
    df.to_hdf('test_fixed.hdf', 'test', mode='w')

def test_hdf_fixed_read():
    pd.read_hdf('test_fixed.hdf', 'test')

def test_hdf_fixed_write_compress(df):
    df.to_hdf('test_fixed_compress.hdf', 'test', mode='w', complib='blosc')

def test_hdf_fixed_read_compress():
    pd.read_hdf('test_fixed_compress.hdf', 'test')

def test_hdf_table_write(df):
    df.to_hdf('test_table.hdf', 'test', mode='w', format='table')

def test_hdf_table_read():
    pd.read_hdf('test_table.hdf', 'test')

def test_hdf_table_write_compress(df):
    df.to_hdf('test_table_compress.hdf', 'test', mode='w', complib='blosc', format='table')

def test_hdf_table_read_compress():
    pd.read_hdf('test_table_compress.hdf', 'test')

def test_csv_write(df):
    df.to_csv('test.csv', mode='w')

def test_csv_read():
    pd.read_csv('test.csv', index_col=0)

def test_feather_write(df):
    df.to_feather('test.feather')

def test_feather_read():
    pd.read_feather('test.feather')

def test_pickle_write(df):
    df.to_pickle('test.pkl')

def test_pickle_read():
    pd.read_pickle('test.pkl')

def test_pickle_write_compress(df):
    df.to_pickle('test.pkl.compress', compression='xz')

def test_pickle_read_compress():
    pd.read_pickle('test.pkl.compress', compression='xz')
TWENTYFIVE
ENHANCING PERFORMANCE
In this part of the tutorial, we will investigate how to speed up certain functions operating on pandas DataFrames
using three different techniques: Cython, Numba and pandas.eval(). We will see a speed improvement of roughly
200x when we use Cython and Numba on a test function operating row-wise on the DataFrame. Using pandas.eval()
we will speed up a sum by a factor of roughly 2.
For many use cases writing pandas in pure Python and NumPy is sufficient. In some computationally heavy applica-
tions however, it can be possible to achieve sizeable speed-ups by offloading work to Cython.
This tutorial assumes you have refactored as much as possible in Python, for example by trying to remove for-loops
and making use of NumPy vectorization. It’s always worth optimising in Python first.
This tutorial walks through a “typical” process of cythonizing a slow computation. We use an example from the
Cython documentation but in the context of pandas. Our final cythonized solution is around 100 times faster than the
pure Python solution.
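For reference, the pure Python functions being cythonized look essentially like this (a sketch based on the Cython-documentation example the text references; they operate row-wise on the columns a, b and N of the DataFrame shown next):

def f(x):
    return x * (x - 1)

def integrate_f(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx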
In [2]: df
Out[2]:
a b N x
0 0.469112 -0.218470 585 x
1 -0.282863 -0.061645 841 x
2 -1.509059 -0.723780 251 x
3 -1.135632 0.551225 972 x
4 1.212112 -0.497767 181 x
5 -0.173215 0.837519 458 x
6 0.119209 1.103245 159 x
.. ... ... ... ..
993 0.131892 0.290162 190 x
994 0.342097 0.215341 931 x
But clearly this isn’t fast enough for us. Let’s take a look and see where the time is spent during this operation (limited
to the most time consuming four calls) using the prun ipython magic function:
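The profiled call is along these lines (a sketch using the integrate_f function above):

%prun -l 4 df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)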
By far the majority of time is spent inside either integrate_f or f, hence we’ll concentrate our efforts cythonizing
these two functions.
Note: In Python 2 replacing the range with its generator counterpart (xrange) would mean the range line would
vanish. In Python 3 range is already a generator.
First we’re going to need to import the Cython magic function to ipython:
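That is (assuming IPython with Cython installed):

%load_ext Cython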
Now, let’s simply copy our functions over to Cython as is (the suffix is here to distinguish between function versions):
In [7]: %%cython
...: def f_plain(x):
...:     return x * (x - 1)
...: def integrate_f_plain(a, b, N):
...:     s = 0
...:     dx = (b - a) / N
...:     for i in range(N):
...:         s += f_plain(a + i * dx)
...:     return s * dx
...:
Note: If you’re having trouble pasting the above into your ipython, you may need to be using bleeding edge ipython
for paste to play well with cell magics.
Already this has shaved a third off, not too bad for a simple copy and paste.
In [8]: %%cython
...: cdef double f_typed(double x) except? -2:
...:     return x * (x - 1)
...: cpdef double integrate_f_typed(double a, double b, int N):
...:     cdef int i
...:     cdef double s, dx
...:     s = 0
...:     dx = (b - a) / N
...:     for i in range(N):
...:         s += f_typed(a + i * dx)
...:     return s * dx
...:
Now, we’re talking! It’s now over ten times faster than the original python implementation, and we haven’t really
modified the code. Let’s have another look at what’s eating up time:
It’s calling Series... a lot! It’s creating a Series from each row, and calling get from both the index and the series (three
times for each row). Function calls are expensive in Python, so maybe we could minimize these by cythonizing the
apply part.
Note: We are now passing ndarrays into the Cython function, fortunately Cython plays very nicely with NumPy.
In [10]: %%cython
....: cimport numpy as np
....: import numpy as np
....: cdef double f_typed(double x) except? -2:
....:     return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....:     cdef int i
....:     cdef double s, dx
....:     s = 0
....:     dx = (b - a) / N
....:     for i in range(N):
....:         s += f_typed(a + i * dx)
....:     return s * dx
....: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b, np.ndarray col_N):
....:     cdef Py_ssize_t i, n = len(col_N)
....:     cdef np.ndarray[double] res = np.zeros(n)
....:     for i in range(n):
....:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....:     return res
....:
The implementation is simple, it creates an array of zeros and loops over the rows, applying our
integrate_f_typed, and putting this in the zeros array.
Warning: You cannot pass a Series directly as an ndarray-typed parameter to a Cython function. Instead
pass the actual ndarray using the .values attribute of the Series. The reason is that the Cython definition
is specific to an ndarray and not the passed Series.
So, do not do this:
apply_integrate_f(df['a'], df['b'], df['N'])
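Instead, pass the underlying NumPy arrays via .values, as the warning above describes:

apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)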
Note: Loops like this would be extremely slow in Python, but in Cython looping over NumPy arrays is fast.
We’ve gotten another big improvement. Let’s check again where the time is spent:
As one might expect, the majority of the time is now spent in apply_integrate_f, so if we wanted to make
any more improvements we must continue to concentrate our efforts here.
There is still hope for improvement. Here’s an example of using some more advanced Cython techniques:
In [12]: %%cython
....: cimport cython
....: cimport numpy as np
....: import numpy as np
....: cdef double f_typed(double x) except? -2:
....:     return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....:     cdef int i
....:     cdef double s, dx
....:     s = 0
....:     dx = (b - a) / N
....:     for i in range(N):
....:         s += f_typed(a + i * dx)
....:     return s * dx
....: @cython.boundscheck(False)
....: @cython.wraparound(False)
....: cpdef np.ndarray[double] apply_integrate_f_wrap(np.ndarray[double] col_a, np.ndarray[double] col_b, np.ndarray[int] col_N):
....:     cdef Py_ssize_t i, n = len(col_N)
....:     cdef np.ndarray[double] res = np.zeros(n)
....:     for i in range(n):
....:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....:     return res
....:
Even faster, with the caveat that a bug in our Cython code (an off-by-one error, for example) might cause a segfault
because memory access isn’t checked. For more about boundscheck and wraparound, see the Cython docs on
compiler directives.
A recent alternative to statically compiling Cython code is to use a dynamic jit-compiler, Numba.
Numba gives you the power to speed up your applications with high performance functions written directly in Python.
With a few annotations, array-oriented and math-heavy Python code can be just-in-time compiled to native machine
instructions, similar in performance to C, C++ and Fortran, without having to switch languages or Python interpreters.
Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime,
or statically (using the included pycc tool). Numba supports compilation of Python to run on either CPU or GPU
hardware, and is designed to integrate with the Python scientific software stack.
Note: You will need to install Numba. This is easy with conda, by using: conda install numba, see installing
using miniconda.
Note: As of Numba version 0.20, pandas objects cannot be passed directly to Numba-compiled functions. Instead,
one must pass the NumPy array underlying the pandas object to the Numba-compiled function as demonstrated below.
25.2.1 Jit
We demonstrate how to use Numba to just-in-time compile our code. We simply take the plain Python code from
above and annotate with the @jit decorator.
import numba

@numba.jit
def f_plain(x):
    return x * (x - 1)

@numba.jit
def integrate_f_numba(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f_plain(a + i * dx)
    return s * dx

@numba.jit
def apply_integrate_f_numba(col_a, col_b, col_N):
    n = len(col_N)
    result = np.empty(n, dtype='float64')
    assert len(col_a) == len(col_b) == n
    for i in range(n):
        result[i] = integrate_f_numba(col_a[i], col_b[i], col_N[i])
    return result

def compute_numba(df):
    result = apply_integrate_f_numba(df['a'].values, df['b'].values, df['N'].values)
    return pd.Series(result, index=df.index, name='result')
Note that we directly pass NumPy arrays to the Numba function. compute_numba is just a wrapper that provides a
nicer interface by passing/returning pandas objects.
25.2.2 Vectorize
Numba can also be used to write vectorized functions that do not require the user to explicitly loop over the observa-
tions of a vector; a vectorized function will be applied to each row automatically. Consider the following toy example
of doubling each observation:
import numba

def double_every_value_nonumba(x):
    return x*2

@numba.vectorize
def double_every_value_withnumba(x):
    return x*2
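Hypothetical usage on a DataFrame df with a numeric column a (the column name is an assumption):

df['doubled'] = double_every_value_withnumba(df['a'].values)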
25.2.3 Caveats
Note: Numba will execute on any function, but can only accelerate certain classes of functions.
Numba is best at accelerating functions that apply numerical functions to NumPy arrays. When passed a function that
only uses operations it knows how to accelerate, it will execute in nopython mode.
If Numba is passed a function that includes something it doesn’t know how to work with – a category that currently
includes sets, lists, dictionaries, or string functions – it will revert to object mode. In object mode, Numba
will execute but your code will not speed up significantly. If you would prefer that Numba throw an error if it cannot
compile a function in a way that speeds up your code, pass Numba the argument nopython=True (e.g. @numba.
jit(nopython=True)). For more on troubleshooting Numba modes, see the Numba troubleshooting page.
Read more in the Numba docs.
The top-level function pandas.eval() implements expression evaluation of Series and DataFrame objects.
Note: To benefit from using eval() you need to install numexpr. See the recommended dependencies section for
more details.
The point of using eval() for expression evaluation rather than plain Python is two-fold: 1) large DataFrame
objects are evaluated more efficiently and 2) large arithmetic and boolean expressions are evaluated all at once by the
underlying engine (by default numexpr is used for evaluation).
Note: You should not use eval() for simple expressions or for expressions involving small DataFrames. In fact,
eval() is many orders of magnitude slower for smaller expressions/objects than plain ol’ Python. A good rule of
thumb is to only use eval() when you have a DataFrame with more than 10,000 rows.
eval() supports all arithmetic expressions supported by the engine in addition to some extensions available only in
pandas.
Note: The larger the frame and the larger the expression the more speedup you will see from using eval().
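The frames compared below are assumed to have been created roughly like this (a sketch; the exact sizes are illustrative, but fairly large frames are needed to see a benefit):

import numpy as np
import pandas as pd

nrows, ncols = 20000, 100
df1, df2, df3, df4 = [pd.DataFrame(np.random.randn(nrows, ncols))
                      for _ in range(4)]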
Now let’s compare adding them together using plain ol’ Python versus eval():
In [17]: %timeit (df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)
31.6 ms +- 830 us per loop (mean +- std. dev. of 7 runs, 10 loops each)
In [18]: %timeit pd.eval('(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)')
14.2 ms +- 775 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [19]: s = pd.Series(np.random.randn(50))
Operations involving plain Python scalars, such as 1 and 2, should be performed in Python. An exception will be raised if you try to perform any boolean/bitwise operations with
scalar operands that are not of type bool or np.bool_. Again, you should perform these kinds of operations in
plain Python.
In addition to the top level pandas.eval() function you can also evaluate an expression in the “context” of a
DataFrame.
Any expression that is a valid pandas.eval() expression is also a valid DataFrame.eval() expression, with
the added benefit that you don’t have to prefix the name of the DataFrame to the column(s) you’re interested in
evaluating.
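A minimal sketch:

df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
df.eval('a + b')            # equivalent to pd.eval('df.a + df.b')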
In addition, you can perform assignment of columns within an expression. This allows for formulaic evaluation. The
assignment target can be a new column name or an existing column name, and it must be a valid Python identifier.
New in version 0.18.0.
The inplace keyword determines whether this assignment will be performed on the original DataFrame or return a
copy with the new column.
Warning: For backwards compatibility, inplace defaults to True if not specified. This will change in a
future version of pandas - if your code depends on an inplace assignment you should update to explicitly set
inplace=True.
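The frame shown below could have been produced by assignments along these lines (a sketch; the initial frame is an assumption):

df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
df.eval('c = a + b', inplace=True)
df.eval('d = a + b + c', inplace=True)
df.eval('a = 1', inplace=True)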
In [28]: df
Out[28]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
When inplace is set to False, a copy of the DataFrame with the new or modified columns is returned and the
original frame is unchanged.
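For example (a sketch continuing from the frame above):

new_df = df.eval('e = a - c', inplace=False)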
In [29]: df
Out[29]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
a b c d e
0 1 5 5 10 -4
1 1 6 7 14 -6
2 1 7 9 18 -8
3 1 8 11 22 -10
4 1 9 13 26 -12
In [31]: df
Out[31]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [36]: df['a'] = 1
In [37]: df
Out[37]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [41]: df
Out[41]:
a b
3 3 8
4 4 9
Warning: Unlike with eval, the default value for inplace for query is False. This is consistent with prior
versions of pandas.
You must explicitly reference any local variable that you want to use in an expression by placing the @ character in
front of the name. For example,
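A sketch of such a query (newcol is a hypothetical local variable; output of this kind is shown below):

newcol = np.random.randn(len(df))
df.query('b < @newcol')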
a b
0 0.863987 -0.115998
2 -2.621419 -1.297879
If you don’t prefix the local variable with @, pandas will raise an exception telling you the variable is undefined.
When using DataFrame.eval() and DataFrame.query(), this allows you to have a local variable and a
DataFrame column with the same name in an expression.
In [46]: a = np.random.randn()
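For example (a sketch, assuming a frame df with columns a and b):

df.query('@a < a')      # the local variable a versus the column a
df.loc[a < df.a]        # the same expression in plain Python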
With pandas.eval() you cannot use the @ prefix at all, because it isn’t defined in that context. pandas will let
you know this if you try to use @ in a top-level call to pandas.eval(). For example,
In [49]: a, b = 1, 2
File "/opt/conda/envs/pandas/lib/python3.6/site-packages/IPython/core/
˓→ interactiveshell.py", line 2961, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
In this case, you should simply refer to the variables like you would in standard Python.
There are two different parsers and two different engines you can use as the backend.
The default 'pandas' parser allows a more intuitive syntax for expressing query-like operations (comparisons,
conjunctions and disjunctions). In particular, the precedence of the & and | operators is made equal to the precedence
of the corresponding boolean operations and and or.
For example, the above conjunction can be written without parentheses. Alternatively, you can use the 'python'
parser to enforce strict Python semantics.
In [52]: expr = '(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)'
In [53]: x = pd.eval(expr, parser='python')
In [54]: expr_no_parens = 'df1 > 0 & df2 > 0 & df3 > 0 & df4 > 0'
In [55]: y = pd.eval(expr_no_parens, parser='pandas')
In [56]: np.all(x == y)
Out[56]: True
The same expression can be “anded” together with the word and as well:
In [57]: expr = '(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)'
In [58]: x = pd.eval(expr, parser='python')
In [59]: expr_with_ands = 'df1 > 0 and df2 > 0 and df3 > 0 and df4 > 0'
In [60]: y = pd.eval(expr_with_ands, parser='pandas')
In [61]: np.all(x == y)
Out[61]: True
The and and or operators here have the same precedence that they would in vanilla Python.
There’s also the option to make eval() operate identically to plain ol’ Python.
Note: Using the 'python' engine is generally not useful, except for testing other evaluation engines against it. You
will achieve no performance benefits using eval() with engine='python' and in fact may incur a performance
hit.
You can see this by using pandas.eval() with the 'python' engine. It is a bit slower (not by much) than
evaluating the same expression in Python.
eval() is intended to speed up certain kinds of operations. In particular, those operations involving complex expres-
sions with large DataFrame/Series objects should see a significant performance benefit. Here is a plot showing
the running time of pandas.eval() as a function of the size of the frame involved in the computation. The two lines
are two different engines.
Note: Operations with smallish objects (around 15k-20k rows) are faster using plain Python:
This plot was created using a DataFrame with 3 columns each containing floating point values generated using
numpy.random.randn().
Expressions that would result in an object dtype or involve datetime operations (because of NaT) must be evaluated
in Python space. The main reason for this behavior is to maintain backwards compatibility with versions of NumPy <
1.7. In those versions of NumPy a call to ndarray.astype(str) will truncate any strings that are more than 60
characters in length. Second, we can’t pass object arrays to numexpr thus string comparisons must be evaluated
in Python space.
The upshot is that this only applies to object-dtype expressions. So, if you have an expression like the following, for example:
In [65]: df
Out[65]:
strings nums
0 c 0
1 c 0
2 c 0
3 b 1
4 b 1
5 b 1
6 a 2
7 a 2
8 a 2
Empty DataFrame
Columns: [strings, nums]
Index: []
TWENTYSIX
SPARSE DATA STRUCTURES
We have implemented “sparse” versions of Series and DataFrame. These are not sparse in the typical “mostly 0” sense.
Rather, you can view these objects as being “compressed” where any data matching a specific value (NaN / missing
value, though any value can be chosen) is omitted. A special SparseIndex object tracks where data has been
“sparsified”. This will make much more sense with an example. All of the standard pandas data structures have a
to_sparse method:
In [1]: ts = pd.Series(randn(10))
In [2]: ts[2:-2] = np.nan
In [3]: sts = ts.to_sparse()
In [4]: sts
Out[4]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
The to_sparse method takes a kind argument (for the sparse index, see below) and a fill_value. So if we
had a mostly zero Series, we could convert it to sparse with fill_value=0:
In [5]: ts.fillna(0).to_sparse(fill_value=0)
Out[5]:
0 0.469112
1 -0.282863
2 0.000000
3 0.000000
4 0.000000
5 0.000000
6 0.000000
7 0.000000
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
The sparse objects exist for memory efficiency reasons. Suppose you had a large, mostly NA DataFrame:
In [6]: df = pd.DataFrame(randn(10000, 4))
In [7]: df.iloc[:9998] = np.nan
In [8]: sdf = df.to_sparse()
In [9]: sdf
Out[9]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 NaN NaN NaN NaN
... ... ... ... ...
9993 NaN NaN NaN NaN
9994 NaN NaN NaN NaN
9995 NaN NaN NaN NaN
9996 NaN NaN NaN NaN
9997 NaN NaN NaN NaN
9998 0.509184 -0.774928 -1.369894 -0.382141
9999 0.280249 -1.648493 1.490865 -0.890819
In [10]: sdf.density
Out[10]: 0.0002
As you can see, the density (% of values that have not been “compressed”) is extremely low. This sparse object takes
up much less memory on disk (pickled) and in the Python interpreter. Functionally, their behavior should be nearly
identical to their dense counterparts.
Any sparse object can be converted back to the standard dense form by calling to_dense:
In [11]: sts.to_dense()
Out[11]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
26.1 SparseArray
SparseArray is the base layer for all of the sparse indexed data structures. It is a 1-dimensional ndarray-like object
storing only values distinct from the fill_value:
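A construction sketch consistent with the output below (the values themselves are random):

arr = np.random.randn(10)
arr[2:5] = np.nan
arr[7:8] = np.nan
sparr = pd.SparseArray(arr)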
In [15]: sparr
Out[15]:
[-1.9556635297215477, -1.6588664275960427, nan, nan, nan, 1.1589328886422277, 0.
˓→14529711373305043, nan, 0.6060271905134522, 1.3342113401317768]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
Like the indexed objects (SparseSeries, SparseDataFrame), a SparseArray can be converted back to a regular
ndarray by calling to_dense:
In [16]: sparr.to_dense()
Out[16]:
array([-1.9557, -1.6589, nan, nan, nan, 1.1589, 0.1453,
nan, 0.606 , 1.3342])
Two kinds of SparseIndex are implemented, block and integer. We recommend using block as it’s more
memory efficient. The integer format keeps an array of all of the locations where the data are not equal to the fill
value. The block format tracks only the locations and sizes of blocks of data.
Sparse data should have the same dtype as its dense representation. Currently, float64, int64 and bool dtypes
are supported. Depending on the original dtype, fill_value default changes:
• float64: np.nan
• int64: 0
• bool: False
In [18]: s
Out[18]:
0 1.0
1 NaN
2 NaN
dtype: float64
In [19]: s.to_sparse()
Out[19]:
0 1.0
1 NaN
2 NaN
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [21]: s
Out[21]:
0 1
1 0
2 0
dtype: int64
In [22]: s.to_sparse()
Out[22]:
0 1
1 0
2 0
dtype: int64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [24]: s
Out[24]:
0 True
1 False
2 True
dtype: bool
In [25]: s.to_sparse()
Out[25]:
0 True
1 False
2 True
dtype: bool
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
You can change the dtype using .astype(); the result is also sparse. Note that .astype() also converts the fill_value so that it stays consistent with the new dtype.
In [27]: s
Out[27]:
0 1
1 0
2 0
3 0
4 0
dtype: int64
In [28]: ss = s.to_sparse()
In [29]: ss
Out[29]:
0 1
1 0
2 0
3 0
4 0
dtype: int64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [30]: ss.astype(np.float64)
Out[30]:
0 1.0
1 0.0
2 0.0
3 0.0
4 0.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [2]: ss.astype(np.int64)
ValueError: unable to coerce current fill_value nan to int64 dtype
You can apply NumPy ufuncs to SparseArray and get a SparseArray as a result.
In [32]: np.abs(arr)
Out[32]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)
The ufunc is also applied to fill_value. This is needed to get the correct dense result.
In [34]: np.abs(arr)
Out[34]:
[1.0, 1.0, 1.0, 2.0, 1.0]
Fill: 1
IntIndex
Indices: array([0, 3], dtype=int32)
In [35]: np.abs(arr).to_dense()
Out[35]: array([ 1.,  1.,  1.,  2.,  1.])
26.5.1 SparseDataFrame
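The SparseDataFrame below is assumed to have been built from a SciPy sparse matrix roughly like this (a sketch):

from scipy.sparse import csr_matrix

arr = np.random.random(size=(1000, 5))
arr[arr < 0.9] = 0                  # make the data mostly zero
sp_arr = csr_matrix(arr)            # SciPy CSR matrix
sdf = pd.SparseDataFrame(sp_arr)    # construct a SparseDataFrame from it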
In [40]: sp_arr
Out[40]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 517 stored elements in Compressed Sparse Row format>
In [42]: sdf
Out[42]:
0 1 2 3 4
All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying
data as needed. To convert a SparseDataFrame back to a sparse SciPy matrix in COO format, you can use the
SparseDataFrame.to_coo() method:
In [43]: sdf.to_coo()
Out[43]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 517 stored elements in COOrdinate format>
26.5.2 SparseSeries
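The SparseSeries examples below assume a MultiIndexed Series built roughly like this (a sketch matching the output shown):

s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
s.index = pd.MultiIndex.from_tuples(
    [(1, 2, 'a', 0), (1, 2, 'a', 1), (1, 1, 'b', 0),
     (1, 1, 'b', 1), (2, 1, 'b', 0), (2, 1, 'b', 1)],
    names=['A', 'B', 'C', 'D'])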
In [46]: s
Out[46]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
# SparseSeries
In [47]: ss = s.to_sparse()
In [48]: ss
Out[48]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)
In the example below, we transform the SparseSeries to a sparse representation of a 2-d array by specifying that
the first and second MultiIndex levels define labels for the rows and the third and fourth levels define labels for the
columns. We also specify that the column and row labels should be sorted in the final sparse representation.
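A sketch of that call (SparseSeries.to_coo() returns the sparse matrix plus the row and column labels):

A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
                             column_levels=['C', 'D'],
                             sort_labels=True)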
In [50]: A
Out[50]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [51]: A.todense()
Out[51]:
matrix([[ 0.,  0.,  1.,  3.],
        [ 3.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]])
In [52]: rows
Out[52]: [(1, 1), (1, 2), (2, 1)]
In [53]: columns
Out[53]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
Specifying different row and column labels (and not sorting them) yields a different sparse matrix:
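For example (a sketch matching the rows and columns shown below):

A, rows, columns = ss.to_coo(row_levels=['A', 'B', 'C'],
                             column_levels=['D'],
                             sort_labels=False)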
In [55]: A
Out[55]:
<3x2 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [56]: A.todense()
Out[56]:
matrix([[ 3.,  0.],
        [ 1.,  3.],
        [ 0.,  0.]])
In [57]: rows
Out[57]: [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]
In [58]: columns
Out[58]: [0, 1]
In [61]: A
Out[61]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [62]: A.todense()
Out[62]:
matrix([[ 0.,  0.,  1.,  2.],
        [ 3.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]])
The default behaviour (with dense_index=False) simply returns a SparseSeries containing only the non-
null entries.
In [63]: ss = pd.SparseSeries.from_coo(A)
In [64]: ss
Out[64]:
0 2 1.0
3 2.0
1 0 3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
Specifying dense_index=True will result in an index that is the Cartesian product of the row and columns coordi-
nates of the matrix. Note that this will consume a significant amount of memory (relative to dense_index=False)
if the sparse matrix is large (and sparse) enough.
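For example (a sketch):

ss_dense = pd.SparseSeries.from_coo(A, dense_index=True)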
In [66]: ss_dense
Out[66]:
0 0 NaN
1 NaN
2 1.0
3 2.0
1 0 3.0
1 NaN
2 NaN
3 NaN
2 0 NaN
1 NaN
2 NaN
3 NaN
dtype: float64
BlockIndex
Block locations: array([2], dtype=int32)
Block lengths: array([3], dtype=int32)
TWENTYSEVEN
The memory usage of a DataFrame (including the index) is shown when calling info(). A configuration
option, display.memory_usage (see the list of options), specifies if the DataFrame’s memory usage will be
displayed when invoking the df.info() method.
For example, the memory usage of the DataFrame below is shown when calling info():
In [2]: n = 5000
In [4]: df = pd.DataFrame(data)
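The data dict and the extra categorical column, not captured above, would look roughly like this (a sketch consistent with the info() output below):

dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
          'complex128', 'object', 'bool']
data = {t: np.random.randint(100, size=n).astype(t) for t in dtypes}
df['categorical'] = df['object'].astype('category')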
In [6]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
int64 5000 non-null int64
float64 5000 non-null float64
datetime64[ns] 5000 non-null datetime64[ns]
timedelta64[ns] 5000 non-null timedelta64[ns]
complex128 5000 non-null complex128
object 5000 non-null object
bool 5000 non-null bool
categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1),
˓→object(1), timedelta64[ns](1)
The + symbol indicates that the true memory usage could be higher, because pandas does not count the memory used
by values in columns with dtype=object.
Passing memory_usage='deep' will enable a more accurate memory usage report, accounting for the full usage
of the contained objects. This is optional as it can be expensive to do this deeper introspection.
In [7]: df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
int64 5000 non-null int64
float64 5000 non-null float64
datetime64[ns] 5000 non-null datetime64[ns]
timedelta64[ns] 5000 non-null timedelta64[ns]
complex128 5000 non-null complex128
object 5000 non-null object
bool 5000 non-null bool
categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1),
˓→object(1), timedelta64[ns](1)
By default the display option is set to True but can be explicitly overridden by passing the memory_usage argument
when invoking df.info().
The memory usage of each column can be found by calling the memory_usage() method. This returns a Series
with an index represented by column names and memory usage of each column shown in bytes. For the DataFrame
above, the memory usage of each column and the total memory usage can be found with the memory_usage method:
In [8]: df.memory_usage()
Out[8]:
Index 80
int64 40000
float64 40000
datetime64[ns] 40000
timedelta64[ns] 40000
complex128 80000
object 40000
bool 5000
categorical 10920
dtype: int64
By default the memory usage of the DataFrame’s index is shown in the returned Series; the memory usage of the
index can be suppressed by passing the index=False argument:
In [10]: df.memory_usage(index=False)
Out[10]:
int64 40000
float64 40000
datetime64[ns] 40000
timedelta64[ns] 40000
complex128 80000
object 40000
bool 5000
categorical 10920
dtype: int64
The memory usage displayed by the info() method utilizes the memory_usage() method to determine the mem-
ory usage of a DataFrame while also formatting the output in human-readable units (base-2 representation; i.e. 1KB
= 1024 bytes).
See also Categorical Memory Usage.
pandas follows the NumPy convention of raising an error when you try to convert something to a bool. This happens
in an if-statement or when using the boolean operations: and, or, and not. It is not clear what the result of the
following code should be:
>>> if pd.Series([False, True, False]):
...     print("I was true")
Should it be True because it’s not zero-length, or False because there are False values? It is unclear, so instead,
pandas raises a ValueError:
You need to explicitly choose what you want to do with the DataFrame, e.g. use any(), all() or empty().
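For example (a sketch, for a numeric DataFrame df):

if df.empty:
    print('DataFrame is empty')
if (df > 0).all().all():
    print('every value is positive')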
Alternatively, you might want to compare if the pandas object is None:
To evaluate single-element pandas objects in a boolean context, use the method bool():
In [11]: pd.Series([True]).bool()
Out[11]: True
In [12]: pd.Series([False]).bool()
Out[12]: False
In [13]: pd.DataFrame([[True]]).bool()
Out[13]: True
In [14]: pd.DataFrame([[False]]).bool()
Out[14]: False
Comparison operators like == and != return a boolean Series, which is almost always what you want anyway.
>>> s = pd.Series(range(5))
>>> s == 4
0 False
1 False
2 False
3 False
4 True
dtype: bool
Using the Python in operator on a Series tests for membership in the index, not membership among the values.
In [16]: 2 in s
Out[16]: False
In [17]: 'b' in s
Out[17]: True
If this behavior is surprising, keep in mind that using in on a Python dictionary tests keys, not values, and Series
are dict-like. To test for membership in the values, use the method isin():
In [18]: s.isin([2])
Out[18]:
a False
b False
c True
d False
e False
dtype: bool
In [19]: s.isin([2]).any()
Out[19]: True
For DataFrames, likewise, in applies to the column axis, testing for membership in the list of column names.
For lack of NA (missing) support from the ground up in NumPy and Python in general, we were given the difficult
choice between either:
• A masked array solution: an array of data and an array of boolean values indicating whether a value is there or
is missing.
• Using a special sentinel value, bit pattern, or set of sentinel values to denote NA across the dtypes.
For many reasons we chose the latter. After years of production use it has proven, at least in my opinion, to be the best
decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used
everywhere as the NA value, and there are API functions isna and notna which can be used across the dtypes to
detect NA values.
However, it comes with it a couple of trade-offs which I most certainly have not ignored.
In the absence of high performance NA support being built into NumPy from the ground up, the primary casualty is
the ability to represent NAs in integer arrays. For example:
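The Series below is assumed to have been built like this (a sketch):

s = pd.Series([1, 2, 3, 4, 5], index=list('abcde'))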
In [21]: s
Out[21]:
a 1
b 2
c 3
d 4
e 5
dtype: int64
In [22]: s.dtype
Out[22]: dtype('int64')
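The NA-introducing reindex, omitted above, would look like this (a sketch):

s2 = s.reindex(['a', 'b', 'c', 'f', 'u'])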
In [24]: s2
Out[24]:
a 1.0
b 2.0
c 3.0
f NaN
u NaN
dtype: float64
In [25]: s2.dtype
Out[25]: dtype('float64')
This trade-off is made largely for memory and performance reasons, and also so that the resulting Series continues
to be “numeric”. One possibility is to use dtype=object arrays instead.
When introducing NAs into an existing Series or DataFrame via reindex() or some other means, boolean and
integer types will be promoted to a different dtype in order to store the NAs. The promotions are summarized in this
table:
Typeclass      Promotion dtype for storing NAs
floating       no change
object         no change
integer        cast to float64
boolean        cast to object
While this may seem like a heavy trade-off, I have found very few cases where this is an issue in practice, i.e. storing
values greater than 2**53. Some explanation for the motivation is in the next section.
Many people have suggested that NumPy should simply emulate the NA support present in the more domain-specific
statistical programming language R. Part of the reason is the NumPy type hierarchy:
Typeclass Dtypes
numpy.floating float16, float32, float64, float128
numpy.integer int8, int16, int32, int64
numpy.unsignedinteger uint8, uint16, uint32, uint64
numpy.object_ object_
numpy.bool_ bool_
numpy.character string_, unicode_
The R language, by contrast, only has a handful of built-in data types: integer, numeric (floating-point),
character, and boolean. NA types are implemented by reserving special bit patterns for each type to be used
as the missing value. While doing this with the full NumPy type hierarchy would be possible, it would be a more
substantial trade-off (especially for the 8- and 16-bit data types) and implementation undertaking.
An alternate approach is that of using masked arrays. A masked array is an array of data with an associated boolean
mask denoting whether each value should be considered NA or not. I am personally not in love with this approach as I
feel that overall it places a fairly heavy burden on the user and the library implementer. Additionally, it exacts a fairly
high performance cost when working with numerical data compared with the simple approach of using NaN. Thus,
I have chosen the Pythonic “practicality beats purity” approach and traded integer NA capability for a much simpler
approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when
NAs must be introduced.
For Series and DataFrame objects, var() normalizes by N-1 to produce unbiased estimates of the sample vari-
ance, while NumPy’s var normalizes by N, which measures the variance of the sample. Note that cov() normalizes
by N-1 in both pandas and NumPy.
27.5 Thread-safety
As of pandas 0.11, pandas is not 100% thread safe. The known issues relate to the copy() method. If you are doing
a lot of copying of DataFrame objects shared among threads, we recommend holding locks inside the threads where
the data copying occurs.
See this link for more information.
Occasionally you may have to deal with data that were created on a machine with a different byte order than the one
on which you are running Python. A common symptom of this issue is an error like:
Traceback
...
ValueError: Big-endian buffer not supported on little-endian compiler
To deal with this issue you should convert the underlying NumPy array to the native system byte order before passing
it to Series or DataFrame constructors using something similar to the following:
In [26]: x = np.array(list(range(10)), '>i4')  # big endian
In [27]: newx = x.byteswap().newbyteorder()  # force native byteorder
In [28]: s = pd.Series(newx)
TWENTYEIGHT
RPY2 / R INTERFACE
Warning: Up to pandas 0.19, a pandas.rpy module existed with functionality to convert between pandas and
rpy2 objects. This functionality now lives in the rpy2 project itself. See the updating section of the previous
documentation for a guide to port your code from the removed pandas.rpy to rpy2 functions.
rpy2 is an interface to R running embedded in a Python process, and also includes functionality to deal with pan-
das DataFrames. Converting data frames back and forth between rpy2 and pandas should be largely automated (no
need to convert explicitly, it will be done on the fly in most rpy2 functions). To convert explicitly, the functions are
pandas2ri.py2ri() and pandas2ri.ri2py().
See also the documentation of the rpy2 project: https://rpy2.readthedocs.io.
In the remainder of this page, a few examples of explicit conversion are given. The pandas conversion of rpy2 needs
first to be activated:
In [1]: from rpy2.robjects import pandas2ri
In [2]: pandas2ri.activate()
Once the pandas conversion is activated (pandas2ri.activate()), many conversions of R to pandas objects will
be done automatically. For example, to obtain the ‘iris’ dataset as a pandas DataFrame:
In [3]: r.data('iris')
In [4]: r['iris'].head()
If the pandas conversion was not activated, the above could also be accomplished by explicitly converting it with the
pandas2ri.ri2py function (pandas2ri.ri2py(r['iris'])).
The pandas2ri.py2ri function supports the reverse operation to convert DataFrames into the equivalent R object
(that is, data.frame):
In [5]: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C':[7,8,9]},
...: index=["one", "two", "three"])
...:
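The conversion itself, omitted above, would be (a sketch):

r_dataframe = pandas2ri.py2ri(df)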
In [7]: print(type(r_dataframe))
In [8]: print(r_dataframe)
The DataFrame’s index is stored as the rownames attribute of the data.frame instance.
TWENTYNINE
PANDAS ECOSYSTEM
Increasingly, packages are being built on top of pandas to address specific needs in data preparation, analysis and
visualization. This is encouraging because it means pandas is not only helping users to handle their data tasks but also
that it provides a better starting point for developers to build powerful and more focused data tools. The creation of
libraries that complement pandas’ functionality also allows pandas development to remain focused around its original
requirements.
This is a non-exhaustive list of projects that build on pandas in order to provide tools in the PyData space.
We’d like to make it easier for users to find these projects; if you know of other substantial projects that you feel should
be on this list, please let us know.
29.1.1 Statsmodels
Statsmodels is the prominent Python “statistics and econometrics library” and it has a long-standing special relation-
ship with pandas. Statsmodels provides powerful statistics, econometrics, analysis and modeling functionality that is
out of pandas’ scope. Statsmodels leverages pandas objects as the underlying data container for computation.
29.1.2 sklearn-pandas
29.1.3 Featuretools
Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming
temporal and relational datasets into feature matrices for machine learning using reusable feature engineering “primi-
tives”. Users can contribute their own primitives in Python and share them with the rest of the community.
29.2 Visualization
29.2.1 Bokeh
Bokeh is a Python interactive visualization library for large datasets that natively uses the latest web technologies.
Its goal is to provide elegant, concise construction of novel graphics in the style of Protovis/D3, while delivering
high-performance interactivity over large data to thin clients.
29.2.2 seaborn
Seaborn is a Python visualization library based on matplotlib. It provides a high-level, dataset-oriented interface for
creating attractive statistical graphics. The plotting functions in seaborn understand pandas objects and leverage pandas
grouping operations internally to support concise specification of complex visualizations. Seaborn also goes beyond
matplotlib and pandas with the option to perform statistical estimation while plotting, aggregating across observations
and visualizing the fit of statistical models to emphasize patterns in a dataset.
29.2.3 yhat/ggplot
Hadley Wickham’s ggplot2 is a foundational exploratory visualization package for the R language. Based on “The
Grammar of Graphics” it provides a powerful, declarative and extremely general way to generate bespoke plots of
any kind of data. It’s really quite incredible. Various implementations to other languages are available, but a faithful
implementation for Python users has long been missing. Although still young (as of Jan-2014), the yhat/ggplot project
has been progressing quickly in that direction.
29.2.4 Vincent
The Vincent project leverages Vega (that in turn, leverages d3) to create plots. Although functional, as of Summer
2016 the Vincent project has not been updated in over two years and is unlikely to receive further updates.
29.2.5 IPython Vega
Like Vincent, the IPython Vega project leverages Vega to create plots, but primarily targets the IPython Notebook
environment.
29.2.6 Plotly
Plotly’s Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are
rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based
collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based
plots. Plots can be drawn in IPython Notebooks , edited with R or MATLAB, modified in a GUI, or embedded in apps
and dashboards. Plotly is free for unlimited sharing, and has cloud, offline, or on-premise accounts for private use.
29.2.7 QtPandas
Spun off from the main pandas library, the qtpandas library enables DataFrame visualization and manipulation in
PyQt4 and PySide applications.
29.3 IDE
29.3.1 IPython
IPython is an interactive command shell and distributed computing environment. IPython Notebook is a web appli-
cation for creating IPython notebooks. An IPython notebook is a JSON document containing an ordered list of in-
put/output cells which can contain code, text, mathematics, plots and rich media. IPython notebooks can be converted
to a number of open standard output formats (HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText,
Markdown, Python) through ‘Download As’ in the web interface and ipython nbconvert in a shell.
Pandas DataFrames implement _repr_html_ methods which are utilized by IPython Notebook for displaying (ab-
breviated) HTML tables. (Note: HTML tables may or may not be compatible with non-HTML IPython output for-
mats.)
29.3.2 quantopian/qgrid
qgrid is “an interactive grid for sorting and filtering DataFrames in IPython Notebook” built with SlickGrid.
29.3.3 Spyder
Spyder is a cross-platform Qt-based open-source Python IDE with editing, testing, debugging, and introspection fea-
tures. Spyder can now introspect and display Pandas DataFrames and show both “column wise min/max and global
min/max coloring.”
29.4 API
29.4.1 pandas-datareader
pandas-datareader is a remote data access library for pandas; it provides the functionality that formerly lived in
pandas.io.data, split off into its own package.
29.4.2 quandl/Python
Quandl API for Python wraps the Quandl REST API to return Pandas DataFrames with timeseries indexes.
29.4.3 pydatastream
PyDatastream is a Python interface to the Thomson Dataworks Enterprise (DWE/Datastream) SOAP API to return
indexed Pandas DataFrames or Panels with financial data. This package requires valid credentials for this API (non
free).
29.4.4 pandaSDMX
pandaSDMX is a library to retrieve and acquire statistical data and metadata disseminated in SDMX 2.1, an ISO-
standard widely used by institutions such as statistics offices, central banks, and international organisations. pandaS-
DMX can expose datasets and related structural metadata including dataflows, code-lists, and datastructure definitions
as pandas Series or multi-indexed DataFrames.
29.4.5 fredapi
fredapi is a Python interface to the Federal Reserve Economic Data (FRED) provided by the Federal Reserve Bank of
St. Louis. It works with both the FRED database and ALFRED database that contains point-in-time data (i.e. historic
data revisions). fredapi provides a wrapper in Python to the FRED HTTP API, and also provides several convenient
methods for parsing and analyzing point-in-time data from ALFRED. fredapi makes use of pandas and returns data in
a Series or DataFrame. This module requires a FRED API key that you can obtain for free on the FRED website.
29.5 Domain Specific
29.5.1 Geopandas
Geopandas extends pandas data objects to include geographic information which support geometric operations. If your
work entails maps and geographical coordinates, and you love pandas, you should take a close look at Geopandas.
29.5.2 xarray
xarray brings the labeled data power of pandas to the physical sciences by providing N-dimensional variants of the
core pandas data structures. It aims to provide a pandas-like and pandas-compatible toolkit for analytics on multi-
dimensional arrays, rather than the tabular data for which pandas excels.
29.6 Out-of-core
29.6.1 Dask
Dask is a flexible parallel computing library for analytics. Dask provides a familiar DataFrame interface for out-of-
core, parallel and distributed computing.
29.6.2 Dask-ML
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries
like Scikit-Learn, XGBoost, and TensorFlow.
29.6.3 Blaze
Blaze provides a standard API for doing computations with various in-memory and on-disk backends: NumPy, Pandas,
SQLAlchemy, MongoDB, PyTables, PySpark.
29.6.4 Odo
Odo provides a uniform API for moving data between different formats. It uses pandas own read_csv for CSV
IO and leverages many existing packages such as PyTables, h5py, and pymongo to move data between non pandas
formats. Its graph based approach is also extensible by end users for custom formats that may be too specific for the
core of odo.
29.7 Data validation
29.7.1 Engarde
Engarde is a lightweight library used to explicitly state your assumptions about your datasets and check that they're
actually true.
29.8 Extension Data Types
Pandas provides an interface for defining extension types to extend NumPy's type system. The following libraries im-
plement that interface to provide types not found in NumPy or pandas, which work well with pandas’ data containers.
29.8.1 cyberpandas
Cyberpandas provides an extension type for storing arrays of IP Addresses. These arrays can be stored inside pandas’
Series and DataFrame.
29.9 Accessors
A directory of projects providing extension accessors. This is for users to discover new accessors and for library
authors to coordinate on the namespace.
THIRTY
COMPARISON WITH R / R LIBRARIES
Since pandas aims to provide a lot of the data manipulation and analysis functionality that people use R for, this
page was started to provide a more detailed look at the R language and its many third party libraries as they relate to
pandas. In comparisons with R and CRAN libraries, we care about the following things:
• Functionality / flexibility: what can/cannot be done with each tool
• Performance: how fast are operations. Hard numbers/benchmarks are preferable
• Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code
comparisons)
This page is also here to offer a bit of a translation guide for users of these R packages.
For transfer of DataFrame objects from pandas to R, one option is to use HDF5 files; see External Compatibility
for an example.
30.1 Quick Reference
We’ll start off with a quick reference guide pairing some common R operations using dplyr with pandas equivalents.
30.1.1 Querying, Filtering, Sampling
R pandas
dim(df) df.shape
head(df) df.head()
slice(df, 1:10) df.iloc[:9]
filter(df, col1 == 1, col2 == 1) df.query('col1 == 1 & col2 == 1')
df[df$col1 == 1 & df$col2 == 1,] df[(df.col1 == 1) & (df.col2 == 1)]
select(df, col1, col2) df[['col1', 'col2']]
select(df, col1:col3) df.loc[:, 'col1':'col3']
select(df, -(col1:col3)) df.drop(cols_to_drop, axis=1) (but see footnote 1)
distinct(select(df, col1)) df[['col1']].drop_duplicates()
distinct(select(df, col1, col2)) df[['col1', 'col2']].drop_duplicates()
sample_n(df, 10) df.sample(n=10)
sample_frac(df, 0.01) df.sample(frac=0.01)
1 R’s shorthand for a subrange of columns (select(df, col1:col3)) can be approached cleanly in pandas, if you have the list of columns,
for example df[cols[1:3]] or df.drop(cols[1:3]), but doing this by column name is a bit messy.
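For illustration, a minimal sketch of a few of these pandas equivalents on a small, made-up frame (the column names
are just placeholders):
import pandas as pd

df = pd.DataFrame({'col1': [1, 1, 2, 2],
                   'col2': [1, 2, 1, 1],
                   'col3': ['a', 'b', 'c', 'd']})
df.query('col1 == 1 & col2 == 1')        # filter(df, col1 == 1, col2 == 1)
df[['col1', 'col2']]                     # select(df, col1, col2)
df[['col1', 'col2']].drop_duplicates()   # distinct(select(df, col1, col2))
df.sample(n=2)                           # sample_n(df, 2)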
30.1.2 Sorting
R pandas
arrange(df, col1, col2) df.sort_values(['col1', 'col2'])
arrange(df, desc(col1)) df.sort_values('col1', ascending=False)
30.1.3 Transforming
R pandas
select(df, col_one = col1) df.rename(columns={'col1': 'col_one'})['col_one']
rename(df, col_one = col1) df.rename(columns={'col1': 'col_one'})
mutate(df, c=a-b) df.assign(c=df.a-df.b)
30.1.4 Grouping and Summarizing
R pandas
summary(df) df.describe()
gdf <- group_by(df, col1) gdf = df.groupby('col1')
summarise(gdf, avg=mean(col1, na.rm=TRUE)) df.groupby('col1').agg({'col1': 'mean'})
summarise(gdf, total=sum(col1)) df.groupby('col1').sum()
30.2 Base R
30.2.1 Slicing with R's c
R selects data.frame columns by name, or by integer location:
df <- data.frame(matrix(rnorm(1000), ncol=100))
df[, c(1:10, 25:30, 40, 50:100)]
Selecting columns by name in pandas is straightforward; for example, df[['a', 'c']] would return:
a c
0 -1.039575 -0.424972
1 0.567020 -1.087401
2 -0.673690 -1.478427
3 0.524988 0.577046
4 -1.715002 -0.370647
5 -1.157892 0.844885
6 1.075770 1.643563
7 -1.469388 -0.674600
8 -1.776904 -1.294524
9 0.413738 -0.472035
Selecting multiple noncontiguous columns by integer location can be achieved with a combination of the iloc indexer
attribute and numpy.r_.
In [5]: n = 30
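A minimal sketch of this pattern (the frame shape and the selected positions are assumed for illustration):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 100))

# np.r_ builds one integer array out of several slices and points,
# which iloc then uses for positional column selection
df.iloc[:, np.r_[0:10, 24:30, 39, 49:100]]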
30.2.2 aggregate
In R you may want to split data into subsets and compute the mean for each. Using a data.frame called df and splitting
it into groups by1 and by2:
df <- data.frame(
v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99),
by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12),
by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA))
aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean)
In [9]: df = pd.DataFrame({
...: 'v1': [1,3,5,7,8,3,5,np.nan,4,5,7,9],
...: 'v2': [11,33,55,77,88,33,55,np.nan,44,55,77,99],
...: 'by1': ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12],
...: 'by2': ["wet", "dry", 99, 95, np.nan, "damp", 95, 99, "red", 99, np.nan,
...: np.nan]
...: })
...:
In [10]: g = df.groupby(['by1','by2'])
In [11]: g[['v1','v2']].mean()
Out[11]:
v1 v2
by1 by2
1 95 5.0 55.0
99 5.0 55.0
2 95 7.0 77.0
99 NaN NaN
big damp 3.0 33.0
blue dry 3.0 33.0
red red 4.0 44.0
wet 1.0 11.0
30.2.3 match / %in%
A common way to select data in R is using %in% which is defined using the function match. The operator %in% is
used to return a logical vector indicating if there is a match or not:
s <- 0:4
s %in% c(2,4)
In [12]: s = pd.Series(np.arange(5),dtype=np.float32)
The match function returns a vector of the positions of matches of its first argument in its second:
s <- 0:4
match(s, c(2,4))
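A sketch of the pandas counterparts: isin() plays the role of %in%, while Index.get_indexer() gives 0-based match
positions (this pairing is an illustration, not the only option):
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5))
s.isin([2, 4])                    # boolean vector, like s %in% c(2, 4)
pd.Index([2, 4]).get_indexer(s)   # positions (0-based); -1 where there is no match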
30.2.4 tapply
tapply is similar to aggregate, but data can be in a ragged array, since the subclass sizes are possibly irregular.
Using a data.frame called baseball, and retrieving information based on the array team:
baseball <-
data.frame(team = gl(5, 5,
labels = paste("Team", LETTERS[1:5])),
player = sample(letters, 25),
batting.average = runif(25, .200, .400))
tapply(baseball$batting.average, baseball$team, max)
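In pandas the same split-and-aggregate can be expressed with groupby; a sketch using a small made-up frame (the
column names are assumed):
import numpy as np
import pandas as pd

baseball = pd.DataFrame({
    'team': ['Team A', 'Team A', 'Team B', 'Team B', 'Team C'],
    'batting_avg': np.random.uniform(.200, .400, 5),
})
baseball.groupby('team')['batting_avg'].max()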
30.2.5 subset
The query() method is similar to the base R subset function. In R you might want to get the rows of a data.frame
where one column's values are less than another column's values:
subset(df, a <= b)
In pandas, there are a few ways to perform subsetting. You can use query() or pass an expression as if it were an
index/slice as well as standard boolean indexing:
a b
0 -1.003455 -0.990738
1 0.083515 0.548796
3 -0.524392 0.904400
4 -0.837804 0.746374
8 -0.507219 0.245479
a b
0 -1.003455 -0.990738
1 0.083515 0.548796
3 -0.524392 0.904400
4 -0.837804 0.746374
8 -0.507219 0.245479
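Both outputs above can be produced by expressions along these lines (a sketch, assuming df has numeric columns a
and b):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
df.query('a <= b')    # like subset(df, a <= b)
df[df.a <= df.b]      # standard boolean indexing
df.loc[df.a <= df.b]  # the same, via the loc indexer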
30.2.6 with
An expression using a data.frame called df in R with the columns a and b would be evaluated using with like so:
with(df, a + b)
In pandas the equivalent expression, using the eval() method, would be:
df.eval('a + b')
0 -0.920205
1 -0.860236
2 1.154370
3 0.188140
4 -1.163718
5 0.001397
6 -0.825694
7 -1.138198
8 -1.708034
9 1.148616
dtype: float64
In certain cases eval() will be much faster than evaluation in pure Python. For more details and examples see the
eval documentation.
30.3 plyr
plyr is an R library for the split-apply-combine strategy for data analysis. The functions revolve around three data
structures in R, a for arrays, l for lists, and d for data.frame. The table below shows how these data
structures could be mapped in Python.
R Python
array list
lists dictionary or list of objects
data.frame dataframe
30.3.1 ddply
require(plyr)
df <- data.frame(
x = runif(120, 1, 168),
y = runif(120, 7, 334),
z = runif(120, 1.7, 20.7),
month = rep(c(5,6,7,8),30),
week = sample(1:4, 120, TRUE)
)
In pandas the equivalent expression, using the groupby() method, would be:
In [25]: df = pd.DataFrame({
....: 'x': np.random.uniform(1., 168., 120),
....: 'y': np.random.uniform(7., 334., 120),
....: 'z': np.random.uniform(1.7, 20.7, 120),
....: 'month': [5,6,7,8]*30,
....: 'week': np.random.randint(1,4, 120)
....: })
....:
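Continuing from the frame built above, the grouped summary itself would then be a groupby/agg call roughly like the
following (the exact statistics are assumed for illustration):
# split by month and week, then summarize column x
df.groupby(['month', 'week']).agg({'x': ['mean', 'std']})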
30.4 reshape / reshape2
30.4.1 melt.array
An expression using a 3 dimensional array called a in R where you want to melt it into a data.frame:
In [28]: a = np.array(list(range(1,24))+[np.NAN]).reshape(2,3,4)
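One way to melt such an array in pandas is to enumerate every cell with numpy.ndenumerate and build a DataFrame
from the (index, value) pairs; a sketch continuing from the array a above (the column names are illustrative):
pd.DataFrame([tuple(list(idx) + [val]) for idx, val in np.ndenumerate(a)],
             columns=['dim1', 'dim2', 'dim3', 'value'])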
30.4.2 melt.list
An expression using a list called a in R where you want to melt it into a data.frame:
In Python, this list would be a list of tuples, so the DataFrame() constructor would convert it to a DataFrame as required.
In [30]: a = list(enumerate(list(range(1,5))+[np.NAN]))
In [31]: pd.DataFrame(a)
Out[31]:
0 1
0 0 1.0
1 1 2.0
2 2 3.0
3 3 4.0
4 4 NaN
For more details and examples see the Intro to Data Structures documentation.
30.4.3 melt.data.frame
An expression using a data.frame called cheese in R where you want to reshape the data.frame:
first last
John Doe height 5.5
weight 130.0
Mary Bo height 6.0
weight 150.0
dtype: float64
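A sketch of how that reshape could look in pandas, with the cheese frame rebuilt from the values shown above;
melt() gives the long format, while set_index().stack() yields the stacked Series displayed above:
import pandas as pd

cheese = pd.DataFrame({'first': ['John', 'Mary'],
                       'last': ['Doe', 'Bo'],
                       'height': [5.5, 6.0],
                       'weight': [130, 150]})
pd.melt(cheese, id_vars=['first', 'last'])
cheese.set_index(['first', 'last']).stack()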
30.4.4 cast
In R, acast is used with a data.frame called df to cast it into a higher-dimensional array:
df <- data.frame(
x = runif(12, 1, 168),
y = runif(12, 7, 334),
z = runif(12, 1.7, 20.7),
month = rep(c(5,6,7),4),
week = rep(c(1,2), 6)
)
In [35]: df = pd.DataFrame({
   ....: 'x': np.random.uniform(1., 168., 12),
   ....: 'y': np.random.uniform(7., 334., 12),
   ....: 'z': np.random.uniform(1.7, 20.7, 12),
   ....: 'month': [5, 6, 7] * 4,
   ....: 'week': [1, 2] * 6,
   ....: })
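Continuing from the frame just built, a pivot_table() call along these lines performs the cast (the choice of values,
index and columns is illustrative):
pd.pivot_table(df, values='x', index=['week'], columns=['month'],
               aggfunc=np.mean)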
Similarly for dcast which uses a data.frame called df in R to aggregate information based on Animal and
FeedType:
df <- data.frame(
Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
'Animal2', 'Animal3'),
FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'),
Amount = c(10, 7, 4, 2, 5, 6, 2)
)
Python can approach this in two different ways. Firstly, similar to above using pivot_table():
In [38]: df = pd.DataFrame({
....: 'Animal': ['Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
....: 'Animal2', 'Animal3'],
....: 'FeedType': ['A', 'B', 'A', 'A', 'B', 'B', 'A'],
....: 'Amount': [10, 7, 4, 2, 5, 6, 2],
....: })
....:
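The aggregated table shown below can be produced by a pivot_table() call roughly like this (a sketch):
df.pivot_table(values='Amount', index='Animal', columns='FeedType',
               aggfunc='sum')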
Out[39]:
FeedType A B
Animal
Animal1 10.0 5.0
Animal2 2.0 13.0
Animal3 6.0 NaN
In [40]: df.groupby(['Animal','FeedType'])['Amount'].sum()
Out[40]:
Animal FeedType
Animal1 A 10
B 5
Animal2 A 2
B 13
Animal3 A 6
Name: Amount, dtype: int64
For more details and examples see the reshaping documentation or the groupby documentation.
30.4.5 factor
cut(c(1,2,3,4,5,6), 3)
factor(c(1,2,3,2,2,3))
In [41]: pd.cut(pd.Series([1,2,3,4,5,6]), 3)
Out[41]:
0 (0.995, 2.667]
1 (0.995, 2.667]
2 (2.667, 4.333]
3 (2.667, 4.333]
4 (4.333, 6.0]
5 (4.333, 6.0]
dtype: category
Categories (3, interval[float64]): [(0.995, 2.667] < (2.667, 4.333] < (4.333, 6.0]]
In [42]: pd.Series([1,2,3,2,2,3]).astype("category")
Out[42]:
0 1
1 2
2 3
3 2
4 2
5 3
dtype: category
Categories (3, int64): [1, 2, 3]
For more details and examples see the categorical introduction and the API documentation. There is also documentation
on the differences relative to R's factor.
THIRTYONE
COMPARISON WITH SQL
Since many potential pandas users have some familiarity with SQL, this page is meant to provide some examples of
how various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows:
In [1]: import pandas as pd

In [2]: import numpy as np
Most of the examples will utilize the tips dataset found within pandas tests. We’ll read the data into a DataFrame
called tips and assume we have a database table of the same name and structure.
In [3]: url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'

In [4]: tips = pd.read_csv(url)
In [5]: tips.head()
Out[5]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
31.1 SELECT
In SQL, selection is done using a comma-separated list of columns you’d like to select (or a * to select all columns):
SELECT total_bill, tip, smoker, time
FROM tips
LIMIT 5;
With pandas, column selection is done by passing a list of column names to your DataFrame:
In [6]: tips[['total_bill', 'tip', 'smoker', 'time']].head(5)
Out[6]:
total_bill tip smoker time
Calling the DataFrame without the list of column names would display all columns (akin to SQL’s *).
31.2 WHERE
SELECT *
FROM tips
WHERE time = 'Dinner'
LIMIT 5;
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.
The above statement is simply passing a Series of True/False objects to the DataFrame, returning all rows with
True.
In [8]: is_dinner = tips['time'] == 'Dinner'

In [9]: is_dinner.value_counts()
Out[9]:
True 176
False 68
Name: time, dtype: int64
In [10]: tips[is_dinner].head(5)
Out[10]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Just like SQL’s OR and AND, multiple conditions can be passed to a DataFrame using | (OR) and & (AND).
-- tips by parties of at least 5 diners OR bill total was more than $45
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;
# tips by parties of at least 5 diners OR bill total was more than $45
In [12]: tips[(tips['size'] >= 5) | (tips['total_bill'] > 45)]
Out[12]:
total_bill tip sex smoker day time size
59 48.27 6.73 Male No Sat Dinner 4
125 29.80 4.20 Female No Thur Lunch 6
141 34.30 6.70 Male No Thur Lunch 6
142 41.19 5.00 Male No Thur Lunch 5
143 27.05 5.00 Female No Thur Lunch 6
155 29.85 5.14 Female No Sun Dinner 5
156 48.17 5.00 Male No Sun Dinner 6
170 50.81 10.00 Male Yes Sat Dinner 3
182 45.35 3.50 Male Yes Sun Dinner 3
185 20.69 5.00 Male No Sun Dinner 5
187 30.46 2.00 Male Yes Sun Dinner 5
212 48.33 9.00 Male No Sat Dinner 4
216 28.15 3.00 Male Yes Sat Dinner 5
NULL checking in pandas is done with the notna() and isna() methods. Consider a frame with some missing values:
In [13]: frame = pd.DataFrame({'col1': ['A', 'B', np.nan, 'C', 'D'],
   ....:                       'col2': ['F', np.nan, 'G', 'H', 'I']})

In [14]: frame
Out[14]:
col1 col2
0 A F
1 B NaN
2 NaN G
3 C H
4 D I
Assume we have a table of the same structure as our DataFrame above. We can see only the records where col2 IS
NULL with the following query:
SELECT *
FROM frame
WHERE col2 IS NULL;
In [15]: frame[frame['col2'].isna()]
Out[15]:
col1 col2
1 B NaN
Getting items where col1 IS NOT NULL can be done with notna().
SELECT *
FROM frame
WHERE col1 IS NOT NULL;
In [16]: frame[frame['col1'].notna()]
Out[16]:
col1 col2
0 A F
1 B NaN
3 C H
4 D I
31.3 GROUP BY
In pandas, SQL’s GROUP BY operations are performed using the similarly named groupby() method.
groupby() typically refers to a process where we’d like to split a dataset into groups, apply some function (typically
aggregation), and then combine the groups together.
A common SQL operation would be getting the count of records in each group throughout a dataset. For instance, a
query getting us the number of tips left by sex:
In [17]: tips.groupby('sex').size()
Out[17]:
sex
Female 87
Male 157
dtype: int64
Notice that in the pandas code we used size() and not count(). This is because count() applies the function
to each column, returning the number of not null records within each.
In [18]: tips.groupby('sex').count()
Out[18]:
total_bill tip smoker day time size
sex
Female 87 87 87 87 87 87
Male 157 157 157 157 157 157
In [19]: tips.groupby('sex')['total_bill'].count()
Out[19]:
sex
Female 87
Male 157
Name: total_bill, dtype: int64
Multiple functions can also be applied at once. For instance, say we’d like to see how tip amount differs by day of
the week - agg() allows you to pass a dictionary to your grouped DataFrame, indicating which functions to apply to
specific columns.
Grouping by more than one column is done by passing a list of columns to the groupby() method.
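For illustration, both ideas might look roughly like this on the tips frame (the exact aggregations are assumed):
# tip statistics per day: the dict maps columns to the functions applied to them
tips.groupby('day').agg({'tip': 'mean', 'day': 'size'})

# grouping by two columns, with several functions applied to one column
tips.groupby(['smoker', 'day']).agg({'tip': ['size', 'mean']})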
31.4 JOIN
JOINs can be performed with join() or merge(). By default, join() will join the DataFrames on their indices.
Each method has parameters allowing you to specify the type of join to perform (LEFT, RIGHT, INNER, FULL) or
the columns to join on (column names or indices).
In [22]: df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'],
....: 'value': np.random.randn(4)})
....:
Assume we have two database tables of the same name and structure as our DataFrames.
Now let’s go over the various types of JOINs.
SELECT *
FROM df1
INNER JOIN df2
ON df1.key = df2.key;
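A sketch of the pandas side: build a second frame (the keys here are assumed) and let merge() perform the join;
merge() does an INNER JOIN by default:
df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'],
                    'value': np.random.randn(4)})

pd.merge(df1, df2, on='key')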
merge() also offers parameters for cases when you’d like to join one DataFrame’s column with another DataFrame’s
index.
In [25]: indexed_df2 = df2.set_index('key')
pandas also allows for FULL JOINs, which display both sides of the dataset, whether or not the joined columns find a
match. As of this writing, FULL JOINs are not supported in all RDBMS (MySQL, for example, does not support them).
-- show all records from both tables
SELECT *
FROM df1
FULL OUTER JOIN df2
ON df1.key = df2.key;
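The pandas counterparts are the how= variants of merge(); a sketch:
pd.merge(df1, df2, on='key', how='left')    # LEFT OUTER JOIN
pd.merge(df1, df2, on='key', how='right')   # RIGHT OUTER JOIN
pd.merge(df1, df2, on='key', how='outer')   # FULL OUTER JOIN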
31.5 UNION
SQL's UNION is similar to UNION ALL; however, UNION will remove duplicate rows.
SELECT city, rank
FROM df1
UNION
SELECT city, rank
FROM df2;
31.6 Pandas equivalents for some SQL analytic and aggregate functions
-- MySQL
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;
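One way to express this in pandas (a sketch, not the only approach) is to take the 15 largest tips and keep the last 10
of them:
tips.nlargest(10 + 5, columns='tip').tail(10)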
Let's find tips with (rank < 3) per gender group for (tips < 2). Notice that when using rank(method='min'),
rnk_min remains the same for identical tip values (as Oracle's RANK() function would).
31.7 UPDATE
UPDATE tips
SET tip = tip*2
WHERE tip < 2;
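In pandas the same update is an assignment into a boolean-indexed selection; a sketch:
tips.loc[tips['tip'] < 2, 'tip'] *= 2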
31.8 DELETE
In pandas we select the rows that should remain, instead of deleting them
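For example, keeping only rows whose tip is at most 9 (the threshold is assumed for illustration):
tips = tips.loc[tips['tip'] <= 9]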
THIRTYTWO
COMPARISON WITH SAS
For potential users coming from SAS this page is meant to demonstrate how different SAS operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows:
In [1]: import pandas as pd

In [2]: import numpy as np
Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) - the equivalent in SAS would be:
proc print data=df(obs=5);
run;
pandas SAS
DataFrame data set
column variable
row observation
groupby BY-group
NaN .
32.1.1 DataFrame
A DataFrame in pandas is analogous to a SAS data set - a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set
using SAS's DATA step can also be accomplished in pandas.
32.1.2 Series
A Series is the data structure that represents one column of a DataFrame. SAS doesn't have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column in the
DATA step.
32.1.3 Index
Every DataFrame and Series has an Index - which are labels on the rows of the data. SAS does not have an
exactly analogous concept. A data set’s rows are essentially unlabeled, other than an implicit integer index that can be
accessed during the DATA step (_N_).
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.
A SAS data set can be built from specified values by placing the data after a datalines statement and specifying
the column names.
data df;
input x y;
datalines;
1 2
3 4
5 6
;
run;
A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.
In [3]: df = pd.DataFrame({
...: 'x': [1, 3, 5],
...: 'y': [2, 4, 6]})
...:
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
Like SAS, pandas provides utilities for reading in data from many formats. The tips dataset, found within the pandas
tests (csv) will be used in many of the following examples.
SAS provides PROC IMPORT to read csv data into a data set.
In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Like PROC IMPORT, read_csv can take a number of parameters to specify how the data should be parsed. For
example, if the data was instead tab delimited, and did not have column names, the pandas command would be:
tips = pd.read_csv('tips.csv', sep='\t', header=None)
In addition to text/csv, pandas supports a variety of other data formats such as Excel, HDF5, and SQL databases. These
are all read via a pd.read_* function. See the IO documentation for more details.
Similarly in pandas, the opposite of read_csv is to_csv(), and other data formats follow a similar API.
tips.to_csv('tips2.csv')
32.3 Data Operations
32.3.1 Operations on Columns
In the DATA step, arbitrary math expressions can be used on new or existing columns.
data tips;
set tips;
total_bill = total_bill - 2;
new_bill = total_bill / 2;
run;
pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way.
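A sketch of the assignments that lead to the frame displayed below:
tips['total_bill'] = tips['total_bill'] - 2
tips['new_bill'] = tips['total_bill'] / 2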
In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
32.3.2 Filtering
data tips;
set tips;
if total_bill > 10;
run;
data tips;
set tips;
where total_bill > 10;
/* equivalent in this case - where happens before the
DATA step begins and can also be used in PROC statements */
run;
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing
data tips;
set tips;
format bucket $4.;
if total_bill < 10 then bucket = 'low';
else bucket = 'high';
run;
The same operation in pandas can be accomplished using the where method from numpy.
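A sketch of that numpy.where call, with a threshold of 10 as in the DATA step above:
import numpy as np

tips['bucket'] = np.where(tips['total_bill'] < 10, 'low', 'high')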
In [13]: tips.head()
Out[13]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
data tips;
set tips;
format date1 date2 date1_plusmonth mmddyy10.;
date1 = mdy(1, 15, 2013);
date2 = mdy(2, 15, 2015);
date1_year = year(date1);
date2_month = month(date2);
* shift date to beginning of next interval;
date1_next = intnx('MONTH', date1, 1);
* count intervals between dates;
months_between = intck('MONTH', date1, date2);
run;
The equivalent pandas operations are shown below. In addition to these functions pandas supports other Time Series
features not available in Base SAS (such as resampling and custom offsets) - see the timeseries documentation for
more details.
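A sketch of operations along these lines (the dates follow the SAS example; the month arithmetic via to_period() is
one of several possible approaches):
tips['date1'] = pd.Timestamp('2013-01-15')
tips['date2'] = pd.Timestamp('2015-02-15')
tips['date1_year'] = tips['date1'].dt.year
tips['date2_month'] = tips['date2'].dt.month
# shift to the beginning of the next month
tips['date1_next'] = tips['date1'] + pd.offsets.MonthBegin()
# whole months between the two dates
tips['months_between'] = (tips['date2'].dt.to_period('M') -
                          tips['date1'].dt.to_period('M'))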
In [20]: tips[['date1','date2','date1_year','date2_month',
....: 'date1_next','months_between']].head()
....:
Out[20]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 25
1 2013-01-15 2015-02-15 2013 2 2013-02-01 25
2 2013-01-15 2015-02-15 2013 2 2013-02-01 25
SAS provides keywords in the DATA step to select, drop, and rename columns.
data tips;
set tips;
keep sex total_bill tip;
run;
data tips;
set tips;
drop sex;
run;
data tips;
set tips;
rename total_bill=total_bill_2;
run;
# keep
In [21]: tips[['sex', 'total_bill', 'tip']].head()
Out[21]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61
# drop
In [22]: tips.drop('sex', axis=1).head()
# rename
In [23]: tips.rename(columns={'total_bill':'total_bill_2'}).head()
pandas objects have a sort_values() method, which takes a list of columns to sort by.
In [25]: tips.head()
Out[25]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
32.4 String Processing
32.4.1 Length
SAS determines the length of a character string with the LENGTHN and LENGTHC functions. LENGTHN excludes
trailing blanks and LENGTHC includes trailing blanks.
data _null_;
set tips;
put(LENGTHN(time));
put(LENGTHC(time));
run;
Python determines the length of a character string with the len function. len includes trailing blanks. Use len and
rstrip to exclude trailing blanks.
In [26]: tips['time'].str.len().head()
Out[26]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
In [27]: tips['time'].str.rstrip().str.len().head()
Out[27]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
32.4.2 Find
SAS determines the position of a character in a string with the FINDW function. FINDW takes the string defined by
the first argument and searches for the first position of the substring you supply as the second argument.
data _null_;
set tips;
put(FINDW(sex,'ale'));
run;
Python determines the position of a character in a string with the find function. find searches for the first position
of the substring. If the substring is found, the function returns its position. Keep in mind that Python indexes are
zero-based and the function will return -1 if it fails to find the substring.
In [28]: tips['sex'].str.find("ale").head()
Out[28]:
67 3
92 3
111 3
145 3
135 3
Name: sex, dtype: int64
32.4.3 Substring
SAS extracts a substring from a string based on its position with the SUBSTR function.
data _null_;
set tips;
put(substr(sex,1,1));
run;
With pandas you can use [] notation to extract a substring from a string by position locations. Keep in mind that
Python indexes are zero-based.
In [29]: tips['sex'].str[0:1].head()
Out[29]:
67 F
92 F
111 F
145 F
135 F
Name: sex, dtype: object
32.4.4 Scan
The SAS SCAN function returns the nth word from a string. The first argument is the string you want to parse and the
second argument specifies which word you want to extract.
data firstlast;
input String $60.;
First_Name = scan(string, 1);
Last_Name = scan(string, -1);
datalines2;
Python extracts a substring from a string based on its text by using regular expressions. There are much more powerful
approaches, but this just shows a simple approach.
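A sketch using str.split()/str.rsplit(); note that selecting column 0 of the rsplit result is what makes Last_Name
mirror First_Name in the output shown below:
firstlast = pd.DataFrame({'String': ['John Smith', 'Jane Cook']})
firstlast['First_Name'] = firstlast['String'].str.split(" ", expand=True)[0]
firstlast['Last_Name'] = firstlast['String'].str.rsplit(" ", expand=True)[0]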
In [33]: firstlast
Out[33]:
String First_Name Last_Name
0 John Smith John John
1 Jane Cook Jane Jane
The SAS UPCASE LOWCASE and PROPCASE functions change the case of the argument.
data firstlast;
input String $60.;
string_up = UPCASE(string);
string_low = LOWCASE(string);
string_prop = PROPCASE(string);
datalines2;
John Smith;
Jane Cook;
;;;
run;
In [38]: firstlast
Out[38]:
String string_up string_low string_prop
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook
32.5 Merging
In [40]: df1
Out[40]:
key value
0 A -0.857326
1 B 1.075416
2 C 0.371727
3 D 1.065735
In [42]: df2
Out[42]:
key value
0 B -0.227314
1 D 2.102726
2 D -0.092796
3 E 0.094694
In SAS, data must be explicitly sorted before merging. Different types of joins are accomplished using the in= dummy
variables to track whether a match was found in one or both input frames.
proc sort data=df1;
by key;
run;
pandas DataFrames have a merge() method, which provides similar functionality. Note that the data does not have
to be sorted ahead of time, and different join types are accomplished via the how keyword.
In [43]: inner_join = df1.merge(df2, on=['key'], how='inner')
In [44]: inner_join
Out[44]:
key value_x value_y
0 B 1.075416 -0.227314
In [46]: left_join
Out[46]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
In [48]: right_join
Out[48]:
key value_x value_y
0 B 1.075416 -0.227314
1 D 1.065735 2.102726
2 D 1.065735 -0.092796
3 E NaN 0.094694
In [50]: outer_join
Out[50]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
5 E NaN 0.094694
Like SAS, pandas has a representation for missing data - which is the special float value NaN (not a number). Many
of the semantics are the same, for example missing data propagates through numeric operations, and is ignored by
default for aggregations.
In [51]: outer_join
Out[51]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
5 E NaN 0.094694
In [53]: outer_join['value_x'].sum()
Out[53]: 2.7212865354426201
One difference is that missing data cannot be compared to its sentinel value. For example, in SAS you could do this
to filter missing values.
data outer_join_nulls;
set outer_join;
if value_x = .;
run;
data outer_join_no_nulls;
set outer_join;
if value_x ^= .;
run;
Which doesn’t work in pandas. Instead, the pd.isna or pd.notna functions should be used for comparisons.
In [54]: outer_join[pd.isna(outer_join['value_x'])]
Out[54]:
key value_x value_y
5 E NaN 0.094694
In [55]: outer_join[pd.notna(outer_join['value_x'])]
Out[55]:
key value_x value_y
0 A -0.857326 NaN
1 B 1.075416 -0.227314
2 C 0.371727 NaN
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
pandas also provides a variety of methods to work with missing data - some of which would be challenging to express
in SAS. For example, there are methods to drop all rows with any missing values, replacing missing values with a
specified value, like the mean, or forward filling from previous rows. See the missing data documentation for more.
In [56]: outer_join.dropna()
Out[56]:
key value_x value_y
1 B 1.075416 -0.227314
3 D 1.065735 2.102726
4 D 1.065735 -0.092796
In [57]: outer_join.fillna(method='ffill')
In [58]: outer_join['value_x'].fillna(outer_join['value_x'].mean())
Out[58]:
0 -0.857326
1 1.075416
2 0.371727
3 1.065735
4 1.065735
5 0.544257
Name: value_x, dtype: float64
32.7 GroupBy
32.7.1 Aggregation
SAS’s PROC SUMMARY can be used to group by one or more key variables and compute aggregations on numeric
columns.
pandas provides a flexible groupby mechanism that allows similar aggregations. See the groupby documentation for
more details and examples.
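A sketch of the aggregation behind the frame shown below: group by sex and smoker, then sum the two numeric
columns:
tips_summed = tips.groupby(['sex', 'smoker'])[['total_bill', 'tip']].sum()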
In [60]: tips_summed.head()
Out[60]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07
32.7.2 Transformation
In SAS, if the group aggregations need to be used with the original frame, it must be merged back together. For
example, to subtract the mean for each observation by smoker group.
data tips;
merge tips(in=a) smoker_means(in=b);
by smoker;
adj_total_bill = total_bill - group_bill;
if a and b;
run;
pandas groupby provides a transform mechanism that allows these types of operations to be succinctly expressed
in one operation.
In [61]: gb = tips.groupby('smoker')['total_bill']
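Continuing with gb from above, the adjusted column displayed below can be computed roughly as:
tips['adj_total_bill'] = tips['total_bill'] - gb.transform('mean')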
In [63]: tips.head()
Out[63]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278
In addition to aggregation, pandas groupby can be used to replicate most other by group processing from SAS. For
example, this DATA step reads the data by sex/smoker group and filters to the first entry for each.
data tips_first;
set tips;
by sex smoker;
if FIRST.sex or FIRST.smoker then output;
run;
In [64]: tips.groupby(['sex','smoker']).first()
Out[64]:
total_bill tip day time size adj_total_bill
sex smoker
Female No 5.25 1.00 Sat Dinner 1 -11.938278
pandas operates exclusively in memory, where a SAS data set exists on disk. This means that the size of data able to
be loaded in pandas is limited by your machine’s memory, but also that the operations on that data may be faster.
If out of core processing is needed, one possibility is the dask.dataframe library (currently in development) which
provides a subset of pandas functionality for an on-disk DataFrame
pandas provides a read_sas() method that can read SAS data saved in the XPORT or SAS7BDAT binary format.
df = pd.read_sas('transport-file.xpt')
df = pd.read_sas('binary-file.sas7bdat')
You can also specify the file format directly. By default, pandas will try to infer the file format based on its extension.
df = pd.read_sas('transport-file.xpt', format='xport')
df = pd.read_sas('binary-file.sas7bdat', format='sas7bdat')
XPORT is a relatively limited format and the parsing of it is not as optimized as some of the other pandas readers. An
alternative way to interchange data between SAS and pandas is to serialize to csv.
THIRTYTHREE
COMPARISON WITH STATA
For potential users coming from Stata this page is meant to demonstrate how different Stata operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows. This means that we can refer to the libraries as pd and np,
respectively, for the rest of the document.
In [1]: import pandas as pd

In [2]: import numpy as np
Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) – the equivalent in Stata would be:
list in 1/5
pandas Stata
DataFrame data set
column variable
row observation
groupby bysort
NaN .
33.1.1 DataFrame
A DataFrame in pandas is analogous to a Stata data set – a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set in
Stata can also be accomplished in pandas.
33.1.2 Series
A Series is the data structure that represents one column of a DataFrame. Stata doesn't have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column of a data
set in Stata.
33.1.3 Index
Every DataFrame and Series has an Index – labels on the rows of the data. Stata does not have an exactly
analogous concept. In Stata, a data set’s rows are essentially unlabeled, other than an implicit integer index that can
be accessed with _n.
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.
A Stata data set can be built from specified values by placing the data after an input statement and specifying the
column names.
input x y
1 2
3 4
5 6
end
A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.
In [3]: df = pd.DataFrame({
...: 'x': [1, 3, 5],
...: 'y': [2, 4, 6]})
...:
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
Like Stata, pandas provides utilities for reading in data from many formats. The tips data set, found within the
pandas tests (csv) will be used in many of the following examples.
Stata provides import delimited to read csv data into a data set in memory. If the tips.csv file is in the
current working directory, we can import it as follows.
The pandas method is read_csv(), which works similarly. Additionally, it will automatically download the data
set if presented with a url.
In [5]: url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'

In [6]: tips = pd.read_csv(url)
In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Like import delimited, read_csv() can take a number of parameters to specify how the data should be
parsed. For example, if the data were instead tab delimited, did not have column names, and existed in the current
working directory, the pandas command would be:
tips = pd.read_csv('tips.csv', sep='\t', header=None)
Pandas can also read Stata data sets in .dta format with the read_stata() function.
df = pd.read_stata('data.dta')
In addition to text/csv and Stata files, pandas supports a variety of other data formats such as Excel, SAS, HDF5,
Parquet, and SQL databases. These are all read via a pd.read_* function. See the IO documentation for more
details.
Pandas can also export to Stata file format with the DataFrame.to_stata() method.
tips.to_stata('tips2.dta')
33.3 Data Operations
33.3.1 Operations on Columns
In Stata, arbitrary math expressions can be used with the generate and replace commands on new or existing
columns. The drop command drops the column from the data set.
pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way. The DataFrame.drop() method drops a column from the DataFrame.
In [8]: tips['total_bill'] = tips['total_bill'] - 2

In [9]: tips['new_bill'] = tips['total_bill'] / 2
In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
33.3.2 Filtering
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.
In [12]: tips[tips['total_bill'] > 10].head()
Out[12]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
5 23.29 4.71 Male No Sun Dinner 4
The same operation in pandas can be accomplished using the where method from numpy.
In [13]: tips['bucket'] = np.where(tips['total_bill'] < 10, 'low', 'high')
In [14]: tips.head()
Out[14]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
The equivalent pandas operations are shown below. In addition to these functions, pandas supports other Time Series
features not available in Stata (such as time zone handling and custom offsets) – see the timeseries documentation for
more details.
In [21]: tips[['date1','date2','date1_year','date2_month',
....: 'date1_next','months_between']].head()
....:
Out[21]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 25
1 2013-01-15 2015-02-15 2013 2 2013-02-01 25
2 2013-01-15 2015-02-15 2013 2 2013-02-01 25
3 2013-01-15 2015-02-15 2013 2 2013-02-01 25
4 2013-01-15 2015-02-15 2013 2 2013-02-01 25
keep sex total_bill tip
drop sex
rename total_bill total_bill_2
The same operations are expressed in pandas below. Note that in contrast to Stata, these operations do not happen in
place. To make these changes persist, assign the operation back to a variable.
# keep
In [22]: tips[['sex', 'total_bill', 'tip']].head()
Out[22]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61
# drop
In [23]: tips.drop('sex', axis=1).head()
# rename
In [24]: tips.rename(columns={'total_bill': 'total_bill_2'}).head()
pandas objects have a DataFrame.sort_values() method, which takes a list of columns to sort by.
In [26]: tips.head()
Out[26]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
Stata determines the length of a character string with the strlen() and ustrlen() functions for ASCII and
Unicode strings, respectively.
Python determines the length of a character string with the len function. In Python 3, all strings are Unicode strings.
len includes trailing blanks. Use len and rstrip to exclude trailing blanks.
In [27]: tips['time'].str.len().head()
Out[27]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
In [28]: tips['time'].str.rstrip().str.len().head()
Out[28]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
Stata determines the position of a character in a string with the strpos() function. This takes the string defined by
the first argument and searches for the first position of the substring you supply as the second argument.
Python determines the position of a character in a string with the find() function. find searches for the first
position of the substring. If the substring is found, the function returns its position. Keep in mind that Python indexes
are zero-based and the function will return -1 if it fails to find the substring.
In [29]: tips['sex'].str.find("ale").head()
Out[29]:
67 3
Stata extracts a substring from a string based on its position with the substr() function.
With pandas you can use [] notation to extract a substring from a string by position locations. Keep in mind that
Python indexes are zero-based.
In [30]: tips['sex'].str[0:1].head()
Out[30]:
67 F
92 F
111 F
145 F
135 F
Name: sex, dtype: object
The Stata word() function returns the nth word from a string. The first argument is the string you want to parse and
the second argument specifies which word you want to extract.
clear
input str20 string
"John Smith"
"Jane Cook"
end
Python extracts a substring from a string based on its text by using regular expressions. There are much more powerful
approaches, but this just shows a simple approach.
In [34]: firstlast
Out[34]:
string First_Name Last_Name
0 John Smith John John
1 Jane Cook Jane Jane
clear
input str20 string
"John Smith"
"Jane Cook"
end
In [39]: firstlast
Out[39]:
string upper lower title
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook
33.5 Merging
In [41]: df1
Out[41]:
key value
0 A 0.885906
1 B 0.794848
2 C -0.943848
3 D 0.328609
In [43]: df2
Out[43]:
key value
In Stata, to perform a merge, one data set must be in memory and the other must be referenced as a file name on disk.
In contrast, Python must have both DataFrames already in memory.
By default, Stata performs an outer join, where all observations from both data sets are left in memory after the merge.
One can keep only observations from the initial data set, the merged data set, or the intersection of the two by using
the values created in the _merge variable.
preserve
* Left join
merge 1:n key using df2.dta
keep if _merge == 1
* Right join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 2
* Inner join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 3
* Outer join
restore
merge 1:n key using df2.dta
pandas DataFrames have a DataFrame.merge() method, which provides similar functionality. Note that different
join types are accomplished via the how keyword.
In [45]: inner_join
Out[45]:
key value_x value_y
0 B 0.794848 -1.634931
1 D 0.328609 2.197567
2 D 0.328609 0.054695
In [47]: left_join
Out[47]:
key value_x value_y
0 A 0.885906 NaN
1 B 0.794848 -1.634931
2 C -0.943848 NaN
3 D 0.328609 2.197567
4 D 0.328609 0.054695
In [49]: right_join
Out[49]:
key value_x value_y
0 B 0.794848 -1.634931
1 D 0.328609 2.197567
2 D 0.328609 0.054695
3 E NaN 0.283297
In [51]: outer_join
Out[51]:
key value_x value_y
0 A 0.885906 NaN
1 B 0.794848 -1.634931
2 C -0.943848 NaN
3 D 0.328609 2.197567
4 D 0.328609 0.054695
5 E NaN 0.283297
Like Stata, pandas has a representation for missing data – the special float value NaN (not a number). Many of the
semantics are the same; for example missing data propagates through numeric operations, and is ignored by default
for aggregations.
In [52]: outer_join
Out[52]:
key value_x value_y
0 A 0.885906 NaN
1 B 0.794848 -1.634931
2 C -0.943848 NaN
3 D 0.328609 2.197567
0 NaN
1 -0.840083
2 NaN
3 2.526176
4 0.383304
5 NaN
dtype: float64
In [54]: outer_join['value_x'].sum()
Out[54]: 1.3941243310085349
One difference is that missing data cannot be compared to its sentinel value. For example, in Stata you could do this
to filter missing values.
This doesn’t work in pandas. Instead, the pd.isna() or pd.notna() functions should be used for comparisons.
In [55]: outer_join[pd.isna(outer_join['value_x'])]
Out[55]:
key value_x value_y
5 E NaN 0.283297
In [56]: outer_join[pd.notna(outer_join['value_x'])]
Out[56]:
key value_x value_y
0 A 0.885906 NaN
1 B 0.794848 -1.634931
2 C -0.943848 NaN
3 D 0.328609 2.197567
4 D 0.328609 0.054695
Pandas also provides a variety of methods to work with missing data – some of which would be challenging to express
in Stata. For example, there are methods to drop all rows with any missing values, replacing missing values with a
specified value, like the mean, or forward filling from previous rows. See the missing data documentation for more.
# Fill forwards
In [58]: outer_join.fillna(method='ffill')
0 0.885906
1 0.794848
2 -0.943848
3 0.328609
4 0.328609
5 0.278825
Name: value_x, dtype: float64
33.7 GroupBy
33.7.1 Aggregation
Stata’s collapse can be used to group by one or more key variables and compute aggregations on numeric columns.
pandas provides a flexible groupby mechanism that allows similar aggregations. See the groupby documentation for
more details and examples.
In [61]: tips_summed.head()
Out[61]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07
33.7.2 Transformation
In Stata, if the group aggregations need to be used with the original data set, one would usually use bysort with
egen(). For example, to subtract the mean for each observation by smoker group.
pandas groupby provides a transform mechanism that allows these types of operations to be succinctly expressed
in one operation.
In [62]: gb = tips.groupby('smoker')['total_bill']
In [64]: tips.head()
Out[64]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278
In addition to aggregation, pandas groupby can be used to replicate most other bysort processing from Stata. For
example, the following example lists the first observation in the current sort order by sex/smoker group.
In [65]: tips.groupby(['sex','smoker']).first()
Out[65]:
total_bill tip day time size adj_total_bill
sex smoker
Female No 5.25 1.00 Sat Dinner 1 -11.938278
Yes 1.07 1.00 Sat Dinner 1 -17.686344
Male No 5.51 2.00 Thur Lunch 2 -11.678278
Yes 5.25 5.15 Sun Dinner 2 -13.506344
Pandas and Stata both operate exclusively in memory. This means that the size of data able to be loaded in pandas is
limited by your machine’s memory. If out of core processing is needed, one possibility is the dask.dataframe library,
which provides a subset of pandas functionality for an on-disk DataFrame.
THIRTYFOUR
API REFERENCE
This page gives an overview of all public pandas objects, functions and methods. All classes and functions exposed in
pandas.* namespace are public.
Some subpackages are public which include pandas.errors, pandas.plotting, and pandas.testing.
Public functions in pandas.io and pandas.tseries submodules are mentioned in the documentation.
pandas.api.types subpackage holds some public functions related to data types in pandas.
Warning: The pandas.core, pandas.compat, and pandas.util top-level modules are PRIVATE. Sta-
ble functionality in such modules is not guaranteed.
34.1 Input/Output
34.1.1 Pickling
read_pickle(path[, compression]) Load pickled pandas object (or any object) from file.
34.1.1.1 pandas.read_pickle
pandas.read_pickle(path, compression=’infer’)
Load pickled pandas object (or any object) from file.
Warning: Loading pickled data received from untrusted sources can be unsafe. See here.
See also:
Examples
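A round trip along these lines (the frame and file name are assumed for illustration) precedes the clean-up shown
below:
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df.to_pickle("./dummy.pkl")
>>> unpickled_df = pd.read_pickle("./dummy.pkl")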
>>> import os
>>> os.remove("./dummy.pkl")
34.1.2 Flat File
34.1.2.1 pandas.read_table
parse_dates : boolean or list of ints or names or list of lists or dict, default False
• boolean. If True -> try parsing the index.
• list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate
date column.
• list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
• dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
If a column or index contains an unparseable date, the entire column or index will be
returned unaltered as an object data type. For non-standard datetime parsing, use pd.
to_datetime after pd.read_csv
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled, pandas will attempt to infer the format of the date-
time strings in the columns, and if it can be inferred, switch to a faster method of parsing
them. In some cases this can increase the parsing speed by 5-10x.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the original
columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of datetime in-
stances. The default uses dateutil.parser.parser to do the conversion. Pandas
will try to call date_parser in three different ways, advancing to the next if an exception
occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) con-
catenate (row-wise) the string values from the columns defined by parse_dates into a
single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See the IO Tools docs for more information
on iterator and chunksize.
compression : {‘infer’, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’, None}, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and filepath_or_buffer is path-
like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, or ‘.xz’
(otherwise no decompression). If using ‘zip’, the ZIP file must contain only one data
file to be read in. Set to None for no decompression.
New in version 0.18.1: support for ‘zip’ and ‘xz’ compression.
thousands : str, default None
Thousands separator
decimal : str, default ‘.’
Character to recognize as decimal point (e.g. use ‘,’ for European data).
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values. The op-
tions are None for the ordinary converter, high for the high-precision converter, and
round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted items can
include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not
to interpret two consecutive quotechar elements INSIDE a field as a single quotechar
element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of a line, the
line will be ignored altogether. This parameter must be a single character. Like empty
lines (as long as skip_blank_lines=True), fully commented lines are ignored
by the parameter header but not by skiprows. For example, if comment='#', parsing
#empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being treated as the
header.
encoding : str, default None
Encoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python standard
encodings
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the following pa-
rameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
tupleize_cols : boolean, default False
Deprecated since version 0.21.0: This argument will be removed and will always con-
vert to MultiIndex
Leave a list of tuples on columns as is (default is to convert to a MultiIndex on the
columns)
error_bad_lines : boolean, default True
Lines with too many fields (e.g. a csv line with too many commas) will by default cause
an exception to be raised, and no DataFrame will be returned. If False, then these “bad
lines” will be dropped from the DataFrame that is returned.
warn_bad_lines : boolean, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each “bad line”
will be output.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but
possibly mixed type inference. To ensure no mixed types either set False, or specify the
type with the dtype parameter. Note that the entire file is read into a single DataFrame
regardless, use the chunksize or iterator parameter to return the data in chunks. (Only
valid with C parser)
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object directly onto memory
and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
Returns
result [DataFrame or TextParser]
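The iterator and chunksize parameters above allow a large delimited file to be processed in pieces instead of all at once. A minimal sketch, assuming a hypothetical local file data.csv with a numeric column named value:

>>> reader = pd.read_csv('data.csv', chunksize=10000)   # TextFileReader, not a DataFrame
>>> chunks = [chunk[chunk['value'] > 0] for chunk in reader]  # filter each chunk as it is read
>>> filtered = pd.concat(chunks, ignore_index=True)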
34.1.2.2 pandas.read_csv
detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, sep-
arators longer than 1 character and different from '\s+' will be interpreted as regular
expressions and will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\r\t'
delimiter : str, default None
Alternative argument name for sep.
delim_whitespace : boolean, default False
Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the sep.
Equivalent to setting sep='\s+'. If this option is set to True, nothing should be
passed in for the delimiter parameter.
New in version 0.18.1: support for the Python parser.
header : int or list of ints, default ‘infer’
Row number(s) to use as the column names, and the start of the data. Default be-
havior is to infer the column names: if no names are passed the behavior is identical
to header=0 and column names are inferred from the first line of the file, if col-
umn names are passed explicitly then the behavior is identical to header=None.
Explicitly pass header=0 to be able to replace existing names. The header can
be a list of integers that specify row locations for a multi-index on the columns e.g.
[0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this exam-
ple is skipped). Note that this parameter ignores commented lines and empty lines if
skip_blank_lines=True, so header=0 denotes the first line of data rather than
the first line of the file.
names : array-like, default None
List of column names to use. If file contains no header row, then you should explicitly
pass header=None. Duplicates in this list will cause a UserWarning to be issued.
index_col : int or sequence or False, default None
Column to use as the row labels of the DataFrame. If a sequence is given, a MultiIndex
is used. If you have a malformed file with delimiters at the end of each line, you might
consider index_col=False to force pandas to _not_ use the first column as the index (row
names)
usecols : list-like or callable, default None
Return a subset of the columns. If list-like, all elements must either be positional
(i.e. integer indices into the document columns) or strings that correspond to col-
umn names provided either by the user in names or inferred from the document
header row(s). For example, a valid list-like usecols parameter would be [0, 1, 2]
or [‘foo’, ‘bar’, ‘baz’]. Element order is ignored, so usecols=[0, 1] is the
same as [1, 0]. To instantiate a DataFrame from data with element order pre-
served use pd.read_csv(data, usecols=['foo', 'bar'])[['foo',
'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data,
usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] or-
der.
If callable, the callable function will be evaluated against the column names, returning
names where the callable function evaluates to True. An example of a valid callable ar-
gument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD'].
Using this parameter results in much faster parsing time and lower memory usage.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
prefix : str, default None
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, . . .
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, . . . ’X.N’, rather than ‘X’. . . ’X’. Pass-
ing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32} Use str or object to-
gether with suitable na_values settings to preserve and not interpret dtype. If converters
are specified, they will be applied INSTEAD of dtype conversion.
engine : {‘c’, ‘python’}, optional
Parser engine to use. The C engine is faster while the python engine is currently more
feature-complete.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers
or column labels
true_values : list, default None
Values to consider as True
false_values : list, default None
Values to consider as False
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer or callable, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
If callable, the callable function will be evaluated against the row indices, returning
True if the row should be skipped and False otherwise. An example of a valid callable
argument would be lambda x: x in [0, 2].
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine=’c’)
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA
values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’,
‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘N/A’, ‘NA’,
‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.
keep_default_na : bool, default True
Whether or not to include the default NaN values when parsing the data. Depending on
whether na_values is passed in, the behavior is as follows:
single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See the IO Tools docs for more information
on iterator and chunksize.
compression : {‘infer’, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’, None}, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and filepath_or_buffer is path-
like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, or ‘.xz’
(otherwise no decompression). If using ‘zip’, the ZIP file must contain only one data
file to be read in. Set to None for no decompression.
New in version 0.18.1: support for ‘zip’ and ‘xz’ compression.
thousands : str, default None
Thousands separator
decimal : str, default ‘.’
Character to recognize as decimal point (e.g. use ‘,’ for European data).
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values. The op-
tions are None for the ordinary converter, high for the high-precision converter, and
round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted items can
include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not
to interpret two consecutive quotechar elements INSIDE a field as a single quotechar
element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of a line, the
line will be ignored altogether. This parameter must be a single character. Like empty
lines (as long as skip_blank_lines=True), fully commented lines are ignored
by the parameter header but not by skiprows. For example, if comment='#', parsing
#empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being treated as the
header.
encoding : str, default None
Encoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python standard
encodings
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the following pa-
rameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
tupleize_cols : boolean, default False
Deprecated since version 0.21.0: This argument will be removed and will always con-
vert to MultiIndex
Leave a list of tuples on columns as is (default is to convert to a MultiIndex on the
columns)
error_bad_lines : boolean, default True
Lines with too many fields (e.g. a csv line with too many commas) will by default cause
an exception to be raised, and no DataFrame will be returned. If False, then these “bad
lines” will be dropped from the DataFrame that is returned.
warn_bad_lines : boolean, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each “bad line”
will be output.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but
possibly mixed type inference. To ensure no mixed types either set False, or specify the
type with the dtype parameter. Note that the entire file is read into a single DataFrame
regardless, use the chunksize or iterator parameter to return the data in chunks. (Only
valid with C parser)
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object directly onto memory
and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
Returns
result [DataFrame or TextParser]
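As a hedged illustration of several of the parameters documented above (the file name and column names are hypothetical):

>>> df = pd.read_csv('sales.csv',
...                  dtype={'store_id': str},      # keep leading zeros in identifiers
...                  na_values=['n.a.', '--'],     # extra strings to treat as NaN
...                  comment='#',                  # ignore the rest of a line after '#'
...                  parse_dates=['date'])         # parse the hypothetical 'date' column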
34.1.2.3 pandas.read_fwf
Return a subset of the columns. If list-like, all elements must either be positional
(i.e. integer indices into the document columns) or strings that correspond to col-
umn names provided either by the user in names or inferred from the document
header row(s). For example, a valid list-like usecols parameter would be [0, 1, 2]
or [‘foo’, ‘bar’, ‘baz’]. Element order is ignored, so usecols=[0, 1] is the
same as [1, 0]. To instantiate a DataFrame from data with element order pre-
served use pd.read_csv(data, usecols=['foo', 'bar'])[['foo',
'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data,
usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] or-
der.
If callable, the callable function will be evaluated against the column names, returning
names where the callable function evaluates to True. An example of a valid callable ar-
gument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD'].
Using this parameter results in much faster parsing time and lower memory usage.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
prefix : str, default None
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, . . .
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, . . . ’X.N’, rather than ‘X’. . . ’X’. Pass-
ing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32} Use str or object to-
gether with suitable na_values settings to preserve and not interpret dtype. If converters
are specified, they will be applied INSTEAD of dtype conversion.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers
or column labels
true_values : list, default None
Values to consider as True
false_values : list, default None
Values to consider as False
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer or callable, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
If callable, the callable function will be evaluated against the row indices, returning
True if the row should be skipped and False otherwise. An example of a valid callable
argument would be lambda x: x in [0, 2].
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine=’c’)
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA
values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’,
‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘N/A’, ‘NA’,
‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.
keep_default_na : bool, default True
Whether or not to include the default NaN values when parsing the data. Depending on
whether na_values is passed in, the behavior is as follows:
• If keep_default_na is True, and na_values are specified, na_values is appended to the
default NaN values used for parsing.
• If keep_default_na is True, and na_values are not specified, only the default NaN
values are used for parsing.
• If keep_default_na is False, and na_values are specified, only the NaN values speci-
fied na_values are used for parsing.
• If keep_default_na is False, and na_values are not specified, no strings will be parsed
as NaN.
Note that if na_filter is passed in as False, the keep_default_na and na_values parame-
ters will be ignored.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In data with-
out any NAs, passing na_filter=False can improve the performance of reading a large
file
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values
parse_dates : boolean or list of ints or names or list of lists or dict, default False
• boolean. If True -> try parsing the index.
• list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate
date column.
• list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
• dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
If a column or index contains an unparseable date, the entire column or index will be
returned unaltered as an object data type. For non-standard datetime parsing, use pd.
to_datetime after pd.read_csv
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled, pandas will attempt to infer the format of the date-
time strings in the columns, and if it can be inferred, switch to a faster method of parsing
them. In some cases this can increase the parsing speed by 5-10x.
Returns
result [DataFrame or TextParser]
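read_fwf shares most of these parameters with read_csv but reads fixed-width fields rather than delimited ones. A small sketch, assuming a hypothetical file fixed.txt whose three columns are 6, 4 and 10 characters wide:

>>> df = pd.read_fwf('fixed.txt', widths=[6, 4, 10],
...                  names=['id', 'year', 'amount'])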
34.1.2.4 pandas.read_msgpack
34.1.3 Clipboard
34.1.3.1 pandas.read_clipboard
pandas.read_clipboard(sep=’\\s+’, **kwargs)
Read text from clipboard and pass to read_table. See read_table for the full argument list
Parameters sep : str, default ‘\s+’.
A string or regex delimiter. The default of ‘\s+’ denotes one or more whitespace
characters.
Returns
parsed [DataFrame]
34.1.4 Excel
read_excel(io[, sheet_name, header, names, . . . ]) Read an Excel table into a pandas DataFrame
ExcelFile.parse([sheet_name, header, names, Parse specified sheet(s) into a DataFrame
. . . ])
34.1.4.1 pandas.read_excel
Examples
The file can be read using the file name as string or an open file object:
>>> pd.read_excel('tmp.xlsx')
Name Value
0 string1 1
1 string2 2
2 string3 3
>>> pd.read_excel(open('tmp.xlsx','rb'))
Name Value
0 string1 1
1 string2 2
2 string3 3
Index and header can be specified via the index_col and header arguments
True, False, and NA values, and thousands separators have defaults, but can be explicitly specified, too. Supply
the values you would like as strings or lists of strings!
>>> pd.read_excel('tmp.xlsx',
... na_values=['string1', 'string2'])
Name Value
0 NaN 1
1 NaN 2
2 string3 3
Comment lines in the excel input file can be skipped using the comment kwarg
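For example, reusing the hypothetical tmp.xlsx file from above, any cell content after a ‘#’ would be ignored:

>>> pd.read_excel('tmp.xlsx', comment='#')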
34.1.4.2 pandas.ExcelFile.parse
34.1.5 JSON
34.1.5.1 pandas.read_json
See also:
DataFrame.to_json
Notes
Specific to orient='table', if a DataFrame with a literal Index name of index gets written with
to_json(), the subsequent read operation will incorrectly set the Index name to None. This is because
index is also used by DataFrame.to_json() to denote a missing Index name, and the subsequent
read_json() operation cannot distinguish between the two. The same limitation is encountered with a
MultiIndex and any names beginning with 'level_'.
Examples
>>> df.to_json(orient='split')
'{"columns":["col 1","col 2"],
"index":["row 1","row 2"],
"data":[["a","b"],["c","d"]]}'
>>> pd.read_json(_, orient='split')
col 1 col 2
row 1 a b
row 2 c d
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> pd.read_json(_, orient='index')
col 1 col 2
row 1 a b
row 2 c d
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not preserved
with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> pd.read_json(_, orient='records')
col 1 col 2
0 a b
1 c d
>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
{"name": "col 1", "type": "string"},
{"name": "col 2", "type": "string"}],
json_normalize(data[, record_path, meta, . . . ]) “Normalize” semi-structured JSON data into a flat table
build_table_schema(data[, index, . . . ]) Create a Table schema from data.
34.1.5.2 pandas.io.json.json_normalize
Examples
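A brief sketch of flattening nested records with json_normalize (the records below are made up):

>>> from pandas.io.json import json_normalize
>>> data = [{'id': 1, 'name': {'first': 'Ada', 'last': 'Lovelace'}},
...         {'id': 2, 'name': {'first': 'Alan', 'last': 'Turing'}}]
>>> json_normalize(data)   # nested keys become columns such as 'name.first', 'name.last'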
34.1.5.3 pandas.io.json.build_table_schema
Notes
See _as_json_table_type for conversion types. Timedeltas are converted to ISO8601 duration format with 9
decimal places after the seconds field for nanosecond precision.
Categoricals are converted to the any dtype, and use the enum field constraint to list the allowed values. The
ordered attribute is included in an ordered field.
Examples
>>> df = pd.DataFrame(
... {'A': [1, 2, 3],
... 'B': ['a', 'b', 'c'],
... 'C': pd.date_range('2016-01-01', freq='d', periods=3),
... }, index=pd.Index(range(3), name='idx'))
>>> build_table_schema(df)
{'fields': [{'name': 'idx', 'type': 'integer'},
{'name': 'A', 'type': 'integer'},
{'name': 'B', 'type': 'string'},
{'name': 'C', 'type': 'datetime'}],
'pandas_version': '0.20.0',
'primaryKey': ['idx']}
34.1.6 HTML
read_html(io[, match, flavor, header, . . . ]) Read HTML tables into a list of DataFrame ob-
jects.
34.1.6.1 pandas.read_html
the HTML is extremely simple you will probably need to pass a non-empty string here.
Defaults to ‘.+’ (match any non-empty string). The default value will return all tables
contained on a page. This value is converted to a regular expression so that there is
consistent behavior between Beautiful Soup and lxml.
flavor : str or None, container of strings
The parsing engine to use. ‘bs4’ and ‘html5lib’ are synonymous with each other; they
are both there for backwards compatibility. The default of None tries to use lxml to
parse and if that fails it falls back on bs4 + html5lib.
header : int or list-like or None, optional
The row (or list of rows for a MultiIndex) to use to make the columns headers.
index_col : int or list-like or None, optional
The column (or list of columns) to use to create the index.
skiprows : int or list-like or slice or None, optional
0-based. Number of rows to skip after parsing the column integer. If a sequence of
integers or a slice is given, will skip the rows indexed by that sequence. Note that a
single element sequence means ‘skip the nth row’ whereas an integer means ‘skip n
rows’.
attrs : dict or None, optional
This is a dictionary of attributes that you can pass to use to identify the table in the
HTML. These are not checked for validity before being passed to lxml or Beautiful
Soup. However, these attributes must be valid HTML table attributes to work correctly.
For example,
attrs = {'id': 'table'}
is a valid attribute dictionary because the ‘id’ HTML tag attribute is a valid HTML
attribute for any HTML tag as per this document.
attrs = {'asdf': 'table'}
is not a valid attribute dictionary because ‘asdf’ is not a valid HTML attribute even if
it is a valid XML attribute. Valid HTML 4.01 table attributes can be found here. A
working draft of the HTML 5 spec can be found here. It contains the latest information
on table attributes for the modern web.
parse_dates : bool, optional
See read_csv() for more details.
tupleize_cols : bool, optional
If False try to parse multiple header rows into a MultiIndex, otherwise return raw
tuples. Defaults to False.
Deprecated since version 0.21.0: This argument will be removed and will always con-
vert to MultiIndex
thousands : str, optional
Separator to use to parse thousands. Defaults to ','.
encoding : str or None, optional
The encoding used to decode the web page. Defaults to None. None preserves the
previous encoding behavior, which depends on the underlying parser library (e.g., the
parser library will try to use the encoding provided by the document).
decimal : str, default ‘.’
Character to recognize as decimal point (e.g. use ‘,’ for European data).
New in version 0.19.0.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers or
column labels, values are functions that take one input argument, the cell (not column)
content, and return the transformed content.
New in version 0.19.0.
na_values : iterable, default None
Custom NA values
New in version 0.19.0.
keep_default_na : bool, default True
If na_values are specified and keep_default_na is False the default NaN values are over-
ridden, otherwise they’re appended to
New in version 0.19.0.
displayed_only : bool, default True
Whether elements with “display: none” should be parsed
New in version 0.23.0.
Returns
dfs [list of DataFrames]
See also:
pandas.read_csv
Notes
Before using this function you should read the gotchas about the HTML parsing libraries.
Expect to do some cleanup after you call this function. For example, you might need to manually assign column
names if the column names are converted to NaN when you pass the header=0 argument. We try to assume as
little as possible about the structure of the table and push the idiosyncrasies of the HTML contained in the table
to the user.
This function searches for <table> elements and only for <tr> and <th> rows and <td> elements within
each <tr> or <th> element in the table. <td> stands for “table data”.
Similar to read_csv() the header argument is applied after skiprows is applied.
This function will always return a list of DataFrame or it will fail, e.g., it will not return an empty list.
Examples
See the read_html documentation in the IO section of the docs for some examples of reading in HTML tables.
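As a hedged sketch, read_html also accepts a raw HTML string, which avoids any network access (an HTML parser such as lxml or bs4 + html5lib must be installed):

>>> html = '''<table>
... <tr><th>a</th><th>b</th></tr>
... <tr><td>1</td><td>2</td></tr>
... </table>'''
>>> dfs = pd.read_html(html, header=0)
>>> dfs[0]
   a  b
0  1  2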
read_hdf(path_or_buf[, key, mode]) Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, append]) Store object in HDFStore
HDFStore.append(key, value[, format, . . . ]) Append to Table in file.
HDFStore.get(key) Retrieve pandas object stored in file
HDFStore.select(key[, where, start, stop, . . . ]) Retrieve pandas object stored in file, optionally based
on where criteria
HDFStore.info() print detailed information on the store
HDFStore.keys() Return a (potentially unordered) list of the keys corre-
sponding to the objects stored in the HDFStore.
34.1.7.1 pandas.read_hdf
Examples
34.1.7.2 pandas.HDFStore.put
34.1.7.3 pandas.HDFStore.append
Notes
Does not check if data being appended overlaps with existing data in the table, so be careful
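A minimal sketch of the HDFStore workflow (requires PyTables; the path and frames here are hypothetical):

>>> store = pd.HDFStore('store.h5')
>>> store.put('df', df, format='table')   # 'table' format supports appending and querying
>>> store.append('df', df_more)           # df_more must have the same columns as df
>>> subset = store.select('df', where='index > 5')
>>> store.close()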
34.1.7.4 pandas.HDFStore.get
HDFStore.get(key)
Retrieve pandas object stored in file
Parameters
key [object]
Returns
obj [type of object stored in file]
34.1.7.5 pandas.HDFStore.select
34.1.7.6 pandas.HDFStore.info
HDFStore.info()
print detailed information on the store
New in version 0.21.0.
34.1.7.7 pandas.HDFStore.keys
HDFStore.keys()
Return a (potentially unordered) list of the keys corresponding to the objects stored in the HDFStore. These are
ABSOLUTE path-names (e.g. have the leading ‘/’).
34.1.8 Feather
34.1.8.1 pandas.read_feather
pandas.read_feather(path, nthreads=1)
Load a feather-format object from the file path
Parameters
path [string file path, or file-like object]
34.1.9 Parquet
read_parquet(path[, engine, columns]) Load a parquet object from the file path, returning a
DataFrame.
34.1.9.1 pandas.read_parquet
Returns
DataFrame
34.1.10 SAS
34.1.10.1 pandas.read_sas
34.1.11 SQL
34.1.12.1 pandas.read_gbq
See also:
34.1.13 STATA
34.1.13.1 pandas.read_stata
Examples
34.1.13.2 pandas.io.stata.StataReader.data
StataReader.data(**kwargs)
Reads observations from Stata file, converting them into a dataframe
Deprecated: this is a legacy method. Use read in new code.
34.1.13.3 pandas.io.stata.StataReader.data_label
StataReader.data_label()
Returns data label of Stata file
34.1.13.4 pandas.io.stata.StataReader.value_labels
StataReader.value_labels()
Returns a dict that associates each variable name with a dict mapping each value to its corresponding label.
34.1.13.5 pandas.io.stata.StataReader.variable_labels
StataReader.variable_labels()
Returns variable labels as a dict, associating each variable name with its corresponding label.
34.1.13.6 pandas.io.stata.StataWriter.write_file
StataWriter.write_file()
melt(frame[, id_vars, value_vars, var_name, . . . ]) “Unpivots” a DataFrame from wide format to long for-
mat, optionally leaving identifier variables set.
pivot(index, columns, values) Produce ‘pivot’ table based on 3 columns of this
DataFrame.
pivot_table(data[, values, index, columns, . . . ]) Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, . . . ]) Compute a simple cross-tabulation of two (or more) fac-
tors.
cut(x, bins[, right, labels, retbins, . . . ]) Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, . . . ]) Quantile-based discretization function.
merge(left, right[, how, on, left_on, . . . ]) Merge DataFrame objects by performing a database-
style join operation by columns or indexes.
merge_ordered(left, right[, on, left_on, . . . ]) Perform merge with optional filling/interpolation de-
signed for ordered data like time series data.
merge_asof(left, right[, on, left_on, . . . ]) Perform an asof merge.
concat(objs[, axis, join, join_axes, . . . ]) Concatenate pandas objects along a particular axis with
optional set logic along the other axes.
get_dummies(data[, prefix, prefix_sep, . . . ]) Convert categorical variable into dummy/indicator vari-
ables
factorize(values[, sort, order, . . . ]) Encode the object as an enumerated type or categorical
variable.
unique(values) Hash table-based unique.
wide_to_long(df, stubnames, i, j[, sep, suffix]) Wide panel to long format.
34.2.1.1 pandas.melt
Examples
34.2.1.2 pandas.pivot
See also:
DataFrame.pivot_table generalization of pivot that can handle duplicate values for one index/column
pair
Notes
Obviously, all 3 of the input arguments must have the same length
34.2.1.3 pandas.pivot_table
Examples
34.2.1.4 pandas.crosstab
Notes
Any Series passed will have their name attributes used unless row or column names for the cross-tabulation are
specified.
Any input passed containing Categorical data will have all of its categories included in the cross-tabulation,
even if the actual data does not contain any instances of a particular category.
In the event that there aren’t overlapping indexes an empty DataFrame will be returned.
Examples
34.2.1.5 pandas.cut
• int : Defines the number of equal-width bins in the range of x. The range of x is
extended by .1% on each side to include the minimum and maximum values of x.
• sequence of scalars : Defines the bin edges allowing for non-uniform width. No
extension of the range of x is done.
• IntervalIndex : Defines the exact bins to be used.
right : bool, default True
Indicates whether bins includes the rightmost edge or not. If right == True (the
default), then the bins [1, 2, 3, 4] indicate (1,2], (2,3], (3,4]. This argument is
ignored when bins is an IntervalIndex.
labels : array or bool, optional
Specifies the labels for the returned bins. Must be the same length as the resulting bins.
If False, returns only integer indicators of the bins. This affects the type of the output
container (see below). This argument is ignored when bins is an IntervalIndex.
retbins : bool, default False
Whether to return the bins or not. Useful when bins is provided as a scalar.
precision : int, default 3
The precision at which to store and display the bins labels.
include_lowest : bool, default False
Whether the first interval should be left-inclusive or not.
duplicates : {default ‘raise’, ‘drop’}, optional
If bin edges are not unique, raise ValueError or drop non-uniques.
New in version 0.23.0.
Returns out : pandas.Categorical, Series, or ndarray
An array-like object representing the respective bin for each value of x. The type de-
pends on the value of labels.
• True (default) : returns a Series for Series x or a pandas.Categorical for all other
inputs. The values stored within are Interval dtype.
• sequence of scalars : returns a Series for Series x or a pandas.Categorical for all other
inputs. The values stored within are whatever the type in the sequence is.
• False : returns an ndarray of integers.
bins : numpy.ndarray or IntervalIndex.
The computed or specified bins. Only returned when retbins=True. For scalar or se-
quence bins, this is an ndarray with the computed bins. If set duplicates=drop, bins will
drop non-unique bin. For an IntervalIndex bins, this is equal to bins.
See also:
qcut Discretize variable into equal-sized buckets based on rank or based on sample quantiles.
pandas.Categorical Array type for storing data that come from a fixed set of values.
Series One-dimensional array with axis labels (including time series).
pandas.IntervalIndex Immutable Index implementing an ordered, sliceable set.
Notes
Any NA values will be NA in the result. Out of bounds values will be NA in the resulting Series or pan-
das.Categorical object.
Examples
Discovers the same bins, but assigns them specific labels. Notice that the returned Categorical’s categories are
labels and that it is ordered.
Passing a Series as input returns a Series with the mapped values; it can be used to map values numerically to
intervals based on bins.
Passing an IntervalIndex for bins results in those categories exactly. Notice that values not covered by the
IntervalIndex are set to NaN. 0 is to the left of the first bin (which is closed on the right), and 1.5 falls between
two bins.
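A small sketch of the behavior described above, binning a list into three equal-width, labelled intervals:

>>> pd.cut([1, 7, 5, 4, 6, 3], bins=3, labels=['low', 'mid', 'high'])
[low, high, mid, mid, high, low]
Categories (3, object): [low < mid < high]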
34.2.1.6 pandas.qcut
Notes
Examples
>>> pd.qcut(range(5), 4)
...
[(-0.001, 1.0], (-0.001, 1.0], (1.0, 2.0], (2.0, 3.0], (3.0, 4.0]]
Categories (4, interval[float64]): [(-0.001, 1.0] < (1.0, 2.0] ...
34.2.1.7 pandas.merge
• inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the
order of the left keys
on : label or list
Column or index level names to join on. These must be found in both DataFrames. If on
is None and not merging on indexes then this defaults to the intersection of the columns
in both DataFrames.
left_on : label or list, or array-like
Column or index level names to join on in the left DataFrame. Can also be an array or
list of arrays of the length of the left DataFrame. These arrays are treated as if they are
columns.
right_on : label or list, or array-like
Column or index level names to join on in the right DataFrame. Can also be an array or
list of arrays of the length of the right DataFrame. These arrays are treated as if they are
columns.
left_index : boolean, default False
Use the index from the left DataFrame as the join key(s). If it is a MultiIndex, the
number of keys in the other DataFrame (either the index or a number of columns) must
match the number of levels
right_index : boolean, default False
Use the index from the right DataFrame as the join key. Same caveats as left_index
sort : boolean, default False
Sort the join keys lexicographically in the result DataFrame. If False, the order of the
join keys depends on the join type (how keyword)
suffixes : 2-length sequence (tuple, list, . . . )
Suffix to apply to overlapping column names in the left and right side, respectively
copy : boolean, default True
If False, do not copy data unnecessarily
indicator : boolean or string, default False
If True, adds a column to output DataFrame called “_merge” with information on the
source of each row. If string, column with information on source of each row will be
added to output DataFrame, and column will be named value of string. Information
column is Categorical-type and takes on a value of “left_only” for observations whose
merge key only appears in ‘left’ DataFrame, “right_only” for observations whose merge
key only appears in ‘right’ DataFrame, and “both” if the observation’s merge key is
found in both.
validate : string, default None
If specified, checks if merge is of specified type.
• “one_to_one” or “1:1”: check if merge keys are unique in both left and right datasets.
• “one_to_many” or “1:m”: check if merge keys are unique in left dataset.
• “many_to_one” or “m:1”: check if merge keys are unique in right dataset.
• “many_to_many” or “m:m”: allowed, but does not result in checks.
Notes
Support for specifying index levels as the on, left_on, and right_on parameters was added in version 0.23.0
Examples
>>> A >>> B
lkey value rkey value
0 foo 1 0 foo 5
1 bar 2 1 bar 6
2 baz 3 2 qux 7
3 foo 4 3 bar 8
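A hedged sketch of joining the two frames shown above on their key columns; the overlapping value columns would be suffixed '_l' and '_r', and the indicator column records where each row came from:

>>> pd.merge(A, B, left_on='lkey', right_on='rkey',
...          how='outer', suffixes=('_l', '_r'), indicator=True)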
34.2.1.8 pandas.merge_ordered
Group left DataFrame by group columns and merge piece by piece with right DataFrame
right_by : column name or list of column names
Group right DataFrame by group columns and merge piece by piece with left DataFrame
fill_method : {‘ffill’, None}, default None
Interpolation method for data
suffixes : 2-length sequence (tuple, list, . . . )
Suffix to apply to overlapping column names in the left and right side, respectively
how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘outer’
• left: use only keys from left frame (SQL: left outer join)
• right: use only keys from right frame (SQL: right outer join)
• outer: use union of keys from both frames (SQL: full outer join)
• inner: use intersection of keys from both frames (SQL: inner join)
New in version 0.19.0.
Returns merged : DataFrame
The output type will be the same as ‘left’, if it is a subclass of DataFrame.
See also:
merge, merge_asof
Examples
>>> A >>> B
key lvalue group key rvalue
0 a 1 a 0 b 1
1 c 2 a 1 c 2
2 e 3 a 2 d 3
3 a 1 b
4 c 2 b
5 e 3 b
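A sketch of the call the frames above are meant to illustrate, merging on the key column and forward-filling within each group of the left frame:

>>> pd.merge_ordered(A, B, fill_method='ffill', left_by='group')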
34.2.1.9 pandas.merge_asof
Examples
>>> quotes
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
>>> trades
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
We only asof within 2ms between the quote time and the trade time
We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time.
However prior data will propagate forward
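Hedged sketches of the two calls described above, using the trades and quotes frames shown earlier:

>>> pd.merge_asof(trades, quotes,
...               on='time', by='ticker',
...               tolerance=pd.Timedelta('2ms'))

>>> pd.merge_asof(trades, quotes,
...               on='time', by='ticker',
...               tolerance=pd.Timedelta('10ms'),
...               allow_exact_matches=False)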
34.2.1.10 pandas.concat
When concatenating all Series along the index (axis=0), a Series is returned.
When objs contains at least one DataFrame, a DataFrame is returned. When
concatenating along the columns (axis=1), a DataFrame is returned.
See also:
Series.append, DataFrame.append, DataFrame.join, DataFrame.merge
Notes
Examples
Clear the existing index and reset it in the result by setting the ignore_index option to True.
>>> pd.concat([s1, s2], ignore_index=True)
0 a
1 b
2 c
3 d
dtype: object
Add a hierarchical index at the outermost level of the data with the keys option.
>>> pd.concat([s1, s2], keys=['s1', 's2',])
s1 0 a
1 b
s2 0 c
1 d
dtype: object
Label the index keys you create with the names option.
>>> pd.concat([s1, s2], keys=['s1', 's2'],
... names=['Series name', 'Row ID'])
Series name Row ID
s1 0 a
1 b
s2 0 c
1 d
dtype: object
Combine DataFrame objects with overlapping columns and return everything. Columns outside the intersec-
tion will be filled with NaN values.
Combine DataFrame objects with overlapping columns and return only those that are shared by passing
inner to the join keyword argument.
Prevent the result from including duplicate index values with the verify_integrity option.
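A small sketch of the inner-join behavior described above (df1 and df3 are made up here; only the columns they share are kept):

>>> df1 = pd.DataFrame([['a', 1], ['b', 2]], columns=['letter', 'number'])
>>> df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']],
...                    columns=['letter', 'number', 'animal'])
>>> pd.concat([df1, df3], join='inner')
  letter  number
0      a       1
1      b       2
0      c       3
1      d       4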
34.2.1.11 pandas.get_dummies
See also:
Series.str.get_dummies
Examples
>>> pd.get_dummies(s)
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
>>> pd.get_dummies(s1)
a b
0 1 0
1 0 1
2 0 0
>>> pd.get_dummies(pd.Series(list('abcaa')))
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
34.2.1.12 pandas.factorize
Note: Even if there’s a missing value in values, uniques will not contain an entry for
it.
See also:
Examples
These examples all show factorize as a top-level method like pd.factorize(values). The results are
identical for methods like Series.factorize().
With sort=True, the uniques will be sorted, and labels will be shuffled so that the relationship is
maintained.
Missing values are indicated in labels with na_sentinel (-1 by default). Note that missing values are never
included in uniques.
Thus far, we’ve only factorized lists (which are internally coerced to NumPy arrays). When factorizing pandas
objects, the type of uniques will differ. For Categoricals, a Categorical is returned.
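A minimal sketch of the default behavior:

>>> labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> labels
array([0, 0, 1, 2, 0])
>>> uniques
array(['b', 'a', 'c'], dtype=object)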
34.2.1.13 pandas.unique
pandas.unique(values)
Hash table-based unique. Uniques are returned in order of appearance. This does NOT sort.
Significantly faster than numpy.unique. Includes NA values.
Parameters
values [1d array-like]
Returns unique values.
• If the input is an Index, the return is an Index
• If the input is a Categorical dtype, the return is a Categorical
• If the input is a Series/ndarray, the return will be an ndarray
See also:
pandas.Index.unique, pandas.Series.unique
Examples
>>> pd.unique(Series([pd.Timestamp('20160101'),
... pd.Timestamp('20160101')]))
array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
>>> pd.unique(list('baabc'))
array(['b', 'a', 'c'], dtype=object)
>>> pd.unique(Series(pd.Categorical(list('baabc'))))
[b, a, c]
Categories (3, object): [b, a, c]
>>> pd.unique(Series(pd.Categorical(list('baabc'),
... categories=list('abc'))))
[b, a, c]
Categories (3, object): [b, a, c]
>>> pd.unique(Series(pd.Categorical(list('baabc'),
...                                 categories=list('abc'),
...                                 ordered=True)))
[b, a, c]
Categories (3, object): [a < b < c]
An array of tuples
34.2.1.14 pandas.wide_to_long
A DataFrame that contains each stub name as a variable, with new index (i, j)
Notes
All extra variables are left untouched. This simply uses pandas.melt under the hood, but is hard-coded to “do
the right thing” in a typical case.
Examples
Going from long back to wide just takes some creative use of unstack
>>> w = l.unstack()
>>> w.columns = w.columns.map('{0[0]}{0[1]}'.format)
>>> w.reset_index()
famid birth ht1 ht2
0 1 1 2.8 3.4
1 1 2 2.9 3.8
2 1 3 2.2 2.9
3 2 1 2.0 3.2
4 2 2 1.8 2.8
5 2 3 1.9 2.4
6 3 1 2.2 3.3
7 3 2 2.3 3.4
8 3 3 2.1 2.9
>>> np.random.seed(0)
>>> df = pd.DataFrame({'A(quarterly)-2010': np.random.rand(3),
... 'A(quarterly)-2011': np.random.rand(3),
... 'B(quarterly)-2010': np.random.rand(3),
... 'B(quarterly)-2011': np.random.rand(3),
... 'X' : np.random.randint(3, size=3)})
>>> df['id'] = df.index
>>> df
A(quarterly)-2010 A(quarterly)-2011 B(quarterly)-2010 ...
0 0.548814 0.544883 0.437587 ...
1 0.715189 0.423655 0.891773 ...
2 0.602763 0.645894 0.963663 ...
X id
0 0 0
1 1 1
2 1 2
If we have many columns, we could also use a regex to find our stubnames and pass that list on to wide_to_long
>>> stubnames = sorted(
... set([match[0] for match in df.columns.str.findall(
... r'[A-B]\(.*\)').values if match != [] ])
... )
>>> list(stubnames)
['A(quarterly)', 'B(quarterly)']
All of the above examples have integers as suffixes. It is possible to have non-integers as suffixes.
>>> df = pd.DataFrame({
... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
... 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
... 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
... })
>>> df
birth famid ht_one ht_two
0 1 1 2.8 3.4
1 2 1 2.9 3.8
2 3 1 2.2 2.9
3 1 2 2.0 3.2
4 2 2 1.8 2.8
5 3 2 1.9 2.4
6 1 3 2.2 3.3
7 2 3 2.3 3.4
8 3 3 2.1 2.9
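A hedged sketch of the corresponding call, telling wide_to_long that the stub is separated from the suffix by '_' and that the suffix is a non-numeric word character:

>>> pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age',
...                 sep='_', suffix='\w')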
34.2.2.1 pandas.isna
pandas.isna(obj)
Detect missing values for an array-like object.
This function takes a scalar or array-like object and indicates whether values are missing (NaN in numeric
arrays, None or NaN in object arrays, NaT in datetimelike).
Parameters obj : scalar or array-like
Object to check for null or missing values.
Returns bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean
indicating whether each corresponding element is missing.
See also:
Examples
>>> pd.isna('dog')
False
>>> pd.isna(np.nan)
True
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.isna(df)
0 1 2
0 False False False
1 False True False
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool
34.2.2.2 pandas.isnull
pandas.isnull(obj)
Detect missing values for an array-like object.
This function takes a scalar or array-like object and indicates whether values are missing (NaN in numeric
arrays, None or NaN in object arrays, NaT in datetimelike).
Parameters obj : scalar or array-like
Object to check for null or missing values.
Returns bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean
indicating whether each corresponding element is missing.
See also:
Examples
>>> pd.isna('dog')
False
>>> pd.isna(np.nan)
True
For Series and DataFrame, the same type is returned, containing booleans.
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool
34.2.2.3 pandas.notna
pandas.notna(obj)
Detect non-missing values for an array-like object.
This function takes a scalar or array-like object and indicates whether values are valid (not missing, which is
NaN in numeric arrays, None or NaN in object arrays, NaT in datetimelike).
Parameters obj : array-like or object value
Object to check for not null or non-missing values.
Returns bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean
indicating whether each corresponding element is valid.
See also:
Examples
>>> pd.notna('dog')
True
>>> pd.notna(np.nan)
False
For Series and DataFrame, the same type is returned, containing booleans.
>>> pd.notna(df[1])
0 True
1 False
Name: 1, dtype: bool
34.2.2.4 pandas.notnull
pandas.notnull(obj)
Detect non-missing values for an array-like object.
This function takes a scalar or array-like object and indicates whether values are valid (not missing, which is
NaN in numeric arrays, None or NaN in object arrays, NaT in datetimelike).
Parameters obj : array-like or object value
Object to check for not null or non-missing values.
Returns bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean
indicating whether each corresponding element is valid.
See also:
Examples
>>> pd.notna('dog')
True
>>> pd.notna(np.nan)
False
For Series and DataFrame, the same type is returned, containing booleans.
>>> pd.notna(df[1])
0 True
1 False
Name: 1, dtype: bool
34.2.3.1 pandas.to_numeric
Examples
34.2.4.1 pandas.to_datetime
See also:
Examples
Assembling a datetime from multiple columns of a DataFrame. The keys can be common abbreviations like
[‘year’, ‘month’, ‘day’, ‘minute’, ‘second’, ‘ms’, ‘us’, ‘ns’] or plurals of the same.
If a date does not meet the timestamp limitations, passing errors=’ignore’ will return the original input instead
of raising any exception.
Passing errors=’coerce’ will force an out-of-bounds date to NaT, in addition to forcing non-dates (or non-
parseable dates) to NaT.
Passing infer_datetime_format=True can often speed up parsing if the strings are not exactly in ISO8601
format but follow a regular format.
>>> s.head()
0 3/11/2000
1 3/12/2000
2 3/13/2000
3 3/11/2000
4 3/12/2000
dtype: object
Warning: For float arg, precision rounding might happen. To prevent unexpected behavior use a fixed-
width exact type.
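Two small sketches of the behaviors mentioned above:

>>> pd.to_datetime('3/11/2000', format='%m/%d/%Y')
Timestamp('2000-03-11 00:00:00')
>>> pd.to_datetime('2018-13-01', errors='coerce')   # unparseable month, coerced to NaT
NaT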
34.2.4.2 pandas.to_timedelta
Examples
34.2.4.3 pandas.date_range
Notes
Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omitted,
the resulting DatetimeIndex will have periods linearly spaced elements between start and end (closed
on both sides).
To learn more about the frequency strings, please see this link.
Examples
Specify start, end, and periods; the frequency is generated automatically (linearly spaced).
Other Parameters
Changed the freq (frequency) to 'M' (month end frequency).
closed controls whether to include start and end that are on the boundary. The default includes boundary points
on either end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', closed=None)
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
34.2.4.4 pandas.bdate_range
Notes
Of the four parameters: start, end, periods, and freq, exactly three must be specified. Specifying freq
is a requirement for bdate_range. Use date_range if specifying freq is not desired.
To learn more about the frequency strings, please see this link.
34.2.4.5 pandas.period_range
Notes
Of the three parameters: start, end, and periods, exactly two must be specified.
To learn more about the frequency strings, please see this link.
Examples
If start or end are Period objects, they will be used as anchor endpoints for a PeriodIndex with
frequency matching that of the period_range constructor.
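For instance, a minimal sketch with a monthly frequency:

>>> pd.period_range(start='2017-01-01', end='2017-05-01', freq='M')
PeriodIndex(['2017-01', '2017-02', '2017-03', '2017-04', '2017-05'],
            dtype='period[M]', freq='M')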
34.2.4.6 pandas.timedelta_range
Returns
rng [TimedeltaIndex]
Notes
Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omit-
ted, the resulting TimedeltaIndex will have periods linearly spaced elements between start and end
(closed on both sides).
To learn more about the frequency strings, please see this link.
Examples
The closed parameter specifies which endpoint is included. The default behavior is to include both endpoints.
The freq parameter specifies the frequency of the TimedeltaIndex. Only fixed frequencies can be passed,
non-fixed frequencies such as ‘M’ (month end) will raise.
Specify start, end, and periods; the frequency is generated automatically (linearly spaced).
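For instance, a minimal sketch using the default daily frequency:

>>> pd.timedelta_range(start='1 day', periods=4)
TimedeltaIndex(['1 days', '2 days', '3 days', '4 days'],
               dtype='timedelta64[ns]', freq='D')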
34.2.4.7 pandas.infer_freq
pandas.infer_freq(index, warn=True)
Infer the most likely frequency given the input index. If the frequency is uncertain, a warning will be printed.
Parameters index : DatetimeIndex or TimedeltaIndex
if passed a Series will use the values of the series (NOT THE INDEX)
34.2.5.1 pandas.interval_range
IntervalIndex an Index of intervals that are all closed on the same side.
Notes
Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omit-
ted, the resulting IntervalIndex will have periods linearly spaced elements between start and end,
inclusively.
To learn more about datetime-like frequency strings, please see this link.
Examples
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
...                   end=pd.Timestamp('2017-01-04'))
IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03],
               (2017-01-03, 2017-01-04]]
              closed='right', dtype='interval[datetime64[ns]]')
The freq parameter specifies the frequency between the left and right endpoints of the individual intervals
within the IntervalIndex. For numeric start and end, the frequency must also be numeric.
Similarly, for datetime-like start and end, the frequency must be convertible to a DateOffset.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
...                   periods=3, freq='MS')
IntervalIndex([(2017-01-01, 2017-02-01], (2017-02-01, 2017-03-01],
               (2017-03-01, 2017-04-01]]
              closed='right', dtype='interval[datetime64[ns]]')
Specify start, end, and periods; the frequency is generated automatically (linearly spaced).
The closed parameter specifies which endpoints of the individual intervals within the IntervalIndex are
closed.
eval(expr[, parser, engine, truediv, . . . ]) Evaluate a Python expression as a string using various
backends.
34.2.6.1 pandas.eval
Python expressions.
parser : string, default ‘pandas’, {‘pandas’, ‘python’}
The parser to use to construct the syntax tree from the expression. The default of
'pandas' parses code slightly different than standard Python. Alternatively, you can
parse an expression using the 'python' parser to retain strict Python semantics. See
the enhancing performance documentation for more details.
engine : string or None, default ‘numexpr’, {‘python’, ‘numexpr’}
The engine used to evaluate the expression. Supported engines are
• None : tries to use numexpr, falls back to python
• 'numexpr': This default engine evaluates pandas objects using numexpr for
large speed ups in complex expressions with large frames.
• 'python': Performs operations as if you had eval’d in top level python. This
engine is generally not that useful.
More backends may be available in the future.
truediv : bool, optional
Whether to use true division, like in Python >= 3
local_dict : dict or None, optional
A dictionary of local variables, taken from locals() by default.
global_dict : dict or None, optional
A dictionary of global variables, taken from globals() by default.
resolvers : list of dict-like or None, optional
A list of objects implementing the __getitem__ special method that you can use
to inject an additional collection of namespaces to use for variable lookup. For ex-
ample, this is used in the query() method to inject the DataFrame.index and
DataFrame.columns variables that refer to their respective DataFrame instance
attributes.
level : int, optional
The number of prior stack frames to traverse and add to the current scope. Most users
will not need to change this parameter.
target : object, optional, default None
This is the target object for assignment. It is used when there is variable assignment in
the expression. If so, then target must support item assignment with string keys, and if
a copy is being returned, it must also support .copy().
inplace : bool, default False
If target is provided, and the expression mutates target, whether to modify target in-
place. Otherwise, return a copy of target with the mutation.
Returns
ndarray, numeric scalar, DataFrame, Series
Raises ValueError
There are many instances where such an error can be raised:
Notes
The dtype of any objects involved in an arithmetic % operation are recursively cast to float64.
See the enhancing performance documentation for more details.
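A brief sketch of evaluating an expression over frames picked up from the calling scope (assumes the default numexpr engine is available; engine='python' works as well):

>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [10, 20], 'b': [30, 40]})
>>> pd.eval('df1 + df2')
    a   b
0  11  33
1  22  44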
34.2.7 Testing
test([extra_args])
34.2.7.1 pandas.test
pandas.test(extra_args=None)
34.3 Series
34.3.1 Constructor
Series([data, index, dtype, name, copy, . . . ]) One-dimensional ndarray with axis labels (including
time series).
34.3.1.1 pandas.Series
same length. The result index will be the sorted union of the two indexes.
Parameters data : array-like, dict, or scalar value
Contains data stored in Series
Changed in version 0.23.0: If data is a dict, argument order is maintained for Python
3.6 and later.
index : array-like or Index (1d)
Values must be hashable and have the same length as data. Non-unique index values
are allowed. Will default to RangeIndex (0, 1, 2, . . . , n) if not provided. If both a dict
and index sequence are used, the index will override the keys found in the dict.
dtype : numpy.dtype or None
If None, dtype will be inferred
copy : boolean, default False
Copy input data
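A small construction sketch:

>>> pd.Series([1, 2, 3], index=['a', 'b', 'c'])
a    1
b    2
c    3
dtype: int64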
Attributes
pandas.Series.T
Series.T
return the transpose, which is by definition self
pandas.Series.asobject
Series.asobject
Return object Series which contains boxed values.
Deprecated since version 0.23.0: Use astype(object) instead.
This is an internal, non-public method.
pandas.Series.at
Series.at
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use at if you only need to get or set a single
value in a DataFrame or Series.
Raises KeyError
When label does not exist in DataFrame
See also:
Examples
>>> df.loc[5].at['B']
4
pandas.Series.axes
Series.axes
Return a list of the row axis labels
pandas.Series.base
Series.base
return the base object if the memory of the underlying data is shared
pandas.Series.blocks
Series.blocks
Internal property, property synonym for as_blocks()
Deprecated since version 0.21.0.
pandas.Series.data
Series.data
return the data pointer of the underlying data
pandas.Series.dtype
Series.dtype
return the dtype object of the underlying data
pandas.Series.dtypes
Series.dtypes
return the dtype object of the underlying data
pandas.Series.flags
Series.flags
pandas.Series.ftype
Series.ftype
return if the data is sparse|dense
pandas.Series.ftypes
Series.ftypes
return if the data is sparse|dense
pandas.Series.hasnans
Series.hasnans
return if I have any nans; enables various perf speedups
pandas.Series.iat
Series.iat
Access a single value for a row/column pair by integer position.
Similar to iloc, in that both provide integer-based lookups. Use iat if you only need to get or set a
single value in a DataFrame or Series.
Raises IndexError
When integer position is out of bounds
See also:
Examples
>>> df.iat[1, 2]
1
>>> df.iat[1, 2] = 10
>>> df.iat[1, 2]
10
>>> df.loc[0].iat[1]
2
pandas.Series.iloc
Series.iloc
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used
with a boolean array.
Allowed inputs are:
• An integer, e.g. 5.
• A list or array of integers, e.g. [4, 3, 0].
• A slice object with ints, e.g. 1:7.
• A boolean array.
• A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which allow
out-of-bounds indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position
pandas.Series.index
Series.index
The index (axis labels) of the Series.
pandas.Series.is_monotonic
Series.is_monotonic
Return boolean if values in the object are monotonic_increasing
New in version 0.19.0.
Returns
is_monotonic [boolean]
pandas.Series.is_monotonic_decreasing
Series.is_monotonic_decreasing
Return boolean if values in the object are monotonic_decreasing
New in version 0.19.0.
Returns
is_monotonic_decreasing [boolean]
pandas.Series.is_monotonic_increasing
Series.is_monotonic_increasing
Return boolean if values in the object are monotonic_increasing
New in version 0.19.0.
Returns
is_monotonic [boolean]
pandas.Series.is_unique
Series.is_unique
Return boolean if values in the object are unique
Returns
is_unique [boolean]
pandas.Series.itemsize
Series.itemsize
return the size of the dtype of the item of the underlying data
pandas.Series.ix
Series.ix
A primarily label-location based indexer, with integer position fallback.
Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
.ix[] supports mixed integer and label based access. It is primarily label based, but will fall back to
integer positional access unless the corresponding axis is of integer type.
.ix is the most general indexer and will support any of the inputs in .loc and .iloc. .ix also supports
floating point label schemes. .ix is exceptionally useful when dealing with mixed positional and label
based hierarchical indexes.
However, when an axis is integer based, ONLY label based access and not positional access is supported.
Thus, in such cases, it’s usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing.
pandas.Series.loc
Series.loc
Access a group of rows and columns by label(s) or a boolean array.
.loc[] is primarily label based, but may also be used with a boolean array.
Allowed inputs are:
• A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an integer
position along the index).
• A list or array of labels, e.g. ['a', 'b', 'c'].
• A slice object with labels, e.g. 'a':'f'.
Warning: Note that contrary to usual python slices, both the start and the stop are included
• A boolean array of the same length as the axis being sliced, e.g. [True, False, True].
• A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
See more at Selection by Label
Raises KeyError:
when any items are not found
See also:
Examples
Getting values
>>> df.loc['viper']
max_speed 4
shield 5
Name: viper, dtype: int64
Slice with labels for row and single label for column. As mentioned above, note that both the start and
stop of the slice are included.
Setting values
Set value for all items matching the list of labels
>>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
Set value for an entire row
>>> df.loc['cobra'] = 10
>>> df
max_speed shield
cobra 10 10
viper 4 50
sidewinder 7 50
Slice with integer labels for rows. As mentioned above, note that both the start and stop of the slice are
included.
>>> df.loc[7:9]
max_speed shield
7 1 2
8 4 5
9 7 8
>>> tuples = [
... ('cobra', 'mark i'), ('cobra', 'mark ii'),
... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
... ('viper', 'mark ii'), ('viper', 'mark iii')
... ]
>>> index = pd.MultiIndex.from_tuples(tuples)
>>> values = [[12, 2], [0, 4], [10, 20],
... [1, 4], [7, 1], [16, 36]]
>>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
>>> df
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
>>> df.loc['cobra']
max_speed shield
mark i 12 2
mark ii 0 4
Single label for row and column. Similar to passing in a tuple, this returns a Series.
>>> df.loc['cobra', 'mark i']
max_speed 12
shield 2
Name: (cobra, mark i), dtype: int64
Single tuple for the index with a single label for the column
>>> df.loc[('cobra', 'mark i'), 'shield']
2
pandas.Series.nbytes
Series.nbytes
return the number of bytes in the underlying data
pandas.Series.ndim
Series.ndim
return the number of dimensions of the underlying data, by definition 1
pandas.Series.shape
Series.shape
return a tuple of the shape of the underlying data
pandas.Series.size
Series.size
return the number of elements in the underlying data
pandas.Series.strides
Series.strides
return the strides of the underlying data
pandas.Series.values
Series.values
Return Series as ndarray or ndarray-like depending on the dtype
Returns
arr [numpy.ndarray or ndarray-like]
Examples
>>> pd.Series(list('aabc')).values
array(['a', 'a', 'b', 'c'], dtype=object)
>>> pd.Series(list('aabc')).astype('category').values
[a, a, b, c]
Categories (3, object): [a, b, c]
empty
imag
is_copy
name
real
Methods
pandas.Series.abs
Series.abs()
Return a Series/DataFrame with absolute numeric value of each element.
This function only applies to elements that are all numeric.
Returns abs
Series/DataFrame containing the absolute value of each element.
See also:
Notes
For complex inputs, e.g. 1.2 + 1j, the absolute value is √(a² + b²).
Examples
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
pandas.Series.add
See also:
Series.radd
Examples
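A sketch of label alignment with fill_value (illustrative data): a value missing in only one input is replaced by fill_value before adding, while positions missing in both inputs stay NaN.
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64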
pandas.Series.add_prefix
Series.add_prefix(prefix)
Prefix labels with string prefix.
For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed.
Parameters prefix : str
The string to add before each label.
Returns Series or DataFrame
New Series or DataFrame with updated labels.
See also:
Examples
>>> s.add_prefix('item_')
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
>>> df.add_prefix('col_')
col_A col_B
0 1 3
1 2 4
2 3 5
3 4 6
pandas.Series.add_suffix
Series.add_suffix(suffix)
Suffix labels with string suffix.
For Series, the row labels are suffixed. For DataFrame, the column labels are suffixed.
Parameters suffix : str
The string to add after each label.
Returns Series or DataFrame
New Series or DataFrame with updated labels.
See also:
Examples
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
>>> df.add_suffix('_col')
A_col B_col
0 1 3
1 2 4
2 3 5
3 4 6
pandas.Series.agg
Returns
aggregated [Series]
See also:
pandas.Series.apply, pandas.Series.transform
Notes
Examples
>>> s = Series(np.random.randn(10))
>>> s.agg('min')
-1.3018049988556679
pandas.Series.aggregate
Notes
Examples
>>> s = Series(np.random.randn(10))
>>> s.agg('min')
-1.3018049988556679
pandas.Series.align
pandas.Series.all
Examples
Series
DataFrames
Create a dataframe from a dictionary.
>>> df.all()
col1 True
col2 False
dtype: bool
>>> df.all(axis='columns')
0 True
1 False
dtype: bool
>>> df.all(axis=None)
False
pandas.Series.any
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a scalar.
bool_only : boolean, default None
Include only boolean columns. If None, will attempt to use everything, then use only
boolean data. Not implemented for Series.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for compatibility with
NumPy.
Returns
any [scalar or Series (if level specified)]
See also:
Examples
Series
For Series input, the output is a scalar indicating whether any element is True.
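For example, with illustrative one-off Series:
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True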
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
With a DataFrame in which the second row contains no truthy values:
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
>>> df.any(axis=None)
True
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
pandas.Series.append
Notes
Iteratively appending to a Series can be more computationally intensive than a single concatenate. A better
solution is to append values to a list and then concatenate the list with the original Series all at once.
Examples
>>> s1.append(s3)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
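A minimal sketch of the list-then-concatenate pattern recommended in the Notes (the parts list is illustrative):
>>> parts = [pd.Series([1, 2]), pd.Series([3, 4]), pd.Series([5, 6])]
>>> pd.concat(parts, ignore_index=True)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64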
pandas.Series.apply
Returns
y [Series or DataFrame if func returns a Series]
See also:
Examples
Define a custom function that needs additional positional arguments and pass these additional arguments
using the args keyword.
Define a custom function that takes keyword arguments and pass these arguments to apply.
>>> series.apply(np.log)
London 2.995732
New York 3.044522
Helsinki 2.484907
dtype: float64
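A sketch of the args and keyword forms mentioned above, assuming the same city series as in the np.log example (values 20, 21 and 12):
>>> series = pd.Series([20, 21, 12], index=['London', 'New York', 'Helsinki'])
>>> def subtract_custom_value(x, custom_value):
...     return x - custom_value
>>> series.apply(subtract_custom_value, args=(5,))
London 15
New York 16
Helsinki 7
dtype: int64
>>> series.apply(lambda x, **kwargs: x + sum(kwargs.values()), june=30, july=20, august=25)
London 95
New York 96
Helsinki 87
dtype: int64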
pandas.Series.argmax
numpy.argmax Return indices of the maximum values along the given axis.
DataFrame.idxmax Return index of first occurrence of maximum over requested axis.
Series.idxmin Return index label of the first occurrence of minimum of values.
Notes
This method is the Series version of ndarray.argmax. This method returns the label of the maximum,
while ndarray.argmax returns the position. To get the position, use series.values.argmax().
Examples
>>> s.idxmax()
'C'
If skipna is False and there is an NA value in the data, the function returns nan.
>>> s.idxmax(skipna=False)
nan
pandas.Series.argmin
numpy.argmin Return indices of the minimum values along the given axis.
DataFrame.idxmin Return index of first occurrence of minimum over requested axis.
Series.idxmax Return index label of the first occurrence of maximum of values.
Notes
This method is the Series version of ndarray.argmin. This method returns the label of the minimum,
while ndarray.argmin returns the position. To get the position, use series.values.argmin().
Examples
>>> s.idxmin()
'A'
If skipna is False and there is an NA value in the data, the function returns nan.
>>> s.idxmin(skipna=False)
nan
pandas.Series.argsort
order [ignored]
Returns
argsorted [Series, with -1 indicated where nan values are present]
See also:
numpy.ndarray.argsort
pandas.Series.as_blocks
Series.as_blocks(copy=True)
Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
Deprecated since version 0.21.0.
NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
Parameters
copy [boolean, default True]
Returns
values [a dict of dtype -> Constructor Types]
pandas.Series.as_matrix
Series.as_matrix(columns=None)
Convert the frame to its Numpy-array representation.
Deprecated since version 0.23.0: Use DataFrame.values() instead.
Parameters columns: list, optional, default:None
If None, return all columns, otherwise, returns specified columns.
Returns values : ndarray
If the caller is heterogeneous and contains booleans or objects, the result will be of
dtype=object. See Notes.
See also:
pandas.DataFrame.values
Notes
pandas.Series.asfreq
Returns the original data conformed to a new index with the specified frequency. resample is more
appropriate if an operation, such as summarization, is necessary to represent the data at the new frequency.
Parameters
freq [DateOffset object, or string]
method : {‘backfill’/’bfill’, ‘pad’/’ffill’}, default None
Method to use for filling holes in reindexed Series (note this does not fill NaNs that
already were present):
• ‘pad’ / ‘ffill’: propagate last valid observation forward to next valid
• ‘backfill’ / ‘bfill’: use NEXT valid observation to fill
how : {‘start’, ‘end’}, default end
For PeriodIndex only, see PeriodIndex.asfreq
normalize : bool, default False
Whether to reset output index to midnight
fill_value: scalar, optional
Value to use for missing values, applied during upsampling (note this does not fill NaNs
that already were present).
New in version 0.20.0.
Returns
converted [type of caller]
See also:
reindex
Notes
To learn more about the frequency strings, please see this link.
Examples
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
pandas.Series.asof
Series.asof(where, subset=None)
The last row without any NaN is taken (or the last row without NaN considering only the subset of columns
in the case of a DataFrame)
New in version 0.19.0: For DataFrame
If there is no good value, NaN is returned for a Series, or a Series of NaN values for a DataFrame
Parameters
where [date or array of dates]
subset : string or list of strings, default None
if not None use these columns for NaN propagation
Returns where is scalar
• value or NaN if input is Series
• Series if input is DataFrame
See also:
merge_asof
Notes
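A minimal sketch with an illustrative Series: the last row at or before the requested label whose value is not NaN is returned.
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s.asof(20)
2.0
>>> s.asof(30) # the NaN at label 30 is skipped; the last non-NaN value before it is used
2.0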
pandas.Series.astype
Returns
casted [type of caller]
See also:
Examples
>>> ser.astype('category')
0 1
1 2
dtype: category
Categories (2, int64): [1, 2]
Note that using copy=False and changing data on a new pandas object may propagate changes:
>>> s1 = pd.Series([1,2])
>>> s2 = s1.astype('int64', copy=False)
>>> s2[0] = 10
>>> s1 # note that s1[0] has changed too
0 10
1 2
dtype: int64
pandas.Series.at_time
Series.at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM).
Parameters
time [datetime.time or string]
Returns
values_at_time [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
>>> ts.at_time('12:00')
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4
pandas.Series.autocorr
Series.autocorr(lag=1)
Lag-N autocorrelation
Parameters lag : int, default 1
Number of lags to apply before performing autocorrelation.
Returns
autocorr [float]
pandas.Series.between
See also:
Notes
This function is equivalent to (left <= ser) & (ser <= right)
Examples
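A minimal sketch (illustrative data); both boundaries are inclusive:
>>> s = pd.Series([2, 0, 4, 8, np.nan])
>>> s.between(1, 4)
0 True
1 False
2 True
3 False
4 False
dtype: bool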
pandas.Series.between_time
Examples
You get the times that are not between two times by setting start_time later than end_time:
>>> ts.between_time('0:45', '0:15')
A
2018-04-09 00:00:00 1
2018-04-12 01:00:00 4
pandas.Series.bfill
pandas.Series.bool
Series.bool()
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a ValueError if the PandasObject does not
have exactly 1 element, or that element is not boolean
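For example:
>>> pd.Series([True]).bool()
True
>>> pd.Series([False]).bool()
False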
pandas.Series.cat
Series.cat()
Accessor object for categorical properties of the Series values.
Be aware that assigning to categories is an inplace operation, while all methods return new categorical data
per default (but can be called with inplace=True).
Parameters
data [Series or CategoricalIndex]
Examples
>>> s.cat.categories
>>> s.cat.categories = list('abc')
>>> s.cat.rename_categories(list('cab'))
>>> s.cat.reorder_categories(list('cab'))
>>> s.cat.add_categories(['d','e'])
>>> s.cat.remove_categories(['d'])
>>> s.cat.remove_unused_categories()
>>> s.cat.set_categories(list('abcde'))
>>> s.cat.as_ordered()
>>> s.cat.as_unordered()
pandas.Series.clip
Same type as calling object with the values outside the clip boundaries replaced
See also:
Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
>>> df.clip(-4, 6)
col_0 col_1
0 6 -2
1 -3 -4
2 0 6
3 -1 6
4 5 -4
Clips using specific lower and upper thresholds per column element:
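A sketch using the df above with an illustrative per-row threshold Series t, aligned along axis=0:
>>> t = pd.Series([2, -4, -1, 6, 3])
>>> df.clip(t, t + 4, axis=0)
col_0 col_1
0 6 2
1 -3 -4
2 0 3
3 6 8
4 5 3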
pandas.Series.clip_lower
Minimum value allowed. All values below threshold will be set to this value.
• float : every value is compared to threshold.
• array-like : The shape of threshold should match the object it's compared to. When self is a Series, threshold should be the same length. When self is a DataFrame, threshold should be 2-D and the same shape as self for axis=None, or 1-D and the same length as the axis being compared.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Align self with threshold along the given axis.
inplace : boolean, default False
Whether to perform the operation in place on the data.
New in version 0.21.0.
Returns
clipped [same type as input]
See also:
Series.clip Return copy of input with values below and above thresholds truncated.
Series.clip_upper Return copy of input with values above threshold truncated.
Examples
Series clipping element-wise using an array of thresholds. threshold should be the same length as the
Series.
>>> df.clip_lower(3)
A B
0 3 3
1 3 4
2 5 6
Or to an array of values. By default, threshold should be the same shape as the DataFrame.
Control how threshold is broadcast with axis. In this case threshold should be the same length as the axis
specified by axis.
pandas.Series.clip_upper
pandas.Series.combine
Returns
result [Series]
See also:
Series.combine_first Combine Series values, choosing the calling Series’s values first
Examples
pandas.Series.combine_first
Series.combine_first(other)
Combine Series values, choosing the calling Series’s values first. Result index will be the union of the two
indexes
Parameters
other [Series]
Returns
combined [Series]
See also:
Examples
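A minimal sketch with illustrative data; the NaN in the calling Series is filled from other:
>>> s1 = pd.Series([1, np.nan])
>>> s2 = pd.Series([3, 4])
>>> s1.combine_first(s2)
0 1.0
1 4.0
dtype: float64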
pandas.Series.compound
pandas.Series.compress
pandas.Series.consolidate
Series.consolidate(inplace=False)
Compute NDFrame with “consolidated” internals (data of each dtype grouped together in a single ndarray).
Deprecated since version 0.20.0: Consolidate will be an internal implementation only.
pandas.Series.convert_objects
pandas.Series.copy
Series.copy(deep=True)
Make a copy of this object’s indices and data.
When deep=True (default), a new object will be created with a copy of the calling object’s data and
indices. Modifications to the data or indices of the copy will not be reflected in the original object (see
notes below).
When deep=False, a new object will be created without copying the calling object’s data or index (only
references to the data and index are copied). Any changes to the data of the original will be reflected in the
shallow copy (and vice versa).
Parameters deep : bool, default True
Make a deep copy, including a copy of the data and the indices. With deep=False
neither the indices nor the data are copied.
Returns copy : Series, DataFrame or Panel
Object type matches caller.
Notes
When deep=True, data is copied but actual Python objects will not be copied recursively, only the
reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively
copies object data (see examples below).
While Index objects are copied when deep=True, the underlying numpy array is not copied for per-
formance reasons. Since Index is immutable, the underlying data can be safely shared and a copy is not
needed.
Examples
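The comparisons below assume a minimal setup along these lines (illustrative):
>>> s = pd.Series([1, 2], index=['a', 'b'])
>>> deep = s.copy()
>>> shallow = s.copy(deep=False)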
>>> s is shallow
False
>>> s.values is shallow.values and s.index is shallow.index
True
>>> s is deep
False
>>> s.values is deep.values or s.index is deep.index
False
Updates to the data shared by the shallow copy and the original are reflected in both; the deep copy remains unchanged.
>>> s[0] = 3
>>> shallow[1] = 4
>>> s
a 3
b 4
dtype: int64
>>> shallow
a 3
b 4
dtype: int64
>>> deep
a 1
b 2
dtype: int64
Note that when copying an object containing Python objects, a deep copy will copy the data, but will not
do so recursively. Updating a nested data object will be reflected in the deep copy.
pandas.Series.corr
pandas.Series.count
Series.count(level=None)
Return number of non-NA/null observations in the Series
Parameters level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a smaller Series
Returns
nobs [int or Series (if level specified)]
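For example (illustrative data):
>>> pd.Series([1, np.nan, 3]).count()
2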
pandas.Series.cov
Series.cov(other, min_periods=None)
Compute covariance with Series, excluding missing values
Parameters
other [Series]
min_periods : int, optional
Minimum number of observations needed to have a valid result
Returns
covariance [float]
Normalized by N-1 (unbiased estimator).
pandas.Series.cummax
Examples
Series
>>> s.cummax()
0 2.0
1 NaN
2 5.0
3 5.0
4 5.0
dtype: float64
>>> s.cummax(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None
or axis='index'.
>>> df.cummax()
A B
0 2.0 1.0
1 3.0 NaN
2 3.0 1.0
To iterate over columns and find the maximum in each row, use axis=1
>>> df.cummax(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 1.0
pandas.Series.cummin
Examples
Series
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None
or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row, use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
pandas.Series.cumprod
Examples
Series
>>> s.cumprod()
0 2.0
1 NaN
2 10.0
3 -10.0
4 -0.0
dtype: float64
>>> s.cumprod(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or
axis='index'.
>>> df.cumprod()
A B
0 2.0 1.0
1 6.0 NaN
2 6.0 0.0
To iterate over columns and find the product in each row, use axis=1
>>> df.cumprod(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 0.0
pandas.Series.cumsum
Examples
Series
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or
axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row, use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
pandas.Series.describe
Notes
For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and
upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile
is the same as the median.
For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and
freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps
also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen
from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric
columns. If the dataframe consists only of object and categorical data without any numeric columns,
the default is to return an analysis of both the object and categorical columns. If include='all' is
provided as an option, the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed
for the output. The parameters are ignored when analyzing a Series.
Examples
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 NaN 3
top f NaN c
freq 1 NaN 1
mean NaN 2.0 NaN
std NaN 1.0 NaN
min NaN 1.0 NaN
25% NaN 1.5 NaN
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top c
freq 1
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top f
freq 1
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top f c
freq 1 1
>>> df.describe(exclude=[np.object])
categorical numeric
count 3 3.0
unique 3 NaN
top f NaN
freq 1 NaN
mean NaN 2.0
std NaN 1.0
min NaN 1.0
25% NaN 1.5
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
pandas.Series.diff
Series.diff(periods=1)
First discrete difference of element.
Calculates the difference of a Series element compared with another element in the Series (default is
element in previous row).
Parameters periods : int, default 1
Periods to shift for calculating difference, accepts negative values.
Returns
diffed [Series]
See also:
Examples
>>> s.diff(periods=3)
0 NaN
1 NaN
2 NaN
3 2.0
4 4.0
5 6.0
dtype: float64
>>> s.diff(periods=-1)
0 0.0
1 -1.0
2 -1.0
3 -2.0
4 -3.0
5 NaN
dtype: float64
pandas.Series.div
Examples
pandas.Series.divide
Equivalent to series / other, but with support to substitute a fill_value for missing data in one of
the inputs.
Parameters
other [Series or scalar value]
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series
alignment, with this value before computation. If data in both corresponding Series
locations is missing the result will be missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.rtruediv
Examples
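A sketch of fill_value with illustrative data; a position missing in only one input is treated as 1 before dividing:
>>> a = pd.Series([1, 2, np.nan], index=['a', 'b', 'c'])
>>> b = pd.Series([1, np.nan, 4], index=['a', 'b', 'c'])
>>> a.divide(b, fill_value=1)
a 1.00
b 2.00
c 0.25
dtype: float64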
pandas.Series.divmod
Examples
pandas.Series.dot
Series.dot(other)
Matrix multiplication with DataFrame or inner-product with Series objects. Can also be called using self
@ other in Python >= 3.5.
Parameters
other [Series or DataFrame]
Returns
dot_product [scalar or Series]
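For example (illustrative data):
>>> s = pd.Series([1, 2, 3])
>>> s.dot(pd.Series([4, 5, 6]))
32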
pandas.Series.drop
Examples
Drop labels B and C
>>> s.drop(labels=['B','C'])
A 0
dtype: int64
pandas.Series.drop_duplicates
Series.drop_duplicates(keep=’first’, inplace=False)
Return Series with duplicate values removed.
Parameters keep : {‘first’, ‘last’, False}, default ‘first’
• ‘first’ : Drop duplicates except for the first occurrence.
• ‘last’ : Drop duplicates except for the last occurrence.
• False : Drop all duplicates.
inplace : boolean, default False
If True, performs operation inplace and returns None.
Returns
deduplicated [Series]
See also:
Examples
With the ‘keep’ parameter, the selection behaviour of duplicated values can be changed. The value ‘first’
keeps the first occurrence for each set of duplicated entries. The default value of keep is ‘first’.
>>> s.drop_duplicates()
0 lama
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
The value ‘last’ for parameter ‘keep’ keeps the last occurrence for each set of duplicated entries.
>>> s.drop_duplicates(keep='last')
1 cow
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
The value False for parameter ‘keep’ discards all sets of duplicated entries. Setting the value of ‘inplace’
to True performs the operation inplace and returns None.
pandas.Series.dropna
Examples
>>> ser.dropna()
0 1.0
1 2.0
dtype: float64
>>> ser.dropna(inplace=True)
>>> ser
0 1.0
1 2.0
dtype: float64
pandas.Series.dt
Series.dt()
Accessor object for datetimelike properties of the Series values.
Examples
>>> s.dt.hour
>>> s.dt.second
>>> s.dt.quarter
Returns a Series indexed like the original Series. Raises TypeError if the Series does not contain datetime-
like values.
pandas.Series.duplicated
Series.duplicated(keep=’first’)
Indicate duplicate Series values.
Duplicated values are indicated as True values in the resulting Series. Either all duplicates, all except the
first or all except the last occurrence of duplicates can be indicated.
Parameters keep : {‘first’, ‘last’, False}, default ‘first’
• ‘first’ : Mark duplicates as True except for the first occurrence.
• ‘last’ : Mark duplicates as True except for the last occurrence.
• False : Mark all duplicates as True.
Returns
pandas.core.series.Series
See also:
Examples
By default, for each set of duplicated values, the first occurrence is set to False and all others to True; this default behaviour is equivalent to passing keep='first':
>>> animals.duplicated(keep='first')
0 False
1 False
2 True
3 False
4 True
dtype: bool
By using ‘last’, the last occurrence of each set of duplicated values is set to False and all others to True:
>>> animals.duplicated(keep='last')
0 True
1 False
2 True
3 False
4 False
dtype: bool
>>> animals.duplicated(keep=False)
0 True
1 False
2 True
3 False
4 True
dtype: bool
pandas.Series.eq
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.None
Examples
pandas.Series.equals
Series.equals(other)
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered
equal.
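For example (illustrative data; note that NaNs in the same location compare equal):
>>> s = pd.Series([1, 2, np.nan])
>>> s.equals(pd.Series([1, 2, np.nan]))
True
>>> s.equals(pd.Series([1, 2, 3]))
False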
pandas.Series.ewm
Specify decay in terms of half-life, α = 1 − exp(log(0.5) / halflife), for halflife > 0
alpha : float, optional
Specify smoothing factor α directly, 0 < α ≤ 1
New in version 0.18.0.
min_periods : int, default 0
Minimum number of observations in window required to have a value (otherwise result
is NA).
adjust : boolean, default True
Divide by decaying adjustment factor in beginning periods to account for imbalance in
relative weightings (viewing EWMA as a moving average)
ignore_na : boolean, default False
Ignore missing values when calculating weights; specify True to reproduce pre-0.15.0
behavior
Returns
a Window sub-classed for the particular operation
See also:
Notes
Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and relationship
between the parameters are specified in the parameter descriptions above; see the link at the end of this
section for a detailed explanation.
When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as: weighted_average[0] = arg[0];
weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions. For example, the weights of
x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is
True), and (1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For
example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha
and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).
More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
Examples
>>> df.ewm(com=0.5).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
pandas.Series.expanding
Returns
a Window sub-classed for the particular operation
See also:
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
Examples
>>> df.expanding(2).sum()
B
0 NaN
1 1.0
2 3.0
3 3.0
4 7.0
pandas.Series.factorize
Series.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an array when all that matters is identifying
distinct values. factorize is available as both a top-level function pandas.factorize(), and as a
method Series.factorize() and Index.factorize().
Parameters sort : boolean, default False
Sort uniques and shuffle labels to maintain the relationship.
na_sentinel : int, default -1
Value to mark “not found”.
Returns labels : ndarray
An integer ndarray that’s an indexer into uniques. uniques.take(labels) will
have the same values as values.
uniques : ndarray, Index, or Categorical
The unique valid values. When values is Categorical, uniques is a Categorical. When
values is some other pandas object, an Index is returned. Otherwise, a 1-D ndarray is
returned.
Note: Even if there’s a missing value in values, uniques will not contain an entry for it.
See also:
Examples
These examples all show factorize as a top-level method like pd.factorize(values). The results
are identical for methods like Series.factorize().
With sort=True, the uniques will be sorted, and labels will be shuffled so that the relationship is maintained.
Missing values are indicated in labels with na_sentinel (-1 by default). Note that missing values are never
included in uniques.
Thus far, we’ve only factorized lists (which are internally coerced to NumPy arrays). When factorizing
pandas objects, the type of uniques will differ. For Categoricals, a Categorical is returned.
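A minimal sketch with an illustrative list, with and without sort:
>>> labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> labels
array([0, 0, 1, 2, 0])
>>> uniques
array(['b', 'a', 'c'], dtype=object)
>>> labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
>>> labels
array([1, 1, 0, 2, 1])
>>> uniques
array(['a', 'b', 'c'], dtype=object)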
pandas.Series.ffill
pandas.Series.fillna
reindex, asfreq
Examples
>>> df.fillna(0)
A B C D
0 0.0 2.0 0.0 0
1 3.0 4.0 0.0 1
2 0.0 0.0 0.0 5
3 0.0 3.0 0.0 4
>>> df.fillna(method='ffill')
A B C D
0 NaN 2.0 NaN 0
1 3.0 4.0 NaN 1
2 3.0 4.0 NaN 5
3 3.0 3.0 NaN 4
Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively.
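A sketch of that call, continuing with the same df as above and passing a dict mapping column names to fill values (only columns named in the dict are filled):
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
>>> df.fillna(value=values)
A B C D
0 0.0 2.0 2.0 0
1 3.0 4.0 2.0 1
2 0.0 1.0 2.0 5
3 0.0 3.0 2.0 4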
pandas.Series.filter
Notes
The items, like, and regex parameters are enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing with [].
Examples
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
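A sketch of the three mutually exclusive selection modes applied to the df above:
>>> df.filter(items=['one', 'three']) # select columns by name
one three
mouse 1 3
rabbit 4 6
>>> df.filter(regex='e$', axis=1) # select columns by regular expression
one three
mouse 1 3
rabbit 4 6
>>> df.filter(like='bbi', axis=0) # select rows whose label contains 'bbi'
one two three
rabbit 4 5 6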
pandas.Series.first
Series.first(offset)
Convenience method for subsetting initial periods of time series data based on a date offset.
Parameters
offset [string, DateOffset, dateutil.relativedelta]
Returns
Examples
>>> ts.first('3D')
A
2018-04-09 1
2018-04-11 2
Notice the data for the first 3 calendar days were returned, not the first 3 days observed in the dataset, and therefore data for 2018-04-13 was not returned.
pandas.Series.first_valid_index
Series.first_valid_index()
Return index for first non-NA/null value.
Returns
scalar [type of index]
Notes
If all elements are non-NA/null, returns None. Also returns None for empty NDFrame.
pandas.Series.floordiv
Parameters
other [Series or scalar value]
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series
alignment, with this value before computation. If data in both corresponding Series
locations is missing the result will be missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.rfloordiv
Examples
pandas.Series.from_array
pandas.Series.from_csv
pandas.Series.ge
Examples
pandas.Series.get
Series.get(key, default=None)
Get item from object for given key (DataFrame column, Panel slice, etc.). Returns default value if not
found.
Parameters
key [object]
Returns
value [type of items contained in object]
pandas.Series.get_dtype_counts
Series.get_dtype_counts()
Return counts of unique dtypes in this object.
Returns dtype : Series
Series with the count of columns with each dtype.
See also:
Examples
>>> df.get_dtype_counts()
float64 1
int64 1
object 1
dtype: int64
pandas.Series.get_ftype_counts
Series.get_ftype_counts()
Return counts of unique ftypes in this object.
Deprecated since version 0.23.0.
This is useful for SparseDataFrame or for DataFrames containing sparse arrays.
Returns dtype : Series
Series with the count of columns with each type and sparsity (dense/sparse)
See also:
Examples
>>> df.get_ftype_counts()
float64:dense 1
int64:dense 1
object:dense 1
dtype: int64
pandas.Series.get_value
Series.get_value(label, takeable=False)
Quickly retrieve single value at passed index label
Deprecated since version 0.21.0: Please use .at[] or .iat[] accessors.
Parameters
label [object]
takeable [interpret the index as indexers, default False]
Returns
value [scalar value]
pandas.Series.get_values
Series.get_values()
same as values (but handles sparseness conversions); is a view
pandas.Series.groupby
resample Convenience method for frequency conversion and resampling of time series.
Notes
Examples
DataFrame results
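A minimal Series-level sketch (illustrative data), grouping by the index labels:
>>> s = pd.Series([390., 350., 30., 20.],
...               index=['Falcon', 'Falcon', 'Parrot', 'Parrot'])
>>> s.groupby(level=0).mean()
Falcon 370.0
Parrot 25.0
dtype: float64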
pandas.Series.gt
Equivalent to series > other, but with support to substitute a fill_value for missing data in one of
the inputs.
Parameters
other [Series or scalar value]
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series
alignment, with this value before computation. If data in both corresponding Series
locations is missing the result will be missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.None
Examples
pandas.Series.head
Series.head(n=5)
Return the first n rows.
This function returns the first n rows for the object based on position. It is useful for quickly testing if your
object has the right type of data in it.
Parameters n : int, default 5
Examples
>>> df.head()
animal
0 alligator
1 bee
2 falcon
3 lion
4 monkey
>>> df.head(3)
animal
0 alligator
1 bee
2 falcon
pandas.Series.hist
pandas.Series.idxmax
numpy.argmax Return indices of the maximum values along the given axis.
DataFrame.idxmax Return index of first occurrence of maximum over requested axis.
Series.idxmin Return index label of the first occurrence of minimum of values.
Notes
This method is the Series version of ndarray.argmax. This method returns the label of the maximum,
while ndarray.argmax returns the position. To get the position, use series.values.argmax().
Examples
>>> s.idxmax()
'C'
If skipna is False and there is an NA value in the data, the function returns nan.
>>> s.idxmax(skipna=False)
nan
pandas.Series.idxmin
Raises ValueError
If the Series is empty.
See also:
numpy.argmin Return indices of the minimum values along the given axis.
DataFrame.idxmin Return index of first occurrence of minimum over requested axis.
Series.idxmax Return index label of the first occurrence of maximum of values.
Notes
This method is the Series version of ndarray.argmin. This method returns the label of the minimum,
while ndarray.argmin returns the position. To get the position, use series.values.argmin().
Examples
>>> s.idxmin()
'A'
If skipna is False and there is an NA value in the data, the function returns nan.
>>> s.idxmin(skipna=False)
nan
pandas.Series.infer_objects
Series.infer_objects()
Attempt to infer better dtypes for object columns.
Attempts soft conversion of object-dtyped columns, leaving non-object and unconvertible columns un-
changed. The inference rules are the same as during normal Series/DataFrame construction.
New in version 0.21.0.
Returns
converted [same type as input object]
See also:
Examples
>>> df.dtypes
A object
dtype: object
>>> df.infer_objects().dtypes
A int64
dtype: object
pandas.Series.interpolate
• ‘linear’: ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes, and the default.
• ‘time’: interpolation works on daily and higher resolution data to interpolate given
length of interval
• ‘index’, ‘values’: use the actual numerical values of the index
• ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘barycentric’, ‘polynomial’ is passed
to scipy.interpolate.interp1d. Both ‘polynomial’ and ‘spline’ require
that you also specify an order (int), e.g. df.interpolate(method=’polynomial’, or-
der=4). These use the actual numerical values of the index.
• ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’ are all wrappers
around the scipy interpolation methods of similar names. These use the actual nu-
merical values of the index. For more information on their behavior, see the scipy
documentation and tutorial documentation
• ‘from_derivatives’ refers to BPoly.from_derivatives which replaces ‘piece-
wise_polynomial’ interpolation method in scipy 0.18
New in version 0.18.1: Added support for the ‘akima’ method Added interpolate method
‘from_derivatives’ which replaces ‘piecewise_polynomial’ in scipy 0.18; backwards-
compatible with scipy < 0.18
axis : {0, 1}, default 0
• 0: fill column-by-column
• 1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill. Must be greater than 0.
Returns
Series or DataFrame of same shape interpolated at the NaNs
See also:
reindex, replace, fillna
Examples
Filling in NaNs
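A minimal sketch with an illustrative Series, using the default ‘linear’ method:
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64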
pandas.Series.isin
Series.isin(values)
Check whether values are contained in Series.
Return a boolean Series showing whether each element in the Series matches an element in the passed
sequence of values exactly.
Parameters values : set or list-like
The sequence of values to test. Passing in a single string will raise a TypeError.
Instead, turn a single string into a list of one element.
New in version 0.18.1: Support for values as a set.
Returns
isin [Series (bool dtype)]
Raises TypeError
• If values is a string
See also:
Examples
Passing a single string as s.isin('lama') will raise an error. Use a list of one element instead:
>>> s.isin(['lama'])
0 True
1 False
2 True
3 False
4 True
5 False
Name: animal, dtype: bool
pandas.Series.isna
Series.isna()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.
NaN, gets mapped to True values. Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.
use_inf_as_na = True).
Returns Series
Mask of bool values for each element in Series that indicates whether an element is not
an NA value.
See also:
Examples
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
pandas.Series.isnull
Series.isnull()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.
NaN, gets mapped to True values. Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.
use_inf_as_na = True).
Returns Series
Mask of bool values for each element in Series that indicates whether an element is not
an NA value.
See also:
Examples
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
pandas.Series.item
Series.item()
return the first element of the underlying data as a python scalar
pandas.Series.items
Series.items()
Lazily iterate over (index, value) tuples
pandas.Series.iteritems
Series.iteritems()
Lazily iterate over (index, value) tuples
pandas.Series.keys
Series.keys()
Alias for index
pandas.Series.kurt
pandas.Series.kurtosis
pandas.Series.last
Series.last(offset)
Convenience method for subsetting final periods of time series data based on a date offset.
Parameters
offset [string, DateOffset, dateutil.relativedelta]
Returns
subset [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
>>> ts.last('3D')
A
2018-04-13 3
2018-04-15 4
Notice the data for the last 3 calendar days were returned, not the last 3 observed days in the dataset, and therefore data for 2018-04-11 was not returned.
pandas.Series.last_valid_index
Series.last_valid_index()
Return index for last non-NA/null value.
Returns
scalar [type of index]
Notes
If all elements are non-NA/null, returns None. Also returns None for empty NDFrame.
pandas.Series.le
Examples
pandas.Series.lt
Examples
pandas.Series.mad
pandas.Series.map
Series.map(arg, na_action=None)
Map values of Series using input correspondence (a dict, Series, or function).
Parameters arg : function, dict, or Series
Mapping correspondence.
na_action : {None, ‘ignore’}
If ‘ignore’, propagate NA values, without passing them to the mapping correspondence.
Returns y : Series
Same index as caller.
See also:
Notes
When arg is a dictionary, values in Series that are not in the dictionary (as keys) are converted to NaN.
However, if the dictionary is a dict subclass that defines __missing__ (i.e. provides a method for
default values), then this default is used rather than NaN:
>>> from collections import Counter
>>> counter = Counter()
>>> counter['bar'] += 1
>>> y.map(counter)
1 0
2 1
3 0
dtype: int64
Examples
>>> x.map(y)
one foo
two bar
three baz
If arg is a dictionary, return a new Series with values converted according to the dictionary’s mapping:
>>> z = {1: 'A', 2: 'B', 3: 'C'}
>>> x.map(z)
one A
two B
three C
Use na_action to control whether NA values are affected by the mapping function.
>>> s = pd.Series([1, 2, 3, np.nan])
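>>> # A sketch of the two na_action settings on the Series above: with the default
>>> # None the NaN is passed to the formatter, with 'ignore' it is left as NaN.
>>> s.map('this is a {}'.format, na_action=None)
0 this is a 1.0
1 this is a 2.0
2 this is a 3.0
3 this is a nan
dtype: object
>>> s.map('this is a {}'.format, na_action='ignore')
0 this is a 1.0
1 this is a 2.0
2 this is a 3.0
3 NaN
dtype: object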
pandas.Series.mask
Returns
wh [same type as caller]
See also:
DataFrame.where()
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is False the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
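The complementary mask call on the same Series replaces the rows where the condition holds (a minimal sketch):
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64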
pandas.Series.max
Parameters
axis [{index (0)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a scalar
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then
use only numeric data. Not implemented for Series.
Returns
max [scalar or Series (if level specified)]
pandas.Series.mean
Include only float, int, boolean columns. If None, will attempt to use everything, then
use only numeric data. Not implemented for Series.
Returns
mean [scalar or Series (if level specified)]
pandas.Series.median
pandas.Series.memory_usage
Series.memory_usage(index=True, deep=False)
Return the memory usage of the Series.
The memory usage can optionally include the contribution of the index and of elements of object dtype.
Parameters index : bool, default True
Specifies whether to include the memory usage of the Series index.
deep : bool, default False
If True, introspect the data deeply by interrogating object dtypes for system-level mem-
ory consumption, and include it in the returned value.
Returns int
Bytes of memory consumed.
See also:
Examples
>>> s = pd.Series(range(3))
>>> s.memory_usage()
104
Not including the index gives the size of the rest of the data, which is necessarily smaller:
>>> s.memory_usage(index=False)
24
pandas.Series.min
Parameters
axis [{index (0)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a scalar
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then
use only numeric data. Not implemented for Series.
Returns
min [scalar or Series (if level specified)]
pandas.Series.mod
Parameters
other [Series or scalar value]
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series
alignment, with this value before computation. If data in both corresponding Series
locations is missing the result will be missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.rmod
Examples
pandas.Series.mode
Series.mode()
Return the mode(s) of the dataset.
Always returns Series even if only one value is returned.
Returns
modes [Series (sorted)]
pandas.Series.mul
Examples
pandas.Series.multiply
Equivalent to series * other, but with support to substitute a fill_value for missing data in one of
the inputs.
Parameters
other [Series or scalar value]
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series
alignment, with this value before computation. If data in both corresponding Series
locations is missing the result will be missing
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.rmul
Examples
pandas.Series.ne
Examples
pandas.Series.nlargest
Series.nlargest(n=5, keep=’first’)
Return the largest n elements.
Parameters n : int
Return this many descending sorted values
keep : {‘first’, ‘last’}, default ‘first’
Where there are duplicate values:
• first : take the first occurrence.
• last : take the last occurrence.
Returns top_n : Series
Notes
Examples
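A minimal sketch (illustrative data); the result keeps the original index labels:
>>> s = pd.Series([10, 3, 8, 7])
>>> s.nlargest(2)
0 10
2 8
dtype: int64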
pandas.Series.nonzero
Series.nonzero()
Return the integer indices of the elements that are non-zero
This method is equivalent to calling numpy.nonzero on the series data. For compatibility with NumPy,
the return value is the same (a tuple with an array of indices for each dimension), but it will always be a
one-item tuple because series only have one dimension.
See also:
numpy.nonzero
Examples
pandas.Series.notna
Series.notna()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
Returns Series
Mask of bool values for each element in Series that indicates whether an element is not
an NA value.
See also:
Examples
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
pandas.Series.notnull
Series.notnull()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
Returns Series
Mask of bool values for each element in Series that indicates whether an element is not
an NA value.
See also:
Examples
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
pandas.Series.nsmallest
Series.nsmallest(n=5, keep=’first’)
Return the smallest n elements.
Parameters n : int
Return this many ascending sorted values
keep : {‘first’, ‘last’}, default ‘first’
Where there are duplicate values:
• first : take the first occurrence.
• last : take the last occurrence.
Returns bottom_n : Series
The n smallest values in the Series, in sorted order
See also:
Series.nlargest
Notes
Faster than .sort_values().head(n) for small n relative to the size of the Series object.
Examples
pandas.Series.nunique
Series.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
Parameters dropna : boolean, default True
Don’t include NaN in the count.
Returns
nunique [int]
pandas.Series.pct_change
Examples
Series
>>> s.pct_change()
0 NaN
1 0.011111
2 -0.065934
dtype: float64
>>> s.pct_change(periods=2)
0 NaN
1 NaN
2 -0.055556
dtype: float64
Percentage change in a Series where NAs are filled with the last valid observation carried forward to the next valid one.
>>> s.pct_change(fill_method='ffill')
0 NaN
1 0.011111
2 0.000000
3 -0.065934
dtype: float64
DataFrame
Percentage change in French franc, Deutsche Mark, and Italian lira from 1980-01-01 to 1980-03-01.
>>> df = pd.DataFrame({
... 'FR': [4.0405, 4.0963, 4.3149],
... 'GR': [1.7246, 1.7482, 1.8519],
... 'IT': [804.74, 810.01, 860.13]},
... index=['1980-01-01', '1980-02-01', '1980-03-01'])
>>> df
FR GR IT
1980-01-01 4.0405 1.7246 804.74
1980-02-01 4.0963 1.7482 810.01
1980-03-01 4.3149 1.8519 860.13
>>> df.pct_change()
FR GR IT
1980-01-01 NaN NaN NaN
1980-02-01 0.013810 0.013684 0.006549
1980-03-01 0.053365 0.059318 0.061876
Percentage change in GOOG and APPL stock volume. Shows computing the percentage change between columns.
>>> df = pd.DataFrame({
... '2016': [1769950, 30586265],
... '2015': [1500923, 40912316],
... '2014': [1371819, 41403351]},
... index=['GOOG', 'APPL'])
>>> df
2016 2015 2014
GOOG 1769950 1500923 1371819
APPL 30586265 40912316 41403351
>>> df.pct_change(axis='columns')
2016 2015 2014
GOOG NaN -0.151997 -0.086016
APPL NaN 0.337604 0.012002
pandas.Series.pipe
Notes
Use .pipe when chaining together functions that expect Series, DataFrames or GroupBy objects. Instead
of writing
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(f, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which
keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
pandas.Series.plot
xlim [2-tuple/list]
ylim [2-tuple/list]
Notes
pandas.Series.pop
Series.pop(item)
Return item and drop from frame. Raise KeyError if not found.
Parameters item : str
Column label to be popped
Returns
popped [Series]
Examples
>>> df.pop('class')
0 bird
1 bird
2 mammal
3 mammal
Name: class, dtype: object
>>> df
name max_speed
0 falcon 389.0
1 parrot 24.0
2 lion 80.5
3 monkey NaN
pandas.Series.pow
Examples
pandas.Series.prod
Examples
>>> pd.Series([]).prod()
1.0
>>> pd.Series([]).prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
pandas.Series.product
Examples
>>> pd.Series([]).prod()
1.0
>>> pd.Series([]).prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
pandas.Series.ptp
Parameters
axis [{index (0)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into
a scalar
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then
use only numeric data. Not implemented for Series.
Returns
ptp [scalar or Series (if level specified)]
pandas.Series.put
Series.put(*args, **kwargs)
Applies the put method to its values attribute if it has one.
See also:
numpy.ndarray.put
pandas.Series.quantile
Series.quantile(q=0.5, interpolation=’linear’)
Return value at the given quantile, a la numpy.percentile.
Parameters q : float or array-like, default 0.5 (50% quantile)
Examples
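A minimal sketch (illustrative data); passing an array of quantiles returns a Series indexed by q:
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64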
pandas.Series.radd
Returns
result [Series]
See also:
Series.add
Examples
pandas.Series.rank
pandas.Series.ravel
Series.ravel(order=’C’)
Return the flattened underlying data as an ndarray
See also:
numpy.ndarray.ravel
pandas.Series.rdiv
Examples
pandas.Series.reindex
Series.reindex(index=None, **kwargs)
Conform Series to new index with optional filling logic, placing NA/NaN in locations having no value in
the previous index. A new object is produced unless the new index is equivalent to the current one and
copy=False
Parameters index : array-like, optional (should be specified using keywords)
New labels / index to conform to. Preferably an Index object to avoid duplicating data
method : {None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}, optional
method to use for filling holes in reindexed DataFrame. Please note: this is only appli-
cable to DataFrames/Series with a monotonically increasing/decreasing index.
• default: don’t fill gaps
• pad / ffill: propagate last valid observation forward to next valid
• backfill / bfill: use next valid observation to fill gap
• nearest: use nearest valid observations to fill gap
copy : boolean, default True
Return a new object, even if the passed indexes are the same
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
fill_value : scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any “compatible” value
limit : int, default None
Maximum number of consecutive elements to forward or backward fill
tolerance : optional
Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance to all values, or list-
like, which applies variable tolerance per element. List-like includes list, tuple, array,
Series, and must be the same size as the index and its dtype must exactly match the
index’s type.
New in version 0.21.0: (list-like tolerance)
Returns
reindexed [Series]
Examples
Create a new index and reindex the dataframe. By default values in the new index that do not have
corresponding records in the dataframe are assigned NaN.
We can fill in the missing values by passing a value to the keyword fill_value. Because the index is
not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the
NaN values.
To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically
increasing index (for example, a sequence of dates).
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
... index=date_index)
>>> df2
prices
2010-01-01 100
2010-01-02 101
2010-01-03 NaN
2010-01-04 100
2010-01-05 89
2010-01-06 88
The index entries that did not have a value in the original data frame (for example, ‘2009-12-29’) are by
default filled with NaN. If desired, we can fill in the missing values using one of several options.
For example, to backpropagate the last valid value to fill the NaN values, pass bfill as an argument to
the method keyword.
Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be
filled by any of the value propagation schemes. This is because filling while reindexing does not look at
dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN
values present in the original dataframe, use the fillna() method.
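One way to express the backfill described above (the date_index2 name is illustrative):
>>> # dates before 2010-01-01 are backfilled with 100; the NaN already present
>>> # at 2010-01-03 is left untouched, as explained above
>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2, method='bfill')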
See the user guide for more.
pandas.Series.reindex_axis
pandas.Series.reindex_like
Notes
pandas.Series.rename
Series.rename(index=None, **kwargs)
Alter Series index labels or name
Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is.
Extra labels listed don’t throw an error.
Alternatively, change Series.name with a scalar value.
See the user guide for more.
Parameters index : scalar, hashable sequence, dict-like or function, optional
dict-like or functions are transformations to apply to the index. Scalar or hashable
sequence-like will alter the Series.name attribute.
copy : boolean, default True
Also copy underlying data
inplace : boolean, default False
Whether to return a new Series. If True then value of copy is ignored.
level : int or level name, default None
In case of a MultiIndex, only rename labels in the specified level.
Returns
renamed [Series (new object)]
See also:
pandas.Series.rename_axis
Examples
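The original examples are missing from this extract; a brief sketch of the three call styles:
>>> s = pd.Series([1, 2, 3])
>>> s.rename("my_name")  # scalar: changes Series.name
0    1
1    2
2    3
Name: my_name, dtype: int64
>>> s.rename(lambda x: x ** 2)  # function: changes index labels
0    1
1    2
4    3
dtype: int64
>>> s.rename({1: 3, 2: 5})  # mapping: changes index labels
0    1
3    2
5    3
dtype: int64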
pandas.Series.rename_axis
Notes
Prior to version 0.21.0, rename_axis could also be used to change the axis labels by passing a mapping
or scalar. This behavior is deprecated and will be removed in a future version. Use rename instead.
Examples
Series
DataFrame
pandas.Series.reorder_levels
Series.reorder_levels(order)
Rearrange index levels using input order. May not drop or duplicate levels
Parameters order : list of int representing new level order.
(reference level by number or key)
Returns
type of caller (new object)
pandas.Series.repeat
pandas.Series.replace
Notes
• Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub
are the same.
• Regular expressions will only substitute on strings, meaning you cannot provide, for example, a
regular expression matching floating point numbers and expect the columns in your frame that have
a numeric dtype to be matched. However, if those floating point numbers are strings, then you can
do this.
• This method has a lot of options. You are encouraged to experiment and play with this method to
gain intuition about how it works.
• When a dict is used as the to_replace value, it is as if the key(s) in the dict are the to_replace values and the value(s) in the dict are the value parameter.
Examples
List-like ‘to_replace‘
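The list-like example was lost in extraction; a sketch, using a DataFrame consistent with the dict-like output shown below:
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
...                    'B': [5, 6, 7, 8, 9],
...                    'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace([0, 1, 2, 3], 4)
   A  B  C
0  4  5  a
1  4  6  b
2  4  7  c
3  4  8  d
4  4  9  e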
dict-like ‘to_replace‘
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
Note that when replacing multiple bool or datetime64 objects, the data types in the to_replace pa-
rameter must match the data type of the value being replaced:
This raises a TypeError because one of the dict keys is not of the correct type for replacement.
Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to under-
stand the peculiarities of the to_replace parameter:
When one uses a dict as the to_replace value, it is as if the value(s) in the dict are equal to the value parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a': None},
value=None, method=None):
When value=None and to_replace is a scalar, list or tuple, replace uses the method parameter (default
‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2
and ‘b’ in row 4 in this case. The command s.replace('a', None) is actually equivalent to s.
replace(to_replace='a', value=None, method='pad'):
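A sketch of the comparison, assuming the Series discussed above contained [10, 'a', 'a', 'b', 'a']:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
>>> s.replace({'a': None})   # value taken from the dict, so 'a' -> None
0      10
1    None
2    None
3       b
4    None
dtype: object
>>> s.replace('a', None)     # value=None with a scalar to_replace -> method='pad'
0    10
1    10
2    10
3     b
4     b
dtype: object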
pandas.Series.resample
Notes
Examples
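The series used in the examples below is not shown in this extract; a setup consistent with the outputs would be:
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)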
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the
left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels.
For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the
summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if
it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval
as illustrated in the example below this one.
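A sketch of the label='right' variant:
>>> series.resample('3T', label='right').sum()
2000-01-01 00:03:00     3
2000-01-01 00:06:00    12
2000-01-01 00:09:00    21
Freq: 3T, dtype: int64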
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
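A sketch of closing the right side of the bin interval as well:
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00     0
2000-01-01 00:03:00     6
2000-01-01 00:06:00    15
2000-01-01 00:09:00    15
Freq: 3T, dtype: int64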
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
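The apply call below uses a custom aggregation function whose definition is missing from this extract; a definition consistent with the output (each bin sum plus 5) would be:
>>> def custom_resampler(array_like):
...     return np.sum(array_like) + 5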
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or
end of rule.
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
...                                             freq='A',
...                                             periods=2))
>>> s
2012 1
2013 2
Freq: A-DEC, dtype: int64
Resample by month using ‘start’ convention. Values are assigned to the first month of the period.
>>> s.resample('M', convention='start').asfreq().head()
2012-01 1.0
2012-02 NaN
2012-03 NaN
2012-04 NaN
Resample by month using ‘end’ convention. Values are assigned to the last month of the period.
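A sketch of the 'end' convention (values land on the last month of each annual period):
>>> s.resample('M', convention='end').asfreq().head()
2012-12    1.0
2013-01    NaN
2013-02    NaN
2013-03    NaN
2013-04    NaN
Freq: M, dtype: float64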
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resam-
pling.
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling
needs to take place.
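Minimal sketches of both keywords (the frames here are illustrative):
>>> df = pd.DataFrame({'price': range(9)})
>>> df['time'] = pd.date_range('1/1/2000', periods=9, freq='T')
>>> df.resample('3T', on='time').sum()
                     price
time
2000-01-01 00:00:00      3
2000-01-01 00:03:00     12
2000-01-01 00:06:00     21
>>> midx = pd.MultiIndex.from_product(
...     [pd.date_range('1/1/2000', periods=9, freq='T'), ['x', 'y']],
...     names=['time', 'group'])
>>> df2 = pd.DataFrame({'price': range(18)}, index=midx)
>>> df2.resample('3T', level='time').sum()
                     price
time
2000-01-01 00:00:00     15
2000-01-01 00:03:00     51
2000-01-01 00:06:00     87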
pandas.Series.reset_index
For a Series with a MultiIndex, only remove the specified levels from the index.
Removes all levels by default.
drop : bool, default False
Just reset the index, without inserting it as a column in the new DataFrame.
name : object, optional
The name to use for the column containing the original Series values. Uses
self.name by default. This argument is ignored when drop is True.
inplace : bool, default False
Modify the Series in place (do not create a new object).
Returns Series or DataFrame
When drop is False (the default), a DataFrame is returned. The newly created
columns will come first in the DataFrame, followed by the original Series values.
When drop is True, a Series is returned. In either case, if inplace=True, no
value is returned.
See also:
Examples
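The Series used below is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series([1, 2, 3, 4], name='foo',
...               index=pd.Index(['a', 'b', 'c', 'd'], name='idx'))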
>>> s.reset_index()
idx foo
0 a 1
1 b 2
2 c 3
3 d 4
>>> s.reset_index(name='values')
idx values
0 a 1
1 b 2
2 c 3
3 d 4
>>> s.reset_index(drop=True)
0 1
1 2
2 3
3 4
Name: foo, dtype: int64
To update the Series in place, without generating a new one, set inplace to True. Note that it also requires
drop=True.
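The MultiIndexed Series s2 used below is also missing from this extract; a setup consistent with the output would be:
>>> arrays = [np.array(['bar', 'bar', 'baz', 'baz']),
...           np.array(['one', 'two', 'one', 'two'])]
>>> s2 = pd.Series(range(4), name='foo',
...                index=pd.MultiIndex.from_arrays(arrays, names=['a', 'b']))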
>>> s2.reset_index(level='a')
a foo
b
one bar 0
two bar 1
one baz 2
two baz 3
If level is not set, all levels are removed from the Index.
>>> s2.reset_index()
a b foo
0 bar one 0
1 bar two 1
2 baz one 2
3 baz two 3
pandas.Series.rfloordiv
Broadcast across a level, matching Index values on the passed MultiIndex level
Returns
result [Series]
See also:
Series.floordiv
Examples
pandas.Series.rmod
See also:
Series.mod
Examples
pandas.Series.rmul
Examples
pandas.Series.rolling
Returns
a Window or Rolling sub-classed for the particular operation
See also:
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
To learn more about the offsets & frequency strings, please see this link.
The recognized win_types are:
• boxcar
• triang
• blackman
• hamming
• bartlett
• parzen
• bohman
• blackmanharris
• nuttall
• barthann
• kaiser (needs beta)
• gaussian (needs std)
• general_gaussian (needs power, width)
• slepian (needs width).
If win_type=None all points are evenly weighted. To learn more about different window types see
scipy.signal window functions.
Examples
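The DataFrame used in the following examples is not shown in this extract; a setup consistent with the outputs would be:
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})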
Rolling sum with a window length of 2, using the ‘triang’ window type.
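A sketch of the win_type call (a length-2 triangular window has weights 0.5 and 0.5, so the weighted sums are halved):
>>> df.rolling(2, win_type='triang').sum()
     B
0    NaN
1    0.5
2    1.5
3    NaN
4    NaN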
Rolling sum with a window length of 2, min_periods defaults to the window length.
>>> df.rolling(2).sum()
B
0 NaN
1 1.0
2 3.0
3 NaN
4 NaN
>>> df
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Contrasting to an integer rolling window, this will roll a variable length window corresponding to the time
period. The default for min_periods is 1.
>>> df.rolling('2s').sum()
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
pandas.Series.round
pandas.Series.rpow
Examples
pandas.Series.rsub
Examples
pandas.Series.rtruediv
Examples
pandas.Series.sample
Returns
A new object of same type as caller.
Examples
>>> s = pd.Series(np.random.randn(50))
>>> s.head()
0 -0.038497
1 1.820773
2 -0.972766
3 -1.598270
4 -1.095526
dtype: float64
>>> df = pd.DataFrame(np.random.randn(50, 4), columns=list('ABCD'))
>>> df.head()
A B C D
0 0.016443 -2.318952 -0.566372 -1.028078
1 -1.051921 0.438836 0.658280 -0.175797
2 -1.243569 -0.364626 -0.215065 0.057736
3 1.768216 0.404512 -0.385604 -1.457834
4 1.072446 -1.137172 0.314194 -0.046661
>>> s.sample(n=3)
27 -0.994689
55 -1.049016
67 -0.224565
dtype: float64
>>> df.sample(random_state=1)
A B C D
37 -2.027662 0.103611 0.237496 -0.165867
43 -0.259323 -0.583426 1.516140 -0.479118
12 -1.686325 -0.579510 0.985195 -0.460286
8 1.167946 0.429082 1.215742 -1.636041
9 1.197475 -0.864188 1.554031 -1.505264
pandas.Series.searchsorted
Notes
Examples
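The objects searched below are not defined in this extract; the numeric call is consistent with, for example:
>>> x = pd.Series([1, 2, 3])  # assumed setup for the numeric call below
The string call presumably operates on a separate, lexically sorted Series of strings (e.g. 'apple', 'bread', 'bread', 'cheese', 'milk').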
>>> x.searchsorted(4)
array([3])
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
pandas.Series.select
Series.select(crit, axis=0)
Return data corresponding to axis labels matching criteria
Deprecated since version 0.21.0: Use df.loc[df.index.map(crit)] to select via labels
Parameters crit : function
To be called on each index (label). Should return True or False
axis [int]
Returns
selection [type of caller]
pandas.Series.sem
pandas.Series.set_axis
Examples
Series
>>> s
0 1
1 2
2 3
dtype: int64
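A sketch of relabelling the axis of the Series shown above:
>>> s.set_axis(['a', 'b', 'c'], axis=0, inplace=False)
a    1
b    2
c    3
dtype: int64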
DataFrame
pandas.Series.set_value
pandas.Series.shift
Returns
shifted [Series]
Notes
If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you
would like to extend the index when shifting and preserve the original data.
pandas.Series.skew
pandas.Series.slice_shift
Series.slice_shift(periods=1, axis=0)
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters periods : int
Notes
While the slice_shift is faster than shift, you may pay for it later during alignment.
pandas.Series.sort_index
Examples
Sort Descending
>>> s.sort_index(ascending=False)
4 d
3 a
2 b
1 c
dtype: object
Sort Inplace
>>> s.sort_index(inplace=True)
>>> s
1 c
2 b
3 a
4 d
dtype: object
By default NaNs are put at the end, but use na_position to place them at the beginning
>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, np.nan])
>>> s.sort_index(na_position='first')
NaN d
1.0 c
2.0 b
3.0 a
dtype: object
pandas.Series.sort_values
Examples
>>> s.sort_values(ascending=True)
1 1.0
2 3.0
4 5.0
3 10.0
0 NaN
dtype: float64
>>> s.sort_values(ascending=False)
3 10.0
4 5.0
2 3.0
1 1.0
0 NaN
dtype: float64
>>> s.sort_values(na_position='first')
0 NaN
1 1.0
2 3.0
4 5.0
3 10.0
dtype: float64
>>> s.sort_values()
3 a
1 b
4 c
2 d
0 z
dtype: object
pandas.Series.sortlevel
pandas.Series.squeeze
Series.squeeze(axis=None)
Squeeze length 1 dimensions.
Parameters axis : None, integer or string axis name, optional
The axis to squeeze if 1-sized.
New in version 0.20.0.
Returns
scalar if 1-sized, else original object
pandas.Series.std
pandas.Series.str
Series.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular
method. Patterned after Python’s string methods, with some inspiration from R’s stringr package.
Examples
>>> s.str.split('_')
>>> s.str.replace('_', '')
pandas.Series.sub
Examples
pandas.Series.subtract
Examples
pandas.Series.sum
Examples
This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty
series to be NaN, pass min_count=1.
>>> pd.Series([]).sum(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).sum()
0.0
>>> pd.Series([np.nan]).sum(min_count=1)
nan
pandas.Series.swapaxes
pandas.Series.swaplevel
pandas.Series.tail
Series.tail(n=5)
Return the last n rows.
This function returns the last n rows from the object based on position. It is useful for quickly verifying data,
for example, after sorting or appending rows.
Examples
>>> df.tail()
animal
4 monkey
5 parrot
6 shark
7 whale
8 zebra
>>> df.tail(3)
animal
6 shark
7 whale
8 zebra
pandas.Series.take
Examples
We may take elements using negative integers for positive indices, starting from the end of the object, just
like with Python lists.
pandas.Series.to_clipboard
Notes
Examples
We can omit the index by passing the keyword index and setting it to False.
pandas.Series.to_csv
A string representing the compression to use in the output file. Allowed values
are ‘gzip’, ‘bz2’, ‘zip’, ‘xz’. This input is only used when the first argument is a
filename.
date_format: string, default None
Format string for datetime objects.
decimal: string, default ‘.’
Character recognized as decimal separator. E.g. use ‘,’ for European data
pandas.Series.to_dense
Series.to_dense()
Return dense representation of NDFrame (as opposed to sparse)
pandas.Series.to_dict
Series.to_dict(into=<class ’dict’>)
Convert Series to {label -> value} dict or dict-like object.
Parameters into : class, default dict
The collections.Mapping subclass to use as the return object. Can be the actual class or an empty instance of the mapping type you want. If you want a collections.defaultdict, you must pass it initialized.
New in version 0.21.0.
Returns
value_dict [collections.Mapping]
Examples
pandas.Series.to_excel
Notes
If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can
be used to save different DataFrames to one workbook:
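A minimal sketch (df1, df2 and the file name are illustrative; writing .xlsx files requires an Excel engine such as openpyxl):
>>> df1 = pd.DataFrame({'A': [1, 2]})
>>> df2 = pd.DataFrame({'B': [3, 4]})
>>> writer = pd.ExcelWriter('output.xlsx')  # hypothetical file name
>>> df1.to_excel(writer, 'Sheet1')
>>> df2.to_excel(writer, 'Sheet2')
>>> writer.save()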
For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.
pandas.Series.to_frame
Series.to_frame(name=None)
Convert Series to DataFrame
Parameters name : object, default None
The passed name should substitute for the series name (if it has one).
Returns
data_frame [DataFrame]
pandas.Series.to_hdf
Possible values:
• ‘fixed’: Fixed format. Fast writing/reading. Not-appendable, nor searchable.
• ‘table’: Table format. Write as a PyTables Table structure which may perform
worse but allow more flexible operations like searching / selecting subsets of
the data.
append : bool, default False
For Table formats, append the input data to the existing.
data_columns : list of columns or True, optional
List of columns to create as indexed data columns for on-disk queries, or True to
use all columns. By default only the axes of the object are indexed. See Query
via Data Columns. Applicable only to format=’table’.
complevel : {0-9}, optional
Specifies a compression level for data. A value of 0 disables compression.
complib : {‘zlib’, ‘lzo’, ‘bzip2’, ‘blosc’}, default ‘zlib’
Specifies the compression library to be used. As of v0.20.2 these addi-
tional compressors for Blosc are supported (default if no compressor speci-
fied: ‘blosc:blosclz’): {‘blosc:blosclz’, ‘blosc:lz4’, ‘blosc:lz4hc’, ‘blosc:snappy’,
‘blosc:zlib’, ‘blosc:zstd’}. Specifying a compression library which is not avail-
able issues a ValueError.
fletcher32 : bool, default False
If applying compression use the fletcher32 checksum.
dropna : bool, default False
If True, rows that are all NaN will not be written to the store.
errors : str, default ‘strict’
Specifies how encoding and decoding errors are to be handled. See the errors
argument for open() for a full list of options.
See also:
Examples
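A minimal sketch of a round trip (requires PyTables; the file name matches the clean-up call below):
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
...                   index=['a', 'b', 'c'])
>>> df.to_hdf('data.h5', key='df', mode='w')
>>> pd.read_hdf('data.h5', 'df')
   A  B
a  1  4
b  2  5
c  3  6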
>>> import os
>>> os.remove('data.h5')
pandas.Series.to_json
Examples
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not pre-
served with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
pandas.Series.to_latex
escape [boolean, default will be read from the pandas config module] Default: True. When set to False,
prevents escaping of LaTeX special characters in column names.
encoding [str, default None] A string representing the encoding to use in the output file, defaults to ‘ascii’
on Python 2 and ‘utf-8’ on Python 3.
decimal [string, default ‘.’] Character recognized as decimal separator, e.g. ‘,’ in Europe.
New in version 0.18.0.
multicolumn [boolean, default True] Use multicolumn to enhance MultiIndex columns. The default will
be read from the config module.
New in version 0.20.0.
multicolumn_format [str, default ‘l’] The alignment for multicolumns, similar to column_format The
default will be read from the config module.
New in version 0.20.0.
multirow [boolean, default False] Use multirow to enhance MultiIndex rows. Requires adding a \usepackage{multirow} to your LaTeX preamble. Will print centered labels (instead of top-aligned) across the contained rows, separating groups via clines. The default will be read from the pandas config module.
New in version 0.20.0.
pandas.Series.to_msgpack
pandas.Series.to_period
Series.to_period(freq=None, copy=True)
Convert Series from DatetimeIndex to PeriodIndex with desired frequency (inferred from index if not
passed)
Parameters
freq [string, default]
Returns
ts [Series with PeriodIndex]
pandas.Series.to_pickle
read_pickle Load pickled pandas object (or any object) from file.
DataFrame.to_hdf Write DataFrame to an HDF5 file.
DataFrame.to_sql Write DataFrame to a SQL database.
DataFrame.to_parquet Write a DataFrame to the binary parquet format.
Examples
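A minimal sketch of a pickle round trip (the file name matches the clean-up call below):
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df.to_pickle("./dummy.pkl")
>>> unpickled_df = pd.read_pickle("./dummy.pkl")
>>> unpickled_df
   foo  bar
0    0    5
1    1    6
2    2    7
3    3    8
4    4    9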
>>> import os
>>> os.remove("./dummy.pkl")
pandas.Series.to_sparse
Series.to_sparse(kind=’block’, fill_value=None)
Convert Series to SparseSeries
Parameters
kind [{‘block’, ‘integer’}]
fill_value [float, defaults to NaN (missing)]
Returns
sp [SparseSeries]
pandas.Series.to_sql
Rows will be written in batches of this size at a time. By default, all rows will be
written at once.
dtype : dict, optional
Specifying the datatype for columns. The keys should be the column names
and the values should be the SQLAlchemy types or strings for the sqlite3 legacy
mode.
Raises ValueError
When the table already exists and if_exists is ‘fail’ (the default).
See also:
References
[R28], [R29]
Examples
Specify the dtype (especially useful for integers with missing values). Notice that while pandas is forced
to store the data as floating point, the database supports nullable integers. When fetching the data with
Python, we get back integer scalars.
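A sketch under the assumption that SQLAlchemy is available (table and column names are illustrative):
>>> from sqlalchemy import create_engine
>>> from sqlalchemy.types import Integer
>>> engine = create_engine('sqlite://', echo=False)
>>> df = pd.DataFrame({"A": [1, None, 2]})
>>> df.to_sql('integers', con=engine, index=False,
...           dtype={"A": Integer()})
>>> engine.execute("SELECT * FROM integers").fetchall()
[(1,), (None,), (2,)]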
pandas.Series.to_string
pandas.Series.to_timestamp
pandas.Series.to_xarray
Series.to_xarray()
Return an xarray object from the pandas object.
Returns
a DataArray for a Series
a Dataset for a DataFrame
a DataArray for higher dims
Notes
Examples
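The DataFrame converted below is not shown in this extract; a setup consistent with the first output (conversion requires the xarray package) would be:
>>> df = pd.DataFrame({'A': [1, 1, 2],
...                    'B': ['foo', 'bar', 'foo'],
...                    'C': [4.0, 5.0, 6.0]})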
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 3)
Coordinates:
* index (index) int64 0 1 2
Data variables:
A (index) int64 1 1 2
B (index) object 'foo' 'bar' 'foo'
C (index) float64 4.0 5.0 6.0
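The second call below presumably follows moving two columns into a MultiIndex, e.g.:
>>> df = df.set_index(['B', 'A'])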
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (A: 2, B: 2)
Coordinates:
* B (B) object 'bar' 'foo'
* A (A) int64 1 2
Data variables:
C (B, A) float64 5.0 nan 4.0 6.0
>>> p = pd.Panel(np.arange(24).reshape(4,3,2),
...              items=list('ABCD'),
...              major_axis=pd.date_range('20130101', periods=3),
...              minor_axis=['first', 'second'])
>>> p
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: A to D
Major_axis axis: 2013-01-01 00:00:00 to 2013-01-03 00:00:00
Minor_axis axis: first to second
>>> p.to_xarray()
<xarray.DataArray (items: 4, major_axis: 3, minor_axis: 2)>
array([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]],
[[18, 19],
[20, 21],
[22, 23]]])
Coordinates:
* items (items) object 'A' 'B' 'C' 'D'
* major_axis (major_axis) datetime64[ns] 2013-01-01 2013-01-02 2013-01-03
pandas.Series.tolist
Series.tolist()
Return a list of the values.
These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Timestamp/Timedelta/Interval/Period).
See also:
numpy.ndarray.tolist
pandas.Series.transform
Examples
pandas.Series.transpose
Series.transpose(*args, **kwargs)
Return the transpose, which is by definition self.
pandas.Series.truediv
Examples
pandas.Series.truncate
Notes
If the index being truncated contains only datetime values, before and after may be specified as strings
instead of Timestamps.
Examples
>>> df.truncate(before=pd.Timestamp('2016-01-05'),
... after=pd.Timestamp('2016-01-10')).tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Because the index is a DatetimeIndex containing only dates, we can specify before and after as strings.
They will be coerced to Timestamps before truncation.
Note that truncate assumes a 0 value for any unspecified time component (midnight). This differs
from partial string slicing, which returns any partially matching dates.
pandas.Series.tshift
Notes
If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those
attributes exist, a ValueError is thrown
pandas.Series.tz_convert
pandas.Series.tz_localize
Parameters
tz [string or pytz.timezone object]
axis [the axis to localize]
level : int, str, default None
If axis is a MultiIndex, localize a specific level. Otherwise must be None
copy : boolean, default True
Also make a copy of the underlying data
ambiguous : ‘infer’, bool-ndarray, ‘NaT’, default ‘raise’
• ‘infer’ will attempt to infer fall dst-transition hours based on order
• bool-ndarray where True signifies a DST time, False designates a non-DST time (note
that this flag is only applicable for ambiguous times)
• ‘NaT’ will return NaT where there are ambiguous times
• ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times
Raises TypeError
If the TimeSeries is tz-aware and tz is not None.
pandas.Series.unique
Series.unique()
Return unique values of Series object.
Uniques are returned in order of appearance. Hash table-based unique, therefore does NOT sort.
Returns ndarray or Categorical
The unique values returned as a NumPy array. In case of categorical data type,
returned as a Categorical.
See also:
Examples
>>> pd.Series(pd.Categorical(list('baabc'))).unique()
[b, a, c]
Categories (3, object): [b, a, c]
pandas.Series.unstack
Series.unstack(level=-1, fill_value=None)
Unstack, a.k.a. pivot, Series with MultiIndex to produce DataFrame. The level involved will automatically
get sorted.
Parameters level : int, string, or list of these, default last level
Level(s) to unstack, can pass level name
fill_value : replace NaN with this value if the unstack produces
missing values
New in version 0.18.0.
Returns
unstacked [DataFrame]
Examples
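The Series unstacked below is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series([1, 2, 3, 4],
...               index=pd.MultiIndex.from_product([['one', 'two'], ['a', 'b']]))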
>>> s.unstack(level=-1)
a b
one 1 2
two 3 4
>>> s.unstack(level=0)
one two
a 1 3
b 2 4
pandas.Series.update
Series.update(other)
Modify Series in place using non-NA values from passed Series. Aligns on index
Parameters
other [Series]
Examples
If other contains NaNs the corresponding values are not updated in the original Series.
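A minimal sketch of that behaviour:
>>> s = pd.Series([1, 2, 3])
>>> s.update(pd.Series([4, np.nan, 6]))
>>> s
0    4
1    2
2    6
dtype: int64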
pandas.Series.valid
Series.valid(inplace=False, **kwargs)
Return Series without null values.
Deprecated since version 0.23.0: Use Series.dropna() instead.
pandas.Series.value_counts
pandas.Series.var
Returns
var [scalar or Series (if level specified)]
pandas.Series.view
Series.view(dtype=None)
Create a new view of the Series.
This function will return a new Series with a view of the same underlying values in memory, optionally
reinterpreted with a new data type. The new data type must preserve the same size in bytes as to not cause
index misalignment.
Parameters dtype : data type
Data type object or one of their string representations.
Returns Series
A new Series object as a view of the same data in memory.
See also:
numpy.ndarray.view Equivalent numpy function to create a new view of the same data in memory.
Notes
Series are instantiated with dtype=float64 by default. While numpy.ndarray.view() will re-
turn a view with the same data type as the original array, Series.view() (without specified dtype)
will try using float64 and may fail if the original data type size in bytes is not the same.
Examples
The 8 bit signed integer representation of -1 is 0b11111111, but the same bytes represent 255 if read as
an 8 bit unsigned integer:
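The Series viewed below is not shown in this extract; a setup consistent with the output would be:
>>> s = pd.Series([-2, -1, 0, 1, 2], dtype='int8')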
>>> us = s.view('uint8')
>>> us
0 254
1 255
2 0
3 1
4 2
dtype: uint8
pandas.Series.where
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is True the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
pandas.Series.xs
Notes
Examples
>>> df
A B C
a 4 5 2
b 4 0 9
c 9 7 3
>>> df.xs('a')
A 4
B 5
C 2
>>> df
A B C D
first second third
bar one 1 4 1 8 9
two 1 7 5 5 0
baz one 1 6 6 8 0
three 2 5 3 5 3
>>> df.xs(('baz', 'three'))
A B C D
third
2 5 3 5 3
>>> df.xs('one', level=1)
A B C D
first third
bar 1 4 1 8 9
baz 1 6 6 8 0
>>> df.xs(('baz', 2), level=[0, 'third'])
A B C D
second
three 5 3 5 3
34.3.2 Attributes
Axes
34.3.2.1 pandas.Series.empty
Series.empty
34.3.2.2 pandas.Series.is_copy
Series.is_copy
34.3.2.3 pandas.Series.name
Series.name
34.3.3 Conversion
Series.get(key[, default]) Get item from object for given key (DataFrame column,
Panel slice, etc.).
Series.at Access a single value for a row/column label pair.
Series.iat Access a single value for a row/column pair by integer
position.
34.3.4.1 pandas.Series.__iter__
Series.__iter__()
Return an iterator of the values.
These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Timestamp/Timedelta/Interval/Period).
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
Series.add(other[, level, fill_value, axis]) Addition of series and other, element-wise (binary oper-
ator add).
Series.sub(other[, level, fill_value, axis]) Subtraction of series and other, element-wise (binary
operator sub).
Series.mul(other[, level, fill_value, axis]) Multiplication of series and other, element-wise (binary
operator mul).
Series.div(other[, level, fill_value, axis]) Floating division of series and other, element-wise (bi-
nary operator truediv).
Series.truediv(other[, level, fill_value, axis]) Floating division of series and other, element-wise (bi-
nary operator truediv).
Series.floordiv(other[, level, fill_value, axis]) Integer division of series and other, element-wise (bi-
nary operator floordiv).
Series.mod(other[, level, fill_value, axis]) Modulo of series and other, element-wise (binary oper-
ator mod).
Series.pow(other[, level, fill_value, axis]) Exponential power of series and other, element-wise
(binary operator pow).
Series.radd(other[, level, fill_value, axis]) Addition of series and other, element-wise (binary oper-
ator radd).
Series.rsub(other[, level, fill_value, axis]) Subtraction of series and other, element-wise (binary
operator rsub).
Series.rmul(other[, level, fill_value, axis]) Multiplication of series and other, element-wise (binary
operator rmul).
Series.rdiv(other[, level, fill_value, axis]) Floating division of series and other, element-wise (bi-
nary operator rtruediv).
Series.align(other[, join, axis, level, . . . ]) Align two objects on their axes with the specified join
method for each axis Index
Series.drop([labels, axis, index, columns, . . . ]) Return Series with specified index labels removed.
Series.drop_duplicates([keep, inplace]) Return Series with duplicate values removed.
Series.duplicated([keep]) Indicate duplicate Series values.
Series.equals(other) Determines if two NDFrame objects contain the same
elements.
Series.first(offset) Convenience method for subsetting initial periods of
time series data based on a date offset.
Series.head([n]) Return the first n rows.
Series.idxmax([axis, skipna]) Return the row label of the maximum value.
Series.idxmin([axis, skipna]) Return the row label of the minimum value.
Series.isin(values) Check whether values are contained in Series.
Series.last(offset) Convenience method for subsetting final periods of time
series data based on a date offset.
Series.reindex([index]) Conform Series to new index with optional filling logic,
placing NA/NaN in locations having no value in the pre-
vious index.
Series.dt can be used to access the values of the series as datetimelike and return several properties. These can be
accessed like Series.dt.<property>.
Datetime Properties
34.3.13.1 pandas.Series.dt.date
Series.dt.date
Returns numpy array of python datetime.date objects (namely, the date part of Timestamps without timezone
information).
34.3.13.2 pandas.Series.dt.time
Series.dt.time
Returns numpy array of datetime.time. The time part of the Timestamps.
34.3.13.3 pandas.Series.dt.year
Series.dt.year
The year of the datetime
34.3.13.4 pandas.Series.dt.month
Series.dt.month
The month as January=1, December=12
34.3.13.5 pandas.Series.dt.day
Series.dt.day
The days of the datetime
34.3.13.6 pandas.Series.dt.hour
Series.dt.hour
The hours of the datetime
34.3.13.7 pandas.Series.dt.minute
Series.dt.minute
The minutes of the datetime
34.3.13.8 pandas.Series.dt.second
Series.dt.second
The seconds of the datetime
34.3.13.9 pandas.Series.dt.microsecond
Series.dt.microsecond
The microseconds of the datetime
34.3.13.10 pandas.Series.dt.nanosecond
Series.dt.nanosecond
The nanoseconds of the datetime
34.3.13.11 pandas.Series.dt.week
Series.dt.week
The week ordinal of the year
34.3.13.12 pandas.Series.dt.weekofyear
Series.dt.weekofyear
The week ordinal of the year
34.3.13.13 pandas.Series.dt.dayofweek
Series.dt.dayofweek
The day of the week with Monday=0, Sunday=6
34.3.13.14 pandas.Series.dt.weekday
Series.dt.weekday
The day of the week with Monday=0, Sunday=6
34.3.13.15 pandas.Series.dt.dayofyear
Series.dt.dayofyear
The ordinal day of the year
34.3.13.16 pandas.Series.dt.quarter
Series.dt.quarter
The quarter of the date
34.3.13.17 pandas.Series.dt.is_month_start
Series.dt.is_month_start
Logical indicating if first day of month (defined by frequency)
34.3.13.18 pandas.Series.dt.is_month_end
Series.dt.is_month_end
Indicator for whether the date is the last day of the month.
Returns Series or array
For Series, returns a Series with boolean values. For DatetimeIndex, returns a
boolean array.
See also:
is_month_start Indicator for whether the date is the first day of the month.
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
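A sketch spanning a month boundary:
>>> dates = pd.Series(pd.date_range("2018-02-27", periods=3))
>>> dates.dt.is_month_end
0    False
1     True
2    False
dtype: bool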
34.3.13.19 pandas.Series.dt.is_quarter_start
Series.dt.is_quarter_start
Indicator for whether the date is the first day of a quarter.
Returns is_quarter_start : Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
>>> idx.is_quarter_start
array([False, False, True, False])
34.3.13.20 pandas.Series.dt.is_quarter_end
Series.dt.is_quarter_end
Indicator for whether the date is the last day of a quarter.
Returns is_quarter_end : Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
>>> idx.is_quarter_end
array([False, True, False, False])
34.3.13.21 pandas.Series.dt.is_year_start
Series.dt.is_year_start
Indicate whether the date is the first day of a year.
Returns Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
>>> dates.dt.is_year_start
0 False
1 False
2 True
dtype: bool
>>> idx.is_year_start
array([False, False, True])
34.3.13.22 pandas.Series.dt.is_year_end
Series.dt.is_year_end
Indicate whether the date is the last day of the year.
Returns Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
>>> dates.dt.is_year_end
0 False
1 True
2 False
dtype: bool
>>> idx.is_year_end
array([False, True, False])
34.3.13.23 pandas.Series.dt.is_leap_year
Series.dt.is_leap_year
Boolean indicator if the date belongs to a leap year.
A leap year is a year that has 366 days (instead of 365), including February 29 as an intercalary day. Leap
years are years which are multiples of four, with the exception of years divisible by 100 but not by 400.
Returns Series or ndarray
Booleans indicating if dates belong to a leap year.
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
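A sketch on a DatetimeIndex spanning a leap year:
>>> idx = pd.date_range("2012-01-01", "2015-01-01", freq="A")
>>> idx
DatetimeIndex(['2012-12-31', '2013-12-31', '2014-12-31'],
              dtype='datetime64[ns]', freq='A-DEC')
>>> idx.is_leap_year
array([ True, False, False])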
34.3.13.24 pandas.Series.dt.daysinmonth
Series.dt.daysinmonth
The number of days in the month
34.3.13.25 pandas.Series.dt.days_in_month
Series.dt.days_in_month
The number of days in the month
34.3.13.26 pandas.Series.dt.tz
Series.dt.tz
34.3.13.27 pandas.Series.dt.freq
Series.dt.freq
Datetime Methods
34.3.13.28 pandas.Series.dt.to_period
Series.dt.to_period(*args, **kwargs)
Cast to PeriodIndex at a particular frequency.
Converts DatetimeIndex to PeriodIndex.
Parameters freq : string or Offset, optional
One of pandas’ offset strings or an Offset object. Will be inferred by default.
Returns
PeriodIndex
Raises ValueError
When converting a DatetimeIndex with non-regular values, so that a frequency can-
not be inferred.
See also:
Examples
34.3.13.29 pandas.Series.dt.to_pydatetime
Series.dt.to_pydatetime()
Return the data as an array of native Python datetime objects
Timezone information is retained if present.
Warning: Python’s datetime uses microsecond resolution, which is lower than pandas (nanosecond). The
values are truncated.
Returns numpy.ndarray
object dtype array containing native Python datetime objects.
See also:
Examples
>>> s.dt.to_pydatetime()
array([datetime.datetime(2018, 3, 10, 0, 0),
datetime.datetime(2018, 3, 11, 0, 0)], dtype=object)
>>> s.dt.to_pydatetime()
array([datetime.datetime(2018, 3, 10, 0, 0),
datetime.datetime(2018, 3, 10, 0, 0)], dtype=object)
34.3.13.30 pandas.Series.dt.tz_localize
Series.dt.tz_localize(*args, **kwargs)
Localize tz-naive DatetimeIndex to tz-aware DatetimeIndex.
This method takes a time zone (tz) naive DatetimeIndex object and makes this time zone aware. It does not
move the time to another time zone. Time zone localization helps to switch from time zone aware to time zone
unaware objects.
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone to convert timestamps to. Passing None will remove the time zone infor-
mation preserving local time.
ambiguous : str {‘infer’, ‘NaT’, ‘raise’} or bool array, default ‘raise’
• ‘infer’ will attempt to infer fall dst-transition hours based on order
• bool-ndarray where True signifies a DST time, False signifies a non-DST time (note that
this flag is only applicable for ambiguous times)
• ‘NaT’ will return NaT where there are ambiguous times
• ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times
errors : {‘raise’, ‘coerce’}, default ‘raise’
• ‘raise’ will raise a NonExistentTimeError if a timestamp is not valid in the
specified time zone (e.g. due to a transition from or to DST time)
• ‘coerce’ will return NaT if the timestamp can not be converted to the specified
time zone
New in version 0.19.0.
Returns DatetimeIndex
Index converted to the specified time zone.
Raises TypeError
If the DatetimeIndex is tz-aware and tz is not None.
See also:
Examples
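The tz_aware index used below is not defined in this extract; a setup consistent with the output would be:
>>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3)
>>> tz_aware = tz_naive.tz_localize(tz='US/Eastern')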
With tz=None, we can remove the time zone information while keeping the local time (not converted to
UTC):
>>> tz_aware.tz_localize(None)
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq='D')
34.3.13.31 pandas.Series.dt.tz_convert
Series.dt.tz_convert(*args, **kwargs)
Convert tz-aware DatetimeIndex from one time zone to another.
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to this time zone
of the DatetimeIndex. A tz of None will convert to UTC and remove the timezone
information.
Returns
normalized [DatetimeIndex]
Raises TypeError
If DatetimeIndex is tz-naive.
See also:
Examples
With the tz parameter, we can change the DatetimeIndex to other time zones:
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
'2014-08-01 10:00:00+02:00',
'2014-08-01 11:00:00+02:00'],
dtype='datetime64[ns, Europe/Berlin]', freq='H')
>>> dti.tz_convert('US/Central')
DatetimeIndex(['2014-08-01 02:00:00-05:00',
'2014-08-01 03:00:00-05:00',
'2014-08-01 04:00:00-05:00'],
dtype='datetime64[ns, US/Central]', freq='H')
With tz=None, we can remove the timezone (after converting to UTC if necessary):
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
'2014-08-01 10:00:00+02:00',
'2014-08-01 11:00:00+02:00'],
dtype='datetime64[ns, Europe/Berlin]', freq='H')
>>> dti.tz_convert(None)
DatetimeIndex(['2014-08-01 07:00:00',
'2014-08-01 08:00:00',
'2014-08-01 09:00:00'],
dtype='datetime64[ns]', freq='H')
34.3.13.32 pandas.Series.dt.normalize
Series.dt.normalize(*args, **kwargs)
Convert times to midnight.
The time component of the date-time is converted to midnight, i.e. 00:00:00. This is useful in cases when the
time does not matter. Length is unaltered. The timezones are unaffected.
This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex.
Returns DatetimeIndex or Series
The same type as the original data. Series will have the same name and index.
DatetimeIndex will have the same name.
See also:
Examples
34.3.13.33 pandas.Series.dt.strftime
Series.dt.strftime(*args, **kwargs)
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which supports the same string format as the
python standard library. Details of the string format can be found in python string format doc
Parameters date_format : str
Date format string (e.g. “%Y-%m-%d”).
Returns Index
Index of formatted strings
See also:
Examples
34.3.13.34 pandas.Series.dt.round
Series.dt.round(*args, **kwargs)
round the data to the specified freq.
Parameters freq : str or Offset
The frequency level to round the index to. Must be a fixed frequency like ‘S’ (second)
not ‘ME’ (month end). See frequency aliases for a list of possible freq values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the
same index for a Series.
Raises
ValueError if the ‘freq‘ cannot be converted.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
Series
>>> pd.Series(rng).dt.round("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
34.3.13.35 pandas.Series.dt.floor
Series.dt.floor(*args, **kwargs)
floor the data to the specified freq.
Parameters freq : str or Offset
The frequency level to floor the index to. Must be a fixed frequency like ‘S’ (second)
not ‘ME’ (month end). See frequency aliases for a list of possible freq values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the
same index for a Series.
Raises
ValueError if the ‘freq‘ cannot be converted.
Examples
DatetimeIndex
Series
>>> pd.Series(rng).dt.floor("H")
0 2018-01-01 11:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
34.3.13.36 pandas.Series.dt.ceil
Series.dt.ceil(*args, **kwargs)
ceil the data to the specified freq.
Parameters freq : str or Offset
The frequency level to ceil the index to. Must be a fixed frequency like ‘S’ (second)
not ‘ME’ (month end). See frequency aliases for a list of possible freq values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the
same index for a Series.
Raises
ValueError if the ‘freq‘ cannot be converted.
Examples
DatetimeIndex
Series
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
34.3.13.37 pandas.Series.dt.month_name
Series.dt.month_name(*args, **kwargs)
Return the month names of the DateTimeIndex with specified locale.
Parameters locale : string, default None (English locale)
locale determining the language in which to return the month name
Returns month_names : Index
Index of month names
New in version 0.23.0.
34.3.13.38 pandas.Series.dt.day_name
Series.dt.day_name(*args, **kwargs)
Return the day names of the DateTimeIndex with specified locale.
Parameters locale : string, default None (English locale)
locale determining the language in which to return the day name
Returns day_names : Index
Index of day names
New in version 0.23.0.
Timedelta Properties
34.3.13.39 pandas.Series.dt.days
Series.dt.days
Number of days for each element.
34.3.13.40 pandas.Series.dt.seconds
Series.dt.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
34.3.13.41 pandas.Series.dt.microseconds
Series.dt.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
34.3.13.42 pandas.Series.dt.nanoseconds
Series.dt.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
34.3.13.43 pandas.Series.dt.components
Series.dt.components
Return a dataframe of the components (days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds) of the Timedeltas.
34.3.13.44 pandas.Series.dt.to_pytimedelta
Series.dt.to_pytimedelta()
Return an array of native datetime.timedelta objects.
Python's standard datetime library uses a different representation for timedeltas. This method converts a Series of
pandas Timedeltas to datetime.timedelta format with the same length as the original Series.
Returns a : numpy.ndarray
1D array containing data with datetime.timedelta type.
See also:
datetime.timedelta
Examples
>>> s.dt.to_pytimedelta()
array([datetime.timedelta(0), datetime.timedelta(1),
datetime.timedelta(2), datetime.timedelta(3),
datetime.timedelta(4)], dtype=object)
34.3.13.45 pandas.Series.dt.total_seconds
Series.dt.total_seconds(*args, **kwargs)
Return total duration of each element expressed in seconds.
This method is available directly on TimedeltaIndex and on Series containing timedelta values under the .dt
namespace.
Returns seconds : Float64Index or Series
When the calling object is a TimedeltaIndex, the return type is a Float64Index. When
the calling object is a Series, the return type is Series of type float64 whose index is
the same as the original.
See also:
Examples
Series
>>> s.dt.total_seconds()
0 0.0
1 86400.0
2 172800.0
3 259200.0
4 345600.0
dtype: float64
TimedeltaIndex
>>> idx.total_seconds()
Float64Index([0.0, 86400.0, 172800.0, 259200.00000000003, 345600.0],
dtype='float64')
Series.str can be used to access the values of the series as strings and apply several methods to it. These can be
accessed like Series.str.<function/property>.
34.3.14.1 pandas.Series.str.capitalize
Series.str.capitalize()
Convert strings in the Series/Index to be capitalized.
Equivalent to str.capitalize().
Returns
Series/Index of objects
See also:
Examples
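The Series used in these casing examples is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])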
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
34.3.14.2 pandas.Series.str.cat
Examples
When not passing others, all values are concatenated into a single string:
By default, NA values in the Series are ignored. Using na_rep, they can be given a representation:
If others is specified, corresponding values are concatenated with the separator. Result will be a Series of strings.
Missing values will remain missing in the result, but can again be represented using na_rep
Series with different indexes can be aligned before concatenation. The join-keyword works as in other methods.
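A minimal sketch of the behaviours described above (values are illustrative):
>>> s = pd.Series(['a', 'b', np.nan, 'd'])
>>> s.str.cat(sep=' ')
'a b d'
>>> s.str.cat(sep=' ', na_rep='?')
'a b ? d'
>>> s.str.cat(['A', 'B', 'C', 'D'], sep=',', na_rep='-')
0    a,A
1    b,B
2    -,C
3    d,D
dtype: object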
34.3.14.3 pandas.Series.str.center
Series.str.center(width, fillchar=’ ’)
Filling left and right side of strings in the Series/Index with an additional character. Equivalent to str.
center().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with
fillchar
fillchar : str
Additional character for filling, default is whitespace
Returns
filled [Series/Index of objects]
34.3.14.4 pandas.Series.str.contains
Examples
Specifying na to be False instead of NaN replaces NaN values with False. If Series or Index does not contain
NaN values the resultant dtype will be bool, otherwise, an object dtype.
>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0 False
1 False
2 True
3 False
4 NaN
dtype: object
Ensure pat is not a literal pattern when regex is set to True. Note in the following example one might expect
only s2[1] and s2[3] to return True. However, '.0' as a regex matches any character followed by a 0.
>>> s2 = pd.Series(['40','40.0','41','41.0','35'])
>>> s2.str.contains('.0', regex=True)
0 True
1 True
2 False
3 True
4 False
dtype: bool
34.3.14.5 pandas.Series.str.count
Notes
Some characters need to be escaped when passing in pat. For example, '$' has a special meaning in regex and must be
escaped when finding this literal character.
Examples
34.3.14.6 pandas.Series.str.decode
Series.str.decode(encoding, errors=’strict’)
Decode character string in the Series/Index using indicated encoding. Equivalent to str.decode() in
python2 and bytes.decode() in python3.
Parameters
encoding [str]
34.3.14.7 pandas.Series.str.encode
Series.str.encode(encoding, errors=’strict’)
Encode character string in the Series/Index using indicated encoding. Equivalent to str.encode().
Parameters
encoding [str]
errors [str, optional]
Returns
encoded [Series/Index of objects]
34.3.14.8 pandas.Series.str.endswith
Series.str.endswith(pat, na=nan)
Test if the end of each string element matches a pattern.
Equivalent to str.endswith().
Parameters pat : str
Character sequence. Regular expressions are not accepted.
na : object, default NaN
Object shown if element tested is not a string.
Returns Series or Index of bool
A Series of booleans indicating whether the given pattern matches the end of each
string element.
See also:
Examples
>>> s.str.endswith('t')
0 True
1 False
2 False
3 NaN
dtype: object
34.3.14.9 pandas.Series.str.extract
Examples
A pattern with two groups will return a DataFrame with two columns. Non-matches will be NaN.
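The Series used below is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series(['a1', 'b2', 'c3'])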
>>> s.str.extract(r'([ab])?(\d)')
0 1
0 a 1
1 b 2
2 NaN 3
>>> s.str.extract(r'(?P<letter>[ab])(?P<digit>\d)')
letter digit
0 a 1
1 b 2
2 NaN NaN
A pattern with one group will return a DataFrame with one column if expand=True.
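A sketch of the one-group case:
>>> s.str.extract(r'[ab](\d)', expand=True)
     0
0    1
1    2
2  NaN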
34.3.14.10 pandas.Series.str.extractall
Series.str.extractall(pat, flags=0)
For each subject string in the Series, extract groups from all matches of regular expression pat. When each
subject string in the Series has exactly one match, extractall(pat).xs(0, level=’match’) is the same as extract(pat).
New in version 0.18.0.
Parameters pat : string
Regular expression pattern with capturing groups
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
Returns
A DataFrame with one row for each match, and one column for each
group. Its rows have a MultiIndex with first levels that come from
the subject Series. The last level is named ‘match’ and indicates
the order in the subject. Any capture group names in regular
expression pat will be used for column names; otherwise capture
group numbers will be used.
See also:
Examples
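The Series used below is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])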
A pattern with one group will return a DataFrame with one column. Indices with no matches will not appear in
the result.
Capture group names are used for column names of the result.
>>> s.str.extractall(r"[ab](?P<digit>\d)")
digit
match
A 0 1
1 2
B 0 1
A pattern with two groups will return a DataFrame with two columns.
>>> s.str.extractall(r"(?P<letter>[ab])(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
>>> s.str.extractall(r"(?P<letter>[ab])?(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 NaN 1
34.3.14.11 pandas.Series.str.find
34.3.14.12 pandas.Series.str.findall
count Count occurrences of pattern or regular expression in each string of the Series/Index.
extractall For each string in the Series, extract groups from all matches of regular expression and return a
DataFrame with one row for each match and one column for each group.
re.findall The equivalent re function to all non-overlapping matches of pattern or regular expression in
string, as a list of strings.
Examples
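The Series searched below is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series(['Lion', 'Monkey', 'Rabbit'])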
>>> s.str.findall('Monkey')
0 []
1 [Monkey]
2 []
dtype: object
On the other hand, the search for the pattern ‘MONKEY’ doesn’t return any match:
>>> s.str.findall('MONKEY')
0 []
1 []
2 []
dtype: object
Flags can be added to the pattern or regular expression. For instance, to find the pattern ‘MONKEY’ ignoring
the case:
>>> import re
>>> s.str.findall('MONKEY', flags=re.IGNORECASE)
0 []
1 [Monkey]
2 []
dtype: object
When the pattern matches more than one string in the Series, all matches are returned:
>>> s.str.findall('on')
0 [on]
1 [on]
2 []
dtype: object
Regular expressions are supported too. For instance, the search for all the strings ending with the word ‘on’ is
shown next:
>>> s.str.findall('on$')
0 [on]
1 []
2 []
dtype: object
If the pattern is found more than once in the same string, then a list of multiple strings is returned:
>>> s.str.findall('b')
0 []
1 []
2 [b, b]
dtype: object
34.3.14.13 pandas.Series.str.get
Series.str.get(i)
Extract element from each component at specified position.
Extract element from lists, tuples, or strings in each element in the Series/Index.
Parameters i : int
Examples
>>> s = pd.Series(["String",
(1, 2, 3),
["a", "b", "c"],
123, -456,
{1:"Hello", "2":"World"}])
>>> s
0 String
1 (1, 2, 3)
2 [a, b, c]
3 123
4 -456
5 {1: 'Hello', '2': 'World'}
dtype: object
>>> s.str.get(1)
0 t
1 2
2 b
3 NaN
4 NaN
5 Hello
dtype: object
>>> s.str.get(-1)
0 g
1 3
2 c
3 NaN
4 NaN
5 NaN
dtype: object
34.3.14.14 pandas.Series.str.index
34.3.14.15 pandas.Series.str.join
Series.str.join(sep)
Join lists contained as elements in the Series/Index with passed delimiter.
If the elements of a Series are lists themselves, join the content of these lists using the delimiter passed to the
function. This function is an equivalent to str.join().
Parameters sep : str
Delimiter to use between list entries.
Returns
Series/Index: object
See also:
Notes
If any of the lists does not contain string objects the result of the join will be NaN.
Examples
Join all lists using a '-'; the lists containing object(s) of types other than str will become NaN.
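The Series joined below is not shown in this extract; a setup consistent with the output (only the first row is a list made entirely of strings) would be:
>>> s = pd.Series([['lion', 'elephant', 'zebra'],
...                [1.1, 2.2, 3.3],
...                ['cat', np.nan, 'dog'],
...                ['cow', 4.5, 'goat'],
...                ['duck', ['swan', 'fish'], 'guppy']])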
>>> s.str.join('-')
0 lion-elephant-zebra
1 NaN
2 NaN
3 NaN
4 NaN
dtype: object
34.3.14.16 pandas.Series.str.len
Series.str.len()
Compute length of each string in the Series/Index.
Returns
lengths [Series/Index of integer values]
34.3.14.17 pandas.Series.str.ljust
Series.str.ljust(width, fillchar=’ ’)
Filling right side of strings in the Series/Index with an additional character. Equivalent to str.ljust().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with
fillchar
fillchar : str
Additional character for filling, default is whitespace
Returns
filled [Series/Index of objects]
34.3.14.18 pandas.Series.str.lower
Series.str.lower()
Convert strings in the Series/Index to lowercase.
Equivalent to str.lower().
Returns
Series/Index of objects
See also:
Examples
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
34.3.14.19 pandas.Series.str.lstrip
Series.str.lstrip(to_strip=None)
Strip whitespace (including newlines) from each string in the Series/Index from left side. Equivalent to str.
lstrip().
Returns
stripped [Series/Index of objects]
34.3.14.20 pandas.Series.str.match
as_indexer
Deprecated since version 0.21.0.
Returns
Series/array of boolean values
See also:
34.3.14.21 pandas.Series.str.normalize
Series.str.normalize(form)
Return the Unicode normal form for the strings in the Series/Index. For more information on the forms, see
unicodedata.normalize().
Parameters form : {‘NFC’, ‘NFKC’, ‘NFD’, ‘NFKD’}
Unicode form
Returns
normalized [Series/Index of objects]
34.3.14.22 pandas.Series.str.pad
fillchar : str
Additional character for filling, default is whitespace
Returns
padded [Series/Index of objects]
34.3.14.23 pandas.Series.str.partition
Series.str.partition(pat=’ ’, expand=True)
Split the string at the first occurrence of sep, and return 3 elements containing the part before the separator, the
separator itself, and the part after the separator. If the separator is not found, return 3 elements containing the
string itself, followed by two empty strings.
Parameters pat : string, default whitespace
String to split on.
expand : bool, default True
• If True, return DataFrame/MultiIndex expanding dimensionality.
• If False, return Series/Index.
Returns
split [DataFrame/MultiIndex or Series/Index of objects]
See also:
Examples
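The Series partitioned below is not shown in this extract; a setup consistent with the outputs would be:
>>> s = pd.Series(['A_B_C', 'D_E_F', 'X'])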
>>> s.str.partition('_')
0 1 2
0 A _ B_C
1 D _ E_F
2 X
>>> s.str.rpartition('_')
0 1 2
0 A_B _ C
1 D_E _ F
2 X
34.3.14.24 pandas.Series.str.repeat
Series.str.repeat(repeats)
Duplicate each string in the Series/Index by indicated number of times.
Parameters repeats : int or array
Same value for all (int) or different value per (array)
Returns
repeated [Series/Index of objects]
34.3.14.25 pandas.Series.str.replace
Notes
When pat is a compiled regex, all flags should be included in the compiled regex. Use of case, flags, or
regex=False with a compiled regex will raise an error.
Examples
When pat is a string and regex is True (the default), the given pat is compiled as a regex. When repl is a string,
it replaces matching regex patterns as with re.sub(). NaN value(s) in the Series are left as is:
When pat is a string and regex is False, every pat is replaced with repl as with str.replace():
When repl is a callable, it is called on every pat using re.sub(). The callable should expect one positional
argument (a regex object) and return a string.
To get the idea:
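A minimal sketch of a callable repl (it reverses each run of lowercase letters):
>>> repl = lambda m: m.group(0)[::-1]
>>> pd.Series(['foo 123', 'bar baz', np.nan]).str.replace(r'[a-z]+', repl)
0    oof 123
1    rab zab
2        NaN
dtype: object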
34.3.14.26 pandas.Series.str.rfind
34.3.14.27 pandas.Series.str.rindex
34.3.14.28 pandas.Series.str.rjust
Series.str.rjust(width, fillchar=’ ’)
Filling left side of strings in the Series/Index with an additional character. Equivalent to str.rjust().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with
fillchar
fillchar : str
Additional character for filling, default is whitespace
Returns
filled [Series/Index of objects]
34.3.14.29 pandas.Series.str.rpartition
Series.str.rpartition(pat=’ ’, expand=True)
Split the string at the last occurrence of sep, and return 3 elements containing the part before the separator, the
separator itself, and the part after the separator. If the separator is not found, return 3 elements containing two
empty strings, followed by the string itself.
Parameters pat : string, default whitespace
String to split on.
expand : bool, default True
• If True, return DataFrame/MultiIndex expanding dimensionality.
• If False, return Series/Index.
Returns
split [DataFrame/MultiIndex or Series/Index of objects]
See also:
Examples
>>> s.str.partition('_')
0 1 2
0 A _ B_C
1 D _ E_F
2 X
>>> s.str.rpartition('_')
0 1 2
0 A_B _ C
1 D_E _ F
2 X
34.3.14.30 pandas.Series.str.rstrip
Series.str.rstrip(to_strip=None)
Strip whitespace (including newlines) from each string in the Series/Index from right side. Equivalent to str.
rstrip().
Returns
stripped [Series/Index of objects]
34.3.14.31 pandas.Series.str.slice
34.3.14.32 pandas.Series.str.slice_replace
Examples
Specify just start, meaning replace start until the end of the string with repl.
Specify just stop, meaning the start of the string to stop is replaced with repl, and the rest of the string is included.
Specify start and stop, meaning the slice from start to stop is replaced with repl. Everything before or after start
and stop is included as is.
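A short sketch with assumed sample data illustrating the start/stop behaviour described above:
>>> s = pd.Series(['a', 'ab', 'abc', 'abdc', 'abcde'])
>>> s.str.slice_replace(1, 3, 'X')
0      aX
1      aX
2      aX
3     aXc
4    aXde
dtype: object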
34.3.14.33 pandas.Series.str.split
Notes
Examples
By default, split will return an object of the same size having lists containing the split elements
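The Series used in the outputs below can be built, for example, as:
>>> s = pd.Series(["this is good text", "but this is even better"])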
>>> s.str.split()
0 [this, is, good, text]
1 [but, this, is, even, better]
dtype: object
>>> s.str.split("random")
0 [this is good text]
1 [but this is even better]
dtype: object
When using expand=True, the split elements will expand out into separate columns.
For Series object, output return type is DataFrame.
>>> s.str.split(expand=True)
0 1 2 3 4
0 this is good text None
1 but this is even better
>>> s.str.split(" is ", expand=True)
0 1
0 this good text
1 but this even better
>>> i = pd.Index(["ba 100 001", "ba 101 002", "ba 102 003"])
>>> i.str.split(expand=True)
MultiIndex(levels=[['ba'], ['100', '101', '102'], ['001', '002', '003']],
labels=[[0, 0, 0], [0, 1, 2], [0, 1, 2]])
34.3.14.34 pandas.Series.str.rsplit
34.3.14.35 pandas.Series.str.startswith
Series.str.startswith(pat, na=nan)
Test if the start of each string element matches a pattern.
Equivalent to str.startswith().
Parameters pat : str
Character sequence. Regular expressions are not accepted.
Examples
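One possible setup consistent with the output shown:
>>> s = pd.Series(['bat', 'Bear', 'cat', np.nan])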
>>> s.str.startswith('b')
0 True
1 False
2 False
3 NaN
dtype: object
34.3.14.36 pandas.Series.str.strip
Series.str.strip(to_strip=None)
Strip whitespace (including newlines) from each string in the Series/Index from left and right sides. Equivalent
to str.strip().
Returns
stripped [Series/Index of objects]
34.3.14.37 pandas.Series.str.swapcase
Series.str.swapcase()
Convert strings in the Series/Index to be swapcased.
Equivalent to str.swapcase().
Returns
Series/Index of objects
See also:
Examples
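A Series matching the case-conversion outputs below could be constructed as:
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])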
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
34.3.14.38 pandas.Series.str.title
Series.str.title()
Convert strings in the Series/Index to titlecase.
Equivalent to str.title().
Returns
Series/Index of objects
See also:
Examples
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
34.3.14.39 pandas.Series.str.translate
Series.str.translate(table, deletechars=None)
Map all characters in the string through the given mapping table. Equivalent to standard str.translate().
Note that the optional argument deletechars is only valid if you are using python 2. For python 3, character
deletion should be specified via the table argument.
Parameters table : dict (python 3), str or None (python 2)
In python 3, table is a mapping of Unicode ordinals to Unicode ordinals, strings,
or None. Unmapped characters are left untouched. Characters mapped to None are
deleted. str.maketrans() is a helper function for making translation tables.
In python 2, table is either a string of length 256 or None. If the table argument is
None, no translation is applied and the operation simply removes the characters in
deletechars. string.maketrans() is a helper function for making translation
tables.
deletechars : str, optional (python 2)
A string of characters to delete. This argument is only valid in python 2.
Returns
translated [Series/Index of objects]
34.3.14.40 pandas.Series.str.upper
Series.str.upper()
Convert strings in the Series/Index to uppercase.
Equivalent to str.upper().
Returns
Series/Index of objects
See also:
Examples
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
34.3.14.41 pandas.Series.str.wrap
Series.str.wrap(width, **kwargs)
Wrap long strings in the Series/Index to be formatted in paragraphs with length less than a given width.
This method has the same keyword parameters and defaults as textwrap.TextWrapper.
Parameters width : int
Maximum line-width
expand_tabs : bool, optional
If true, tab characters will be expanded to spaces (default: True)
replace_whitespace : bool, optional
If true, each whitespace character (as defined by string.whitespace) remaining after
tab expansion will be replaced by a single space (default: True)
drop_whitespace : bool, optional
If true, whitespace that, after wrapping, happens to end up at the beginning or end of
a line is dropped (default: True)
break_long_words : bool, optional
If true, then words longer than width will be broken in order to ensure that no lines
are longer than width. If it is false, long words will not be broken, and some lines
may be longer than width. (default: True)
break_on_hyphens : bool, optional
If true, wrapping will occur preferably on whitespace and right after hyphens in compound
words, as it is customary in English. If false, only whitespaces will be considered as
potentially good places for line breaks, but you need to set break_long_words to false
if you want truly insecable words. (default: True)
Returns
wrapped [Series/Index of objects]
Notes
Internally, this method uses a textwrap.TextWrapper instance with default settings. To achieve behavior
matching R’s stringr library str_wrap function, use the arguments:
• expand_tabs = False
• replace_whitespace = True
• drop_whitespace = True
• break_long_words = False
• break_on_hyphens = False
Examples
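A minimal sketch with assumed sample data:
>>> s = pd.Series(['line to be wrapped', 'another line to be wrapped'])
>>> s.str.wrap(12)
0             line to be\nwrapped
1    another line\nto be\nwrapped
dtype: object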
34.3.14.42 pandas.Series.str.zfill
Series.str.zfill(width)
Filling left side of strings in the Series/Index with 0. Equivalent to str.zfill().
Parameters width : int
Minimum width of resulting string; additional characters will be filled with 0
Returns
filled [Series/Index of objects]
34.3.14.43 pandas.Series.str.isalnum
Series.str.isalnum()
Check whether all characters in each string in the Series/Index are alphanumeric. Equivalent to str.isalnum().
Returns
is [Series/array of boolean values]
34.3.14.44 pandas.Series.str.isalpha
Series.str.isalpha()
Check whether all characters in each string in the Series/Index are alphabetic. Equivalent to str.isalpha().
Returns
is [Series/array of boolean values]
34.3.14.45 pandas.Series.str.isdigit
Series.str.isdigit()
Check whether all characters in each string in the Series/Index are digits. Equivalent to str.isdigit().
Returns
is [Series/array of boolean values]
34.3.14.46 pandas.Series.str.isspace
Series.str.isspace()
Check whether all characters in each string in the Series/Index are whitespace. Equivalent to str.isspace().
Returns
is [Series/array of boolean values]
34.3.14.47 pandas.Series.str.islower
Series.str.islower()
Check whether all characters in each string in the Series/Index are lowercase. Equivalent to str.islower().
Returns
is [Series/array of boolean values]
34.3.14.48 pandas.Series.str.isupper
Series.str.isupper()
Check whether all characters in each string in the Series/Index are uppercase. Equivalent to str.isupper().
Returns
is [Series/array of boolean values]
34.3.14.49 pandas.Series.str.istitle
Series.str.istitle()
Check whether all characters in each string in the Series/Index are titlecase. Equivalent to str.istitle().
Returns
is [Series/array of boolean values]
34.3.14.50 pandas.Series.str.isnumeric
Series.str.isnumeric()
Check whether all characters in each string in the Series/Index are numeric. Equivalent to str.isnumeric().
Returns
is [Series/array of boolean values]
34.3.14.51 pandas.Series.str.isdecimal
Series.str.isdecimal()
Check whether all characters in each string in the Series/Index are decimal. Equivalent to str.isdecimal().
Returns
is [Series/array of boolean values]
34.3.14.52 pandas.Series.str.get_dummies
Series.str.get_dummies(sep='|')
Split each string in the Series by sep and return a frame of dummy/indicator variables.
Parameters sep : string, default “|”
String to split on.
Returns
dummies [DataFrame]
See also:
pandas.get_dummies
Examples
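A minimal sketch with assumed sample data:
>>> pd.Series(['a|b', 'a', 'a|c']).str.get_dummies()
   a  b  c
0  1  1  0
1  1  0  0
2  1  0  1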
34.3.15 Categorical
Pandas defines a custom data type for representing data that can take only a limited, fixed set of values. The dtype of
a Categorical can be described by a pandas.api.types.CategoricalDtype.
api.types.CategoricalDtype([categories, ordered]) Type for categorical data with the categories and orderedness
34.3.15.1 pandas.api.types.CategoricalDtype
See also:
pandas.Categorical
Notes
This class is useful for specifying the type of a Categorical independent of the values. See CategoricalDtype
for more.
Examples
Attributes
pandas.api.types.CategoricalDtype.categories
CategoricalDtype.categories
An Index containing the unique categories allowed.
pandas.api.types.CategoricalDtype.ordered
CategoricalDtype.ordered
Whether the categories have an ordered relationship
Methods
None
34.3.15.2 pandas.Categorical
All values of the Categorical are either in categories or np.nan. Assigning values outside of categories will raise
a ValueError. Order is defined by the order of the categories, not lexical order of the values.
Parameters values : list-like
The values of the categorical. If categories are given, values not in categories will be
replaced with NaN.
categories : Index-like (unique), optional
The unique categories for this categorical. If not given, the categories are assumed
to be the unique values of values.
ordered : boolean, (default False)
Whether or not this categorical is treated as an ordered categorical. If not given, the
resulting categorical will not be ordered.
dtype : CategoricalDtype
An instance of CategoricalDtype to use for this categorical
New in version 0.21.0.
Raises ValueError
If the categories do not validate.
TypeError
If an explicit ordered=True is given but no categories and the values are not
sortable.
See also:
Notes
Examples
Ordered Categoricals can be sorted according to the custom order of the categories and can have a min and max
value.
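A minimal sketch with assumed sample data:
>>> c = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'], ordered=True,
...                    categories=['c', 'b', 'a'])
>>> c
[a, b, c, a, b, c]
Categories (3, object): [c < b < a]
>>> c.min()
'c'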
Attributes
pandas.Categorical.categories
Categorical.categories
The categories of this categorical.
Setting assigns new values to each category (effectively a rename of each individual category).
The assigned value has to be a list-like object. All items must be unique and the number of items in the
new categories must be the same as the number of items in the old categories.
Assigning to categories is an inplace operation!
Raises ValueError
If the new categories do not validate as categories or if the number of new
categories does not equal the number of old categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
pandas.Categorical.codes
Categorical.codes
The category codes of this categorical.
Level codes are an array of integers which are the positions of the real values in the categories array.
There is no setter; use the other categorical methods and the normal item setter to change values in the
categorical.
pandas.Categorical.ordered
Categorical.ordered
Whether the categories have an ordered relationship
pandas.Categorical.dtype
Categorical.dtype
The CategoricalDtype for this instance
Methods
from_codes(codes, categories[, ordered]) Make a Categorical type from codes and categories arrays.
__array__([dtype]) The numpy array interface.
pandas.Categorical.from_codes
pandas.Categorical.__array__
Categorical.__array__(dtype=None)
The numpy array interface.
Returns values : numpy array
A numpy array of either the specified dtype or, if dtype==None (default), the
same dtype as categorical.categories.dtype
The alternative Categorical.from_codes() constructor can be used when you have the categories and integer
codes already:
Categorical.from_codes(codes, categories[, ...]) Make a Categorical type from codes and categories arrays.
np.asarray(categorical) works by implementing the array interface. Be aware that this converts the
Categorical back to a NumPy array, so categories and order information is not preserved!
A Categorical can be stored in a Series or DataFrame. To create a Series of dtype category, use cat =
s.astype(dtype) or Series(..., dtype=dtype) where dtype is either
• the string 'category'
• an instance of CategoricalDtype.
If the Series is of dtype CategoricalDtype, Series.cat can be used to change the categorical data. This
accessor is similar to the Series.dt or Series.str and has the following usable methods and properties:
34.3.15.3 pandas.Series.cat.categories
Series.cat.categories
The categories of this categorical.
Setting assigns new values to each category (effectively a rename of each individual category).
The assigned value has to be a list-like object. All items must be unique and the number of items in the new
categories must be the same as the number of items in the old categories.
Assigning to categories is an inplace operation!
Raises ValueError
If the new categories do not validate as categories or if the number of new categories
does not equal the number of old categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.3.15.4 pandas.Series.cat.ordered
Series.cat.ordered
Whether the categories have an ordered relationship
34.3.15.5 pandas.Series.cat.codes
Series.cat.codes
34.3.15.6 pandas.Series.cat.rename_categories
Series.cat.rename_categories(*args, **kwargs)
Renames categories.
Parameters new_categories : list-like, dict-like or callable
• list-like: all items must be unique and the number of items in the new categories
must match the existing number of categories.
• dict-like: specifies a mapping from old categories to new. Categories not con-
tained in the mapping are passed through and extra categories in the mapping
are ignored.
New in version 0.21.0.
• callable : a callable that is called on all items in the old categories and whose
return values comprise the new categories.
New in version 0.23.0.
Warning: Currently, Series are considered list like. In a future version of pandas
they’ll be considered dict-like.
Examples
For dict-like new_categories, extra keys are ignored and categories not in the dictionary are passed through
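A minimal sketch with assumed sample data:
>>> s = pd.Series(pd.Categorical(['a', 'a', 'b']))
>>> s.cat.rename_categories({'a': 'A', 'd': 'D'})
0    A
1    A
2    b
dtype: category
Categories (2, object): [A, b]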
34.3.15.7 pandas.Series.cat.reorder_categories
Series.cat.reorder_categories(*args, **kwargs)
Reorders categories as specified in new_categories.
new_categories need to include all old categories and no new category items.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, optional
Whether or not the categorical is treated as an ordered categorical. If not given, do
not change the ordered information.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns
cat [Categorical with reordered categories or None if inplace.]
Raises ValueError
If the new categories do not contain all old category items or any new ones
See also:
rename_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.3.15.8 pandas.Series.cat.add_categories
Series.cat.add_categories(*args, **kwargs)
Add new categories.
new_categories will be included at the last/highest place in the categories and will be unused directly after this
call.
34.3.15.9 pandas.Series.cat.remove_categories
Series.cat.remove_categories(*args, **kwargs)
Removes the specified categories.
removals must be included in the old categories. Values which were in the removed categories will be set to
NaN
Parameters removals : category or list of categories
The categories which should be removed.
inplace : boolean (default: False)
Whether or not to remove the categories inplace or return a copy of this categorical
with removed categories.
Returns
cat [Categorical with removed categories or None if inplace.]
Raises ValueError
If the removals are not contained in the categories
See also:
rename_categories, reorder_categories, add_categories,
remove_unused_categories, set_categories
34.3.15.10 pandas.Series.cat.remove_unused_categories
Series.cat.remove_unused_categories(*args, **kwargs)
Removes categories which are not used.
Parameters inplace : boolean (default: False)
Whether or not to drop unused categories inplace or return a copy of this categorical
with unused categories dropped.
Returns
cat [Categorical with unused categories dropped or None if inplace.]
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
set_categories
34.3.15.11 pandas.Series.cat.set_categories
Series.cat.set_categories(*args, **kwargs)
Sets the categories to the specified new_categories.
new_categories can include new categories (which will result in unused categories) or remove old categories
(which results in values set to NaN). If rename==True, the categories will simply be renamed (fewer or more
items than in old categories will result in values set to NaN or in unused categories respectively).
This method can be used to perform more than one action of adding, removing, and reordering simultaneously
and is therefore faster than performing the individual steps via the more specialised methods.
On the other hand, this method does not do any checks (e.g., whether the old categories are included in the new
categories on a reorder), which can result in surprising changes, for example when using special string dtypes
on python3, which does not consider an S1 string equal to a single-character python string.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, (default: False)
Whether or not the categorical is treated as an ordered categorical. If not given, do
not change the ordered information.
rename : boolean (default: False)
Whether or not the new_categories should be considered as a rename of the old
categories or as reordered categories.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns
cat [Categorical with reordered categories or None if inplace.]
Raises ValueError
If new_categories does not validate as categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories
34.3.15.12 pandas.Series.cat.as_ordered
Series.cat.as_ordered(*args, **kwargs)
Sets the Categorical to be ordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical
with ordered set to True
34.3.15.13 pandas.Series.cat.as_unordered
Series.cat.as_unordered(*args, **kwargs)
Sets the Categorical to be unordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical
with ordered set to False
34.3.16 Plotting
Series.plot is both a callable method and a namespace attribute for specific plotting methods of the form
Series.plot.<kind>.
34.3.16.1 pandas.Series.plot.area
Series.plot.area(**kwds)
Area plot
Parameters **kwds : optional
Additional keyword arguments are documented in pandas.Series.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
34.3.16.2 pandas.Series.plot.bar
Series.plot.bar(**kwds)
Vertical bar plot
Parameters **kwds : optional
Additional keyword arguments are documented in pandas.Series.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
34.3.16.3 pandas.Series.plot.barh
Series.plot.barh(**kwds)
Horizontal bar plot
Parameters **kwds : optional
Additional keyword arguments are documented in pandas.Series.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
34.3.16.4 pandas.Series.plot.box
Series.plot.box(**kwds)
Boxplot
Parameters **kwds : optional
Additional keyword arguments are documented in pandas.Series.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
34.3.16.5 pandas.Series.plot.density
Examples
Given a Series of points randomly sampled from an unknown distribution, estimate its PDF using KDE with
automatic bandwidth determination and plot the results, evaluating them at 1000 equally spaced points (default):
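For instance, with an assumed small sample Series:
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()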
A scalar bandwidth can be specified. Using a small bandwidth value can lead to overfitting, while using a large
bandwidth value may result in underfitting:
>>> ax = s.plot.kde(bw_method=0.3)
>>> ax = s.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the plot of the estimated PDF:
34.3.16.6 pandas.Series.plot.hist
Series.plot.hist(bins=10, **kwds)
Histogram
Parameters bins: integer, default 10
Number of histogram bins to be used
**kwds : optional
Additional keyword arguments are documented in pandas.Series.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
34.3.16.7 pandas.Series.plot.kde
Examples
Given a Series of points randomly sampled from an unknown distribution, estimate its PDF using KDE with
automatic bandwidth determination and plot the results, evaluating them at 1000 equally spaced points (default):
A scalar bandwidth can be specified. Using a small bandwidth value can lead to overfitting, while using a large
bandwidth value may result in underfitting:
>>> ax = s.plot.kde(bw_method=0.3)
>>> ax = s.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the plot of the estimated PDF:
34.3.16.8 pandas.Series.plot.line
Series.plot.line(**kwds)
Line plot
Parameters **kwds : optional
Additional keyword arguments are documented in pandas.Series.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
Examples
34.3.16.9 pandas.Series.plot.pie
Series.plot.pie(**kwds)
Pie chart
Series.hist([by, ax, grid, xlabelsize, ...]) Draw histogram of the input series using matplotlib
34.3.18 Sparse
34.3.18.1 pandas.SparseSeries.to_coo
Sort the row and column labels before forming the sparse matrix.
Returns
y [scipy.sparse.coo_matrix]
rows [list (row labels)]
columns [list (column labels)]
Examples
34.3.18.2 pandas.SparseSeries.from_coo
Examples
34.4 DataFrame
34.4.1 Constructor
34.4.1.1 pandas.DataFrame
Examples
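A frame consistent with the output below can be constructed from a dict of columns, for example:
>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> df = pd.DataFrame(data=d)
>>> df
   col1  col2
0     1     3
1     2     4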
>>> df.dtypes
col1 int64
col2 int64
dtype: object
Attributes
pandas.DataFrame.T
DataFrame.T
Transpose index and columns.
Reflect the DataFrame over its main diagonal by writing rows as columns and vice-versa. The property T
is an accessor to the method transpose().
Parameters copy : bool, default False
If True, the underlying data is copied. Otherwise (default), no copy is made if
possible.
*args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with
numpy.
Returns DataFrame
The transposed DataFrame.
See also:
Notes
Transposing a DataFrame with mixed dtypes will result in a homogeneous DataFrame with the object
dtype. In such a case, a copy of the data is always made.
Examples
When the dtype is homogeneous in the original DataFrame, we get a transposed DataFrame with the same
dtype:
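For illustration, a frame consistent with the outputs below could be constructed as:
>>> d1 = {'col1': [1, 2], 'col2': [3, 4]}
>>> df1 = pd.DataFrame(data=d1)
>>> df1_transposed = df1.T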
>>> df1.dtypes
col1 int64
col2 int64
dtype: object
>>> df1_transposed.dtypes
0 int64
1 int64
dtype: object
When the DataFrame has mixed dtypes, we get a transposed DataFrame with the object dtype:
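A mixed-dtype frame matching the dtypes shown below might be, for example:
>>> d2 = {'name': ['Alice', 'Bob'],
...       'score': [9.5, 8],
...       'employed': [False, True],
...       'kids': [0, 0]}
>>> df2 = pd.DataFrame(data=d2)
>>> df2_transposed = df2.T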
>>> df2.dtypes
name object
score float64
employed bool
kids int64
dtype: object
>>> df2_transposed.dtypes
0 object
1 object
dtype: object
pandas.DataFrame.at
DataFrame.at
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use at if you only need to get or set a single
value in a DataFrame or Series.
Raises KeyError
When label does not exist in DataFrame
See also:
Examples
>>> df.loc[5].at['B']
4
pandas.DataFrame.axes
DataFrame.axes
Return a list representing the axes of the DataFrame.
It has the row axis labels and column axis labels as the only members. They are returned in that order.
Examples
pandas.DataFrame.blocks
DataFrame.blocks
Internal property, property synonym for as_blocks()
Deprecated since version 0.21.0.
pandas.DataFrame.columns
DataFrame.columns
The column labels of the DataFrame.
pandas.DataFrame.dtypes
DataFrame.dtypes
Return the dtypes in the DataFrame.
This returns a Series with the data type of each column. The result’s index is the original DataFrame’s
columns. Columns with mixed types are stored with the object dtype. See the User Guide for more.
Returns pandas.Series
The data type of each column.
See also:
Examples
pandas.DataFrame.empty
DataFrame.empty
Indicator whether DataFrame is empty.
True if DataFrame is entirely empty (no items), meaning any of the axes are of length 0.
Returns bool
If DataFrame is empty, return True, if not return False.
See also:
pandas.Series.dropna, pandas.DataFrame.dropna
Notes
If DataFrame contains only NaNs, it is still not considered empty. See the example below.
Examples
If we only have NaNs in our DataFrame, it is not considered empty! We will need to drop the NaNs to
make the DataFrame empty:
pandas.DataFrame.ftypes
DataFrame.ftypes
Return the ftypes (indication of sparse/dense and dtype) in DataFrame.
This returns a Series with the data type of each column. The result’s index is the original DataFrame’s
columns. Columns with mixed types are stored with the object dtype. See the User Guide for more.
Returns pandas.Series
The data type and indication of sparse/dense of each column.
See also:
Notes
Sparse data should have the same dtypes as its dense representation.
Examples
>>> pd.SparseDataFrame(arr).ftypes
0 float64:sparse
1 float64:sparse
2 float64:sparse
3 float64:sparse
dtype: object
pandas.DataFrame.iat
DataFrame.iat
Access a single value for a row/column pair by integer position.
Similar to iloc, in that both provide integer-based lookups. Use iat if you only need to get or set a
single value in a DataFrame or Series.
Raises IndexError
When integer position is out of bounds
See also:
Examples
>>> df.iat[1, 2]
1
>>> df.iat[1, 2] = 10
>>> df.iat[1, 2]
10
>>> df.loc[0].iat[1]
2
pandas.DataFrame.iloc
DataFrame.iloc
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used
with a boolean array.
Allowed inputs are:
• An integer, e.g. 5.
• A list or array of integers, e.g. [4, 3, 0].
• A slice object with ints, e.g. 1:7.
• A boolean array.
• A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which
allow out-of-bounds indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position
pandas.DataFrame.index
DataFrame.index
The index (row labels) of the DataFrame.
pandas.DataFrame.ix
DataFrame.ix
A primarily label-location based indexer, with integer position fallback.
Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
.ix[] supports mixed integer and label based access. It is primarily label based, but will fall back to
integer positional access unless the corresponding axis is of integer type.
.ix is the most general indexer and will support any of the inputs in .loc and .iloc. .ix also
supports floating point label schemes. .ix is exceptionally useful when dealing with mixed positional
and label based hierarchical indexes.
However, when an axis is integer based, ONLY label based access and not positional access is supported.
Thus, in such cases, it’s usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing.
pandas.DataFrame.loc
DataFrame.loc
Access a group of rows and columns by label(s) or a boolean array.
.loc[] is primarily label based, but may also be used with a boolean array.
Allowed inputs are:
• A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an
integer position along the index).
• A list or array of labels, e.g. ['a', 'b', 'c'].
• A slice object with labels, e.g. 'a':'f'.
Warning: Note that contrary to usual python slices, both the start and the stop are included
• A boolean array of the same length as the axis being sliced, e.g. [True, False, True].
• A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
See more at Selection by Label
Raises KeyError:
when any items are not found
See also:
Examples
Getting values
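The frame used in the first snippets below could be constructed as shown here (the outputs further below also reflect additional assignments that are not shown):
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
...                   index=['cobra', 'viper', 'sidewinder'],
...                   columns=['max_speed', 'shield'])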
>>> df.loc['viper']
max_speed 4
shield 5
Name: viper, dtype: int64
Slice with labels for row and single label for column. As mentioned above, note that both the start and
stop of the slice are included.
Setting values
Set value for all items matching the list of labels
>>> df.loc['cobra'] = 10
>>> df
max_speed shield
cobra 10 10
viper 4 50
sidewinder 7 50
Slice with integer labels for rows. As mentioned above, note that both the start and stop of the slice are
included.
>>> df.loc[7:9]
max_speed shield
7 1 2
8 4 5
9 7 8
>>> tuples = [
... ('cobra', 'mark i'), ('cobra', 'mark ii'),
... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
... ('viper', 'mark ii'), ('viper', 'mark iii')
... ]
>>> index = pd.MultiIndex.from_tuples(tuples)
>>> values = [[12, 2], [0, 4], [10, 20],
... [1, 4], [7, 1], [16, 36]]
>>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
>>> df
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
>>> df.loc['cobra']
max_speed shield
mark i 12 2
mark ii 0 4
Single label for row and column. Similar to passing in a tuple, this returns a Series.
Single tuple for the index with a single label for the column
pandas.DataFrame.ndim
DataFrame.ndim
Return an int representing the number of axes / array dimensions.
Return 1 if Series. Otherwise return 2 if DataFrame.
See also:
ndarray.ndim
Examples
pandas.DataFrame.shape
DataFrame.shape
Return a tuple representing the dimensionality of the DataFrame.
See also:
ndarray.shape
Examples
pandas.DataFrame.size
DataFrame.size
Return an int representing the number of elements in this object.
Return the number of rows if Series. Otherwise return the number of rows times number of columns if
DataFrame.
See also:
ndarray.size
Examples
pandas.DataFrame.style
DataFrame.style
Property returning a Styler object containing methods for building a styled HTML representation of the
DataFrame.
See also:
pandas.io.formats.style.Styler
pandas.DataFrame.values
DataFrame.values
Return a Numpy representation of the DataFrame.
Only the values in the DataFrame will be returned, the axes labels will be removed.
Returns numpy.ndarray
The values of the DataFrame.
See also:
Notes
The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes
(even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if
you are not dealing with the blocks.
e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. If dtypes are int32 and uint8,
dtype will be upcast to int32. By numpy.find_common_type() convention, mixing int64 and uint64
will result in a float64 dtype.
Examples
A DataFrame where all columns are the same type (e.g., int64) results in an array of the same type.
A DataFrame with mixed type columns (e.g., str/object, int64, float32) results in an ndarray of the broadest
type that accommodates these mixed types (e.g., object).
is_copy
Methods
pandas.DataFrame.abs
DataFrame.abs()
Return a Series/DataFrame with absolute numeric value of each element.
Notes
For complex inputs, 1.2 + 1j, the absolute value is √(a² + b²).
Examples
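A minimal sketch with assumed sample data:
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0    1.10
1    2.00
2    3.33
3    4.00
dtype: float64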
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
pandas.DataFrame.add
Notes
Examples
pandas.DataFrame.add_prefix
DataFrame.add_prefix(prefix)
Prefix labels with string prefix.
For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed.
Parameters prefix : str
The string to add before each label.
Returns Series or DataFrame
New Series or DataFrame with updated labels.
See also:
Examples
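Objects consistent with the outputs below could be constructed as:
>>> s = pd.Series([1, 2, 3, 4])
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})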
>>> s.add_prefix('item_')
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
>>> df.add_prefix('col_')
col_A col_B
0 1 3
1 2 4
2 3 5
3 4 6
pandas.DataFrame.add_suffix
DataFrame.add_suffix(suffix)
Suffix labels with string suffix.
For Series, the row labels are suffixed. For DataFrame, the column labels are suffixed.
Parameters suffix : str
The string to add after each label.
Returns Series or DataFrame
New Series or DataFrame with updated labels.
See also:
Examples
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
>>> df.add_suffix('_col')
A_col B_col
0 1 3
1 2 4
2 3 5
3 4 6
pandas.DataFrame.agg
Notes
Examples
pandas.DataFrame.aggregate
Notes
Examples
pandas.DataFrame.align
Value to use for missing values. Defaults to NaN, but can be any “compatible”
value
pandas.DataFrame.all
Examples
Series
DataFrames
Create a dataframe from a dictionary.
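For example, a frame consistent with the outputs below:
>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})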
>>> df.all()
col1 True
col2 False
dtype: bool
>>> df.all(axis='columns')
0 True
1 False
dtype: bool
>>> df.all(axis=None)
False
pandas.DataFrame.any
Examples
Series
For Series input, the output is a scalar indicating whether any element is True.
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
>>> df.any(axis=None)
True
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
pandas.DataFrame.append
Notes
If a list of dict/series is passed and the keys are all contained in the DataFrame’s index, the order of the
columns in the resulting DataFrame will be unchanged.
Iteratively appending rows to a DataFrame can be more computationally intensive than a single
concatenation. A better solution is to append those rows to a list and then concatenate the list with the
original DataFrame all at once.
Examples
The following, while not recommended methods for generating DataFrames, show two ways to generate
a DataFrame from multiple data sources.
Less efficient:
>>> df = pd.DataFrame(columns=['A'])
>>> for i in range(5):
... df = df.append({'A': i}, ignore_index=True)
>>> df
A
0 0
1 1
2 2
3 3
4 4
More efficient:
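One way to build the same frame with a single concatenation (a sketch, assuming pd.concat is acceptable here):
>>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)],
...           ignore_index=True)
   A
0  0
1  1
2  2
3  3
4  4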
pandas.DataFrame.apply
Notes
In the current implementation apply calls func twice on the first column/row to decide whether it can take
a fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take
effect twice for the first column/row.
Examples
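The frame used in the outputs below could be, for example:
>>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])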
Using a numpy universal function (in this case the same as np.sqrt(df)):
>>> df.apply(np.sqrt)
A B
0 2.0 3.0
1 2.0 3.0
2 2.0 3.0
Returning a Series inside the function is similar to passing result_type='expand'. The resulting
column names will be the Series index.
Passing result_type='broadcast' will ensure the same shape result, whether list-like or scalar is
returned by the function, and broadcast it along the axis. The resulting column names will be the originals.
pandas.DataFrame.applymap
DataFrame.applymap(func)
Apply a function to a Dataframe elementwise.
This method applies a function that accepts and returns a scalar to every element of a DataFrame.
Parameters func : callable
Python function, returns a single value from a single value.
Returns DataFrame
Transformed DataFrame.
See also:
Examples
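A minimal sketch with assumed sample data; the same frame is reused in the note below:
>>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]])
>>> df.applymap(lambda x: len(str(x)))
   0  1
0  3  4
1  5  5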
Note that a vectorized version of func often exists, which will be much faster. You could square each
number elementwise.
>>> df ** 2
0 1
0 1.000000 4.494400
1 11.262736 20.857489
pandas.DataFrame.as_blocks
DataFrame.as_blocks(copy=True)
Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
Deprecated since version 0.21.0.
NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
Parameters
copy [boolean, default True]
Returns
values [a dict of dtype -> Constructor Types]
pandas.DataFrame.as_matrix
DataFrame.as_matrix(columns=None)
Convert the frame to its Numpy-array representation.
Deprecated since version 0.23.0: Use DataFrame.values() instead.
Parameters columns: list, optional, default:None
If None, return all columns, otherwise, returns specified columns.
Returns values : ndarray
If the caller is heterogeneous and contains booleans or objects, the result will be
of dtype=object. See Notes.
See also:
pandas.DataFrame.values
Notes
pandas.DataFrame.asfreq
Notes
To learn more about the frequency strings, please see this link.
Examples
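A frame consistent with the output below could be constructed as:
>>> index = pd.date_range('1/1/2000', periods=4, freq='T')
>>> series = pd.Series([0.0, None, 2.0, 3.0], index=index)
>>> df = pd.DataFrame({'s': series})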
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
pandas.DataFrame.asof
DataFrame.asof(where, subset=None)
The last row without any NaN is taken (or the last row without NaN considering only the subset of
columns in the case of a DataFrame)
New in version 0.19.0: For DataFrame
If there is no good value, NaN is returned for a Series or a Series of NaN values for a DataFrame
Parameters
where [date or array of dates]
subset : string or list of strings, default None
if not None use these columns for NaN propagation
Returns where is scalar
• value or NaN if input is Series
• Series if input is DataFrame
See also:
merge_asof
Notes
pandas.DataFrame.assign
DataFrame.assign(**kwargs)
Assign new columns to a DataFrame, returning a new object (a copy) with the new columns added to the
original ones. Existing columns that are re-assigned will be overwritten.
Parameters kwargs : keyword, value pairs
keywords are the column names. If the values are callable, they are computed on
the DataFrame and assigned to the new columns. The callable must not change
input DataFrame (though pandas doesn’t check it). If the values are not callable,
(e.g. a Series, scalar, or array), they are simply assigned.
Returns df : DataFrame
A new DataFrame with the new columns in addition to all the existing columns.
Notes
Assigning multiple columns within the same assign is possible. For Python 3.6 and above, later items
in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned
into ‘df’ in order. For Python 3.5 and below, the order of keyword arguments is not specified, you cannot
refer to newly created or modified columns. All items are computed first, and then assigned in alphabetical
order.
Changed in version 0.23.0: Keyword argument order is maintained for Python 3.6 and later.
Examples
pandas.DataFrame.astype
Returns
casted [type of caller]
See also:
Examples
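A Series consistent with the output below could be constructed as:
>>> ser = pd.Series([1, 2], dtype='int32')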
>>> ser.astype('category')
0 1
1 2
dtype: category
Categories (2, int64): [1, 2]
Note that using copy=False and changing data on a new pandas object may propagate changes:
>>> s1 = pd.Series([1,2])
>>> s2 = s1.astype('int64', copy=False)
>>> s2[0] = 10
>>> s1 # note that s1[0] has changed too
0 10
1 2
dtype: int64
pandas.DataFrame.at_time
DataFrame.at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM).
Parameters
time [datetime.time or string]
Returns
values_at_time [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
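A frame consistent with the output below could be constructed as:
>>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)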
>>> ts.at_time('12:00')
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4
pandas.DataFrame.between_time
Examples
You get the times that are not between two times by setting start_time later than end_time:
pandas.DataFrame.bfill
pandas.DataFrame.bool
DataFrame.bool()
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a ValueError if the PandasObject does not
have exactly 1 element, or that element is not boolean
pandas.DataFrame.boxplot
Notes
Use return_type='dict' when you want to tweak the appearance of the lines after plotting. In this
case a dict containing the Lines making up the boxes, caps, fliers, medians, and whiskers is returned.
Examples
Boxplots can be created for every column in the dataframe by df.boxplot() or indicating the columns
to be used:
>>> np.random.seed(1234)
>>> df = pd.DataFrame(np.random.randn(10,4),
... columns=['Col1', 'Col2', 'Col3', 'Col4'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
Boxplots of variables distributions grouped by the values of a third variable can be created using the option
by. For instance:
A list of strings (i.e. ['X', 'Y']) can be passed to boxplot in order to group the data by combination
of the variables in the x-axis:
>>> df = pd.DataFrame(np.random.randn(10,3),
... columns=['Col1', 'Col2', 'Col3'])
>>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
... 'B', 'B', 'B', 'B', 'B'])
>>> df['Y'] = pd.Series(['A', 'B', 'A', 'B', 'A',
... 'B', 'A', 'B', 'A', 'B'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by=['X', 'Y'])
Additional formatting can be done to the boxplot, like suppressing the grid (grid=False), rotating the
labels in the x-axis (i.e. rot=45) or changing the fontsize (i.e. fontsize=15):
The parameter return_type can be used to select the type of element returned by boxplot. When
return_type='axes' is selected, the matplotlib axes on which the boxplot is drawn are returned:
If return_type is None, a NumPy array of axes with the same shape as layout is returned:
pandas.DataFrame.clip
Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
Clips using specific lower and upper thresholds per column element:
>>> t = pd.Series([2, -4, -1, 6, 3])
>>> t
0 2
1 -4
2 -1
3 6
4 3
dtype: int64
pandas.DataFrame.clip_lower
Series.clip Return copy of input with values below and above thresholds truncated.
Series.clip_upper Return copy of input with values above threshold truncated.
Examples
Series clipping element-wise using an array of thresholds. threshold should be the same length as the
Series.
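For the DataFrame example below, a frame consistent with the output could be constructed as:
>>> df = pd.DataFrame({'A': [1, 2, 5], 'B': [3, 4, 6]})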
>>> df.clip_lower(3)
A B
0 3 3
1 3 4
2 5 6
Or to an array of values. By default, threshold should be the same shape as the DataFrame.
Control how threshold is broadcast with axis. In this case threshold should be the same length as the axis
specified by axis.
pandas.DataFrame.clip_upper
pandas.DataFrame.combine
Function that takes two series as inputs and returns a Series or a scalar
Examples
pandas.DataFrame.combine_first
DataFrame.combine_first(other)
Combine two DataFrame objects and default to non-null values in frame calling the method. Result index
columns will be the union of the respective indexes and columns
Parameters
other [DataFrame]
Returns
combined [DataFrame]
See also:
Examples
pandas.DataFrame.compound
pandas.DataFrame.consolidate
DataFrame.consolidate(inplace=False)
Compute NDFrame with “consolidated” internals (data of each dtype grouped together in a single ndarray).
Deprecated since version 0.20.0: Consolidate will be an internal implementation only.
pandas.DataFrame.convert_objects
Returns
converted [same as input object]
See also:
pandas.DataFrame.copy
DataFrame.copy(deep=True)
Make a copy of this object’s indices and data.
When deep=True (default), a new object will be created with a copy of the calling object’s data and
indices. Modifications to the data or indices of the copy will not be reflected in the original object (see
notes below).
When deep=False, a new object will be created without copying the calling object’s data or index
(only references to the data and index are copied). Any changes to the data of the original will be reflected
in the shallow copy (and vice versa).
Parameters deep : bool, default True
Make a deep copy, including a copy of the data and the indices. With
deep=False neither the indices nor the data are copied.
Returns copy : Series, DataFrame or Panel
Object type matches caller.
Notes
When deep=True, data is copied but actual Python objects will not be copied recursively, only the
reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively
copies object data (see examples below).
While Index objects are copied when deep=True, the underlying numpy array is not copied for per-
formance reasons. Since Index is immutable, the underlying data can be safely shared and a copy is not
needed.
Examples
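Objects consistent with the outputs below could be constructed as:
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> deep = s.copy()
>>> shallow = s.copy(deep=False)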
>>> s is shallow
False
>>> s.values is shallow.values and s.index is shallow.index
True
>>> s is deep
False
>>> s.values is deep.values or s.index is deep.index
False
Updates to the data shared by shallow copy and original is reflected in both; deep copy remains unchanged.
>>> s[0] = 3
>>> shallow[1] = 4
>>> s
a 3
b 4
dtype: int64
>>> shallow
a 3
b 4
dtype: int64
>>> deep
a 1
b 2
dtype: int64
Note that when copying an object containing Python objects, a deep copy will copy the data, but will not
do so recursively. Updating a nested data object will be reflected in the deep copy.
pandas.DataFrame.corr
DataFrame.corr(method='pearson', min_periods=1)
Compute pairwise correlation of columns, excluding NA/null values
Parameters method : {‘pearson’, ‘kendall’, ‘spearman’}
• pearson : standard correlation coefficient
• kendall : Kendall Tau correlation coefficient
• spearman : Spearman rank correlation
min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid
result. Currently only available for pearson and spearman correlation
Returns
y [DataFrame]
pandas.DataFrame.corrwith
pandas.DataFrame.count
Examples
>>> df = pd.DataFrame({"Person":
... ["John", "Myla", None, "John", "Myla"],
... "Age": [24., np.nan, 21., 33, 26],
... "Single": [False, True, True, True, False]})
>>> df
Person Age Single
0 John 24.0 False
1 Myla NaN True
2 None 21.0 True
3 John 33.0 True
4 Myla 26.0 False
>>> df.count()
Person 4
Age 4
Single 5
dtype: int64
>>> df.count(axis='columns')
0 3
1 2
2 2
3 3
4 3
dtype: int64
pandas.DataFrame.cov
DataFrame.cov(min_periods=None)
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance
matrix of the columns of the DataFrame.
Both NA and null values are automatically excluded from the calculation. (See the note below about bias
from missing values.) A threshold can be set for the minimum number of observations for each value
created. Comparisons with observations below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to understand the relationship between
different measures across time.
Parameters min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid
result.
Returns DataFrame
The covariance matrix of the series of the DataFrame.
See also:
Notes
Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-1.
For DataFrames that have Series that are missing data (assuming that data is missing at random) the re-
turned covariance matrix will be an unbiased estimate of the variance and covariance between the member
Series.
However, for many applications this estimate may not be acceptable because the estimated covariance
matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having
absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of
covariance matrices for more details.
Examples
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
... columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
a b c d e
a 0.998438 -0.020161 0.059277 -0.008943 0.014144
b -0.020161 1.059352 -0.008543 -0.024738 0.009826
c 0.059277 -0.008543 1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486 0.921297 -0.013692
e 0.014144 0.009826 -0.000271 -0.013692 0.977795
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
... columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
a b c
a 0.316741 NaN -0.150812
b NaN 1.248003 0.191417
c -0.150812 0.191417 0.895202
pandas.DataFrame.cummax
Examples
Series
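A Series consistent with the outputs below could be constructed as:
>>> s = pd.Series([2, np.nan, 5, -1, 0])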
>>> s.cummax()
0 2.0
1 NaN
2 5.0
3 5.0
4 5.0
dtype: float64
>>> s.cummax(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None
or axis='index'.
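A frame consistent with the outputs below could be constructed as:
>>> df = pd.DataFrame([[2.0, 1.0],
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                   columns=list('AB'))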
>>> df.cummax()
A B
0 2.0 1.0
1 3.0 NaN
2 3.0 1.0
To iterate over columns and find the maximum in each row, use axis=1
>>> df.cummax(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 1.0
pandas.DataFrame.cummin
Examples
Series
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None
or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row, use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
pandas.DataFrame.cumprod
Examples
Series
>>> s.cumprod()
0 2.0
1 NaN
2 10.0
3 -10.0
4 -0.0
dtype: float64
>>> s.cumprod(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or
axis='index'.
>>> df.cumprod()
A B
0 2.0 1.0
1 6.0 NaN
2 6.0 0.0
To iterate over columns and find the product in each row, use axis=1
>>> df.cumprod(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 0.0
pandas.DataFrame.cumsum
Examples
Series
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or
axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row, use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
pandas.DataFrame.describe
Notes
For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and
upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile
is the same as the median.
For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and
freq. The top is the most common value. The freq is the most common value’s frequency. Times-
tamps also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen
from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric
columns. If the dataframe consists only of object and categorical data without any numeric columns,
the default is to return an analysis of both the object and categorical columns. If include='all' is
provided as an option, the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed
for the output. The parameters are ignored when analyzing a Series.
Examples
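For illustration, a frame consistent with the outputs below could be constructed as shown here (column order may vary with the pandas/Python version):
>>> df = pd.DataFrame({'object': ['a', 'b', 'c'],
...                    'numeric': [1, 2, 3],
...                    'categorical': pd.Categorical(['d', 'e', 'f'])})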
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 NaN 3
top f NaN c
freq 1 NaN 1
mean NaN 2.0 NaN
std NaN 1.0 NaN
min NaN 1.0 NaN
25% NaN 1.5 NaN
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top c
freq 1
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top f
freq 1
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top f c
freq 1 1
>>> df.describe(exclude=[np.object])
categorical numeric
count 3 3.0
unique 3 NaN
top f NaN
freq 1 NaN
mean NaN 2.0
std NaN 1.0
min NaN 1.0
25% NaN 1.5
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
pandas.DataFrame.diff
DataFrame.diff(periods=1, axis=0)
First discrete difference of element.
Calculates the difference of a DataFrame element compared with another element in the DataFrame (default
is the element in the same column of the previous row).
Parameters periods : int, default 1
Periods to shift for calculating difference, accepts negative values.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Take difference over rows (0) or columns (1).
New in version 0.16.1.
Returns
diffed [DataFrame]
See also:
Examples
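A frame consistent with the outputs below could be constructed as:
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
...                    'b': [1, 1, 2, 3, 5, 8],
...                    'c': [1, 4, 9, 16, 25, 36]})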
>>> df.diff()
a b c
0 NaN NaN NaN
1 1.0 0.0 3.0
2 1.0 1.0 5.0
3 1.0 1.0 7.0
4 1.0 2.0 9.0
5 1.0 3.0 11.0
>>> df.diff(axis=1)
a b c
0 NaN 0.0 0.0
1 NaN -1.0 3.0
2 NaN -1.0 7.0
3 NaN -1.0 13.0
4 NaN 0.0 20.0
5 NaN 2.0 28.0
>>> df.diff(periods=3)
a b c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 3.0 2.0 15.0
4 3.0 4.0 21.0
5 3.0 6.0 27.0
>>> df.diff(periods=-1)
a b c
0 -1.0 0.0 -3.0
1 -1.0 -1.0 -5.0
2 -1.0 -1.0 -7.0
3 -1.0 -2.0 -9.0
4 -1.0 -3.0 -11.0
5 NaN NaN NaN
pandas.DataFrame.div
Notes
Examples
None
pandas.DataFrame.divide
Notes
Examples
None
pandas.DataFrame.dot
DataFrame.dot(other)
Matrix multiplication with DataFrame or Series objects. Can also be called using self @ other in Python
>= 3.5.
Parameters
other [DataFrame or Series]
Returns
pandas.DataFrame.drop
Examples
>>> df = pd.DataFrame(np.arange(12).reshape(3,4),
... columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
Drop columns
pandas.DataFrame.drop_duplicates
pandas.DataFrame.dropna
Examples
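A frame consistent with the outputs below could be constructed as:
>>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
...                    "toy": [np.nan, 'Batmobile', 'Bullwhip'],
...                    "born": [pd.NaT, pd.Timestamp("1940-04-25"), pd.NaT]})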
>>> df.dropna()
name toy born
1 Batman Batmobile 1940-04-25
>>> df.dropna(axis='columns')
name
0 Alfred
1 Batman
2 Catwoman
>>> df.dropna(how='all')
name toy born
0 Alfred NaN NaT
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
>>> df.dropna(thresh=2)
name toy born
1 Batman Batmobile 1940-04-25
2 Catwoman Bullwhip NaT
>>> df.dropna(inplace=True)
>>> df
name toy born
1 Batman Batmobile 1940-04-25
pandas.DataFrame.duplicated
DataFrame.duplicated(subset=None, keep='first')
Return boolean Series denoting duplicate rows, optionally only considering certain columns
Parameters subset : column label or sequence of labels, optional
Only consider certain columns for identifying duplicates, by default use all of the
columns
keep : {‘first’, ‘last’, False}, default ‘first’
• first : Mark duplicates as True except for the first occurrence.
• last : Mark duplicates as True except for the last occurrence.
• False : Mark all duplicates as True.
Returns
duplicated [Series]
pandas.DataFrame.eq
pandas.DataFrame.equals
DataFrame.equals(other)
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered
equal.
pandas.DataFrame.eval
Notes
For more details see the API documentation for eval(). For detailed examples see enhancing perfor-
mance with eval.
Examples
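A minimal sketch with assumed sample data:
>>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
>>> df.eval('A + B')
0    11
1    10
2     9
3     8
4     7
dtype: int64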
pandas.DataFrame.ewm
Notes
Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and relationship
between the parameters are specified in the parameter descriptions above; see the link at the end of this
section for a detailed explanation.
When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1),
(1-alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as: weighted_average[0] =
arg[0]; weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions. For example, the weights of
x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is
True), and (1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For
example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha
and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).
More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
Examples
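A frame consistent with the output below could be constructed as:
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})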
>>> df.ewm(com=0.5).mean()
B
0 0.000000
1 0.750000
2 1.615385
3 1.615385
4 3.670213
pandas.DataFrame.expanding
Returns
a Window sub-classed for the particular operation
See also:
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
Examples
>>> df.expanding(2).sum()
B
0 NaN
1 1.0
2 3.0
3 3.0
4 7.0
pandas.DataFrame.ffill
pandas.DataFrame.fillna
reindex, asfreq
Examples
>>> df.fillna(0)
A B C D
0 0.0 2.0 0.0 0
1 3.0 4.0 0.0 1
2 0.0 0.0 0.0 5
3 0.0 3.0 0.0 4
>>> df.fillna(method='ffill')
A B C D
0 NaN 2.0 NaN 0
1 3.0 4.0 NaN 1
2 3.0 4.0 NaN 5
3 3.0 3.0 NaN 4
Replace all NaN elements in columns ‘A’, ‘B’, ‘C’, and ‘D’ with 0, 1, 2, and 3 respectively.
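One way to do this is to pass a dict of column-to-value mappings (the values below follow from the frame used in the examples above):
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
>>> df.fillna(value=values)
     A    B    C  D
0  0.0  2.0  2.0  0
1  3.0  4.0  2.0  1
2  0.0  1.0  2.0  5
3  0.0  3.0  2.0  4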
pandas.DataFrame.filter
Notes
The items, like, and regex parameters are enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing with [].
Examples
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
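Illustrative calls on the frame above, selecting by item list, by regular expression, and by substring on the index:
>>> df.filter(items=['one', 'three'])
        one  three
mouse     1      3
rabbit    4      6
>>> df.filter(regex='e$', axis=1)
        one  three
mouse     1      3
rabbit    4      6
>>> df.filter(like='bbi', axis=0)
        one  two  three
rabbit    4    5      6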
pandas.DataFrame.first
DataFrame.first(offset)
Convenience method for subsetting initial periods of time series data based on a date offset.
Parameters
offset [string, DateOffset, dateutil.relativedelta]
Returns
subset [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
>>> ts.first('3D')
A
2018-04-09 1
2018-04-11 2
Notice that the data for the first 3 calendar days was returned, not the first 3 days observed in the dataset, and therefore data for 2018-04-13 was not returned.
pandas.DataFrame.first_valid_index
DataFrame.first_valid_index()
Return index for first non-NA/null value.
Returns
scalar [type of index]
Notes
If all elements are non-NA/null, returns None. Also returns None for empty NDFrame.
pandas.DataFrame.floordiv
Notes
Examples
pandas.DataFrame.from_csv
pandas.DataFrame.from_dict
Examples
>>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data)
col_1 col_2
0 3 a
1 2 b
2 1 c
3 0 d
>>> data = {'row_1': [3, 2, 1, 0], 'row_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data, orient='index')
0 1 2 3
row_1 3 2 1 0
row_2 a b c d
When using the ‘index’ orientation, the column names can be specified manually:
pandas.DataFrame.from_items
pandas.DataFrame.from_records
pandas.DataFrame.ge
pandas.DataFrame.get
DataFrame.get(key, default=None)
Get item from object for given key (DataFrame column, Panel slice, etc.). Returns default value if not
found.
Parameters
key [object]
Returns
value [type of items contained in object]
pandas.DataFrame.get_dtype_counts
DataFrame.get_dtype_counts()
Return counts of unique dtypes in this object.
Returns dtype : Series
Series with the count of columns with each dtype.
See also:
Examples
>>> df.get_dtype_counts()
float64 1
int64 1
object 1
dtype: int64
pandas.DataFrame.get_ftype_counts
DataFrame.get_ftype_counts()
Return counts of unique ftypes in this object.
Deprecated since version 0.23.0.
Examples
>>> df.get_ftype_counts()
float64:dense 1
int64:dense 1
object:dense 1
dtype: int64
pandas.DataFrame.get_value
pandas.DataFrame.get_values
DataFrame.get_values()
Return an ndarray after converting sparse values to dense.
This is the same as .values for non-sparse data. For sparse data contained in a pandas.SparseArray,
the data are first converted to a dense representation.
Returns numpy.ndarray
Numpy representation of DataFrame
See also:
Examples
>>> df.get_values()
array([[1, True, 1.0], [2, False, 2.0]], dtype=object)
>>> df.get_values()
array([[ 1., 1.],
[nan, 2.],
[nan, 3.]])
pandas.DataFrame.groupby
resample Convenience method for frequency conversion and resampling of time series.
Notes
Examples
DataFrame results
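A minimal illustrative sketch (hypothetical frame), aggregating column ‘B’ by the groups in column ‘A’:
>>> df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar'],
...                    'B': [1, 2, 3, 4]})
>>> df.groupby('A').sum()
     B
A
bar  6
foo  4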
pandas.DataFrame.gt
pandas.DataFrame.head
DataFrame.head(n=5)
Return the first n rows.
This function returns the first n rows for the object based on position. It is useful for quickly testing if
your object has the right type of data in it.
Parameters n : int, default 5
Number of rows to select.
Returns obj_head : type of caller
The first n rows of the caller object.
See also:
Examples
>>> df.head()
animal
0 alligator
1 bee
2 falcon
3 lion
4 monkey
>>> df.head(3)
animal
0 alligator
1 bee
2 falcon
pandas.DataFrame.hist
Examples
This example draws a histogram based on the length and width of some animals, displayed in three bins
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
... }, index= ['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)
pandas.DataFrame.idxmax
DataFrame.idxmax(axis=0, skipna=True)
Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
Parameters axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns
idxmax [Series]
Raises ValueError
• If the row/column is empty
See also:
Series.idxmax
Notes
pandas.DataFrame.idxmin
DataFrame.idxmin(axis=0, skipna=True)
Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
Parameters axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
Notes
pandas.DataFrame.infer_objects
DataFrame.infer_objects()
Attempt to infer better dtypes for object columns.
Attempts soft conversion of object-dtyped columns, leaving non-object and unconvertible columns un-
changed. The inference rules are the same as during normal Series/DataFrame construction.
New in version 0.21.0.
Returns
converted [same type as input object]
See also:
Examples
>>> df.dtypes
A object
dtype: object
>>> df.infer_objects().dtypes
A int64
dtype: object
pandas.DataFrame.info
Examples
Prints a summary of the column count and dtypes, but not per-column information:
>>> df.info(verbose=False)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Columns: 3 entries, int_col to float_col
dtypes: float64(1), int64(1), object(1)
memory usage: 200.0+ bytes
Pipe the output of DataFrame.info to a buffer instead of sys.stdout, get the buffer content, and write it to a text file:
>>> import io
>>> buffer = io.StringIO()
>>> df.info(buf=buffer)
>>> s = buffer.getvalue()
>>> with open("df_info.txt", "w", encoding="utf-8") as f:
... f.write(s)
The memory_usage parameter allows deep introspection mode, especially useful for big DataFrames and for fine-tuning memory optimization:
>>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
>>> df = pd.DataFrame({
... 'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6),
... 'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6),
... 'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6)
... })
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
>>> df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
column_1 1000000 non-null object
column_2 1000000 non-null object
column_3 1000000 non-null object
dtypes: object(3)
memory usage: 188.8 MB
pandas.DataFrame.insert
pandas.DataFrame.interpolate
• ‘linear’: ignore the index and treat the values as equally spaced. This is the
only method supported on MultiIndexes. default
New in version 0.18.1: Added support for the ‘akima’ method Added interpolate
method ‘from_derivatives’ which replaces ‘piecewise_polynomial’ in scipy 0.18;
backwards-compatible with scipy < 0.18
axis : {0, 1}, default 0
• 0: fill column-by-column
• 1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill. Must be greater than 0.
Returns
Series or DataFrame of same shape interpolated at the NaNs
See also:
reindex, replace, fillna
Examples
Filling in NaNs
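For example, a minimal sketch of the default linear interpolation on a Series (hypothetical data; assumes NumPy as np):
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s.interpolate()
0    0.0
1    1.0
2    2.0
3    3.0
dtype: float64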
pandas.DataFrame.isin
DataFrame.isin(values)
Return boolean DataFrame showing whether each element in the DataFrame is contained in values.
Parameters values : iterable, Series, DataFrame or dictionary
The result will only be true at a location if all the labels match. If values is a
Series, that’s the index. If values is a dictionary, the keys must be the column
names, which must match. If values is a DataFrame, then both the index and
column labels must match.
Returns
DataFrame of booleans
Examples
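An illustrative sketch (hypothetical frame), checking membership in a flat list of values:
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': ['a', 'b', 'f']})
>>> df.isin([1, 3, 12, 'a'])
       A      B
0   True   True
1  False  False
2   True  False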
pandas.DataFrame.isna
DataFrame.isna()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.
NaN, gets mapped to True values. Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.
use_inf_as_na = True).
Returns DataFrame
Mask of bool values for each element in DataFrame that indicates whether an
element is not an NA value.
See also:
Examples
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
pandas.DataFrame.isnull
DataFrame.isnull()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.
NaN, gets mapped to True values. Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.
use_inf_as_na = True).
Returns DataFrame
Mask of bool values for each element in DataFrame that indicates whether an
element is not an NA value.
See also:
Examples
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
pandas.DataFrame.items
DataFrame.items()
Iterator over (column name, Series) pairs.
See also:
pandas.DataFrame.iteritems
DataFrame.iteritems()
Iterator over (column name, Series) pairs.
See also:
pandas.DataFrame.iterrows
DataFrame.iterrows()
Iterate over DataFrame rows as (index, Series) pairs.
Returns it : generator
A generator that iterates over the rows of the frame.
See also:
Notes
1. Because iterrows returns a Series for each row, it does not preserve dtypes across the rows
(dtypes are preserved across columns for DataFrames). For example,
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns
namedtuples of the values and which is generally faster than iterrows.
2. You should never modify something you are iterating over. This is not guaranteed to work in all
cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will
have no effect.
pandas.DataFrame.itertuples
DataFrame.itertuples(index=True, name=’Pandas’)
Iterate over DataFrame rows as namedtuples, with index value as first element of the tuple.
Parameters index : boolean, default True
If True, return the index as the first element of the tuple.
name : string, default “Pandas”
The name of the returned namedtuples or None to return regular tuples.
See also:
Notes
The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or
start with an underscore. With a large number of columns (>255), regular tuples are returned.
Examples
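An illustrative sketch (hypothetical frame), iterating over the rows as namedtuples:
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]},
...                   index=['a', 'b'])
>>> for row in df.itertuples():
...     print(row)
...
Pandas(Index='a', col1=1, col2=0.1)
Pandas(Index='b', col1=2, col2=0.2)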
pandas.DataFrame.join
Notes
on, lsuffix, and rsuffix options are not supported when passing a list of DataFrame objects
Support for specifying index levels as the on parameter was added in version 0.23.0
Examples
>>> caller
A key
0 A0 K0
1 A1 K1
2 A2 K2
3 A3 K3
4 A4 K4
5 A5 K5
>>> other
B key
0 B0 K0
1 B1 K1
2 B2 K2
If we want to join using the key columns, we need to set key to be the index in both caller and other. The
joined DataFrame will have key as its index.
>>> caller.set_index('key').join(other.set_index('key'))
A B
key
K0 A0 B0
K1 A1 B1
K2 A2 B2
K3 A3 NaN
K4 A4 NaN
K5 A5 NaN
Another option to join using the key columns is to use the on parameter. DataFrame.join always uses
other’s index but we can use any column in the caller. This method preserves the original caller’s index in
the result.
>>> caller.join(other.set_index('key'), on='key')
A key B
0 A0 K0 B0
1 A1 K1 B1
2 A2 K2 B2
3 A3 K3 NaN
4 A4 K4 NaN
5 A5 K5 NaN
pandas.DataFrame.keys
DataFrame.keys()
Get the ‘info axis’ (see Indexing for more)
This is index for Series, columns for DataFrame and major_axis for Panel.
pandas.DataFrame.kurt
pandas.DataFrame.kurtosis
pandas.DataFrame.last
DataFrame.last(offset)
Convenience method for subsetting final periods of time series data based on a date offset.
Parameters
offset [string, DateOffset, dateutil.relativedelta]
Returns
subset [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
>>> ts.last('3D')
A
2018-04-13 3
2018-04-15 4
Notice that the data for the last 3 calendar days was returned, not the last 3 observed days in the dataset, and therefore data for 2018-04-11 was not returned.
pandas.DataFrame.last_valid_index
DataFrame.last_valid_index()
Return index for last non-NA/null value.
Returns
scalar [type of index]
Notes
If all elements are non-NA/null, returns None. Also returns None for empty NDFrame.
pandas.DataFrame.le
pandas.DataFrame.lookup
DataFrame.lookup(row_labels, col_labels)
Label-based “fancy indexing” function for DataFrame. Given equal-length arrays of row and column
labels, return an array of the values corresponding to each (row, col) pair.
Parameters row_labels : sequence
The row labels to use for lookup
col_labels : sequence
The column labels to use for lookup
Notes
Akin to:
result = []
for row, col in zip(row_labels, col_labels):
    result.append(df.get_value(row, col))
Examples
pandas.DataFrame.lt
pandas.DataFrame.mad
pandas.DataFrame.mask
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is False the element is used; otherwise the corresponding element from the DataFrame other
is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
pandas.DataFrame.max
Parameters
axis [{index (0), columns (1)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns
max [Series or DataFrame (if level specified)]
pandas.DataFrame.mean
pandas.DataFrame.median
pandas.DataFrame.melt
This function is useful to massage a DataFrame into a format where one or more columns are identifier
variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted”
to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’.
New in version 0.20.0.
Parameters
frame [DataFrame]
id_vars : tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name : scalar
Name to use for the ‘variable’ column. If None it uses frame.columns.name
or ‘variable’.
value_name : scalar, default ‘value’
Name to use for the ‘value’ column.
col_level : int or string, optional
If columns are a MultiIndex then use this level to melt.
See also:
melt, pivot_table, DataFrame.pivot
Examples
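An illustrative sketch (hypothetical frame), unpivoting columns ‘B’ and ‘C’ while keeping ‘A’ as an identifier:
>>> df = pd.DataFrame({'A': ['a', 'b'], 'B': [1, 2], 'C': [3, 4]})
>>> df.melt(id_vars=['A'], value_vars=['B', 'C'])
   A variable  value
0  a        B      1
1  b        B      2
2  a        C      3
3  b        C      4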
pandas.DataFrame.memory_usage
DataFrame.memory_usage(index=True, deep=False)
Return the memory usage of each column in bytes.
The memory usage can optionally include the contribution of the index and elements of object dtype.
This value is displayed in DataFrame.info by default. This can be suppressed by setting pandas.
options.display.memory_usage to False.
Parameters index : bool, default True
Specifies whether to include the memory usage of the DataFrame’s index in the returned Series. If index=True, the memory usage of the index is the first item in the output.
deep : bool, default False
If True, introspect the data deeply by interrogating object dtypes for system-level
memory consumption, and include it in the returned values.
Returns sizes : Series
A Series whose index is the original column names and whose values are the memory usage of each column in bytes.
See also:
Examples
>>> df.memory_usage()
Index 80
int64 40000
float64 40000
complex128 80000
object 40000
bool 5000
dtype: int64
>>> df.memory_usage(index=False)
int64 40000
float64 40000
complex128 80000
object 40000
bool 5000
dtype: int64
>>> df.memory_usage(deep=True)
Index 80
int64 40000
float64 40000
complex128 80000
object 160000
bool 5000
dtype: int64
Use a Categorical for efficient storage of an object-dtype column with many repeated values.
>>> df['object'].astype('category').memory_usage(deep=True)
5168
pandas.DataFrame.merge
Notes
Support for specifying index levels as the on, left_on, and right_on parameters was added in version 0.23.0
Examples
>>> A
  lkey  value
0  foo      1
1  bar      2
2  baz      3
3  foo      4
>>> B
  rkey  value
0  foo      5
1  bar      6
2  qux      7
3  bar      8
pandas.DataFrame.min
Parameters
axis [{index (0), columns (1)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns
min [Series or DataFrame (if level specified)]
pandas.DataFrame.mod
Notes
Examples
pandas.DataFrame.mode
DataFrame.mode(axis=0, numeric_only=False)
Gets the mode(s) of each element along the axis selected. Adds a row for each mode per label, fills in
gaps with nan.
Note that there could be multiple values returned for the selected axis (when more than one item share
the maximum frequency), which is the reason why a dataframe is returned. If you want to impute missing
values with the mode in a dataframe df, you can just do this: df.fillna(df.mode().iloc[0])
Parameters axis : {0 or ‘index’, 1 or ‘columns’}, default 0
• 0 or ‘index’ : get mode of each column
• 1 or ‘columns’ : get mode of each row
numeric_only : boolean, default False
if True, only apply to numeric columns
Returns
modes [DataFrame (sorted)]
Examples
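An illustrative sketch (hypothetical frame), taking the mode of each column:
>>> df = pd.DataFrame({'A': [1, 2, 2], 'B': [3, 3, 4]})
>>> df.mode()
   A  B
0  2  3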
pandas.DataFrame.mul
Notes
Examples
pandas.DataFrame.multiply
Notes
Examples
pandas.DataFrame.ne
pandas.DataFrame.nlargest
Notes
This function cannot be used with all column types. For example, when specifying columns with object
or category dtypes, TypeError is raised.
Examples
In the following example, we will use nlargest to select the three rows having the largest values in
column “a”.
To order by the largest values in column “a” and then “c”, we can specify multiple columns like in the
next example.
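Illustrative versions of those two calls (the frame below is hypothetical, not the one from the original page):
>>> df = pd.DataFrame({'a': [1, 10, 10, 11], 'c': [3.0, 2.0, 9.0, 1.0]})
>>> df.nlargest(3, 'a')
    a    c
3  11  1.0
1  10  2.0
2  10  9.0
>>> df.nlargest(3, ['a', 'c'])
    a    c
3  11  1.0
2  10  9.0
1  10  2.0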
pandas.DataFrame.notna
DataFrame.notna()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
Returns DataFrame
Mask of bool values for each element in DataFrame that indicates whether an
element is not an NA value.
See also:
Examples
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
pandas.DataFrame.notnull
DataFrame.notnull()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values.
Examples
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
pandas.DataFrame.nsmallest
Examples
pandas.DataFrame.nunique
DataFrame.nunique(axis=0, dropna=True)
Return Series with number of distinct observations over requested axis.
New in version 0.20.0.
Parameters
axis [{0 or ‘index’, 1 or ‘columns’}, default 0]
dropna : boolean, default True
Don’t include NaN in the counts.
Returns
nunique [Series]
Examples
>>> df.nunique(axis=1)
0 1
1 2
2 2
pandas.DataFrame.pct_change
Examples
Series
>>> s.pct_change()
0 NaN
1 0.011111
2 -0.065934
dtype: float64
>>> s.pct_change(periods=2)
0 NaN
1 NaN
2 -0.055556
dtype: float64
See the percentage change in a Series where filling NAs with last valid observation forward to next valid.
>>> s.pct_change(fill_method='ffill')
0 NaN
1 0.011111
2 0.000000
3 -0.065934
dtype: float64
DataFrame
Percentage change in French franc, Deutsche Mark, and Italian lira from 1980-01-01 to 1980-03-01.
>>> df = pd.DataFrame({
... 'FR': [4.0405, 4.0963, 4.3149],
... 'GR': [1.7246, 1.7482, 1.8519],
... 'IT': [804.74, 810.01, 860.13]},
... index=['1980-01-01', '1980-02-01', '1980-03-01'])
>>> df
FR GR IT
1980-01-01 4.0405 1.7246 804.74
1980-02-01 4.0963 1.7482 810.01
1980-03-01 4.3149 1.8519 860.13
>>> df.pct_change()
FR GR IT
1980-01-01 NaN NaN NaN
1980-02-01 0.013810 0.013684 0.006549
1980-03-01 0.053365 0.059318 0.061876
Percentage change in GOOG and APPL stock volume. This shows computing the percentage change between columns.
>>> df = pd.DataFrame({
...     '2016': [1769950, 30586265],
...     '2015': [1500923, 40912316],
...     '2014': [1371819, 41403351]},
...     index=['GOOG', 'APPL'])
>>> df.pct_change(axis='columns')
2016 2015 2014
GOOG NaN -0.151997 -0.086016
APPL NaN 0.337604 0.012002
pandas.DataFrame.pipe
Notes
Use .pipe when chaining together functions that expect Series, DataFrames or GroupBy objects. Instead
of writing
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe(f, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which
keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
pandas.DataFrame.pivot
DataFrame.pivot_table generalization of pivot that can handle duplicate values for one in-
dex/column pair.
DataFrame.unstack pivot based on the index values instead of a column.
Notes
For finer-tuned control, see hierarchical indexing documentation along with the related stack/unstack
methods.
Examples
Notice that the first two rows are the same for our index and columns arguments.
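A minimal sketch (hypothetical frames): a basic pivot, followed by a frame whose first two rows repeat the same index/columns pair and therefore raise an error:
>>> df = pd.DataFrame({'foo': ['one', 'one', 'two', 'two'],
...                    'bar': ['A', 'B', 'A', 'B'],
...                    'baz': [1, 2, 3, 4]})
>>> df.pivot(index='foo', columns='bar', values='baz')
bar  A  B
foo
one  1  2
two  3  4
>>> df2 = pd.DataFrame({'foo': ['one', 'one', 'two'],
...                     'bar': ['A', 'A', 'B'],
...                     'baz': [1, 2, 3]})
>>> df2.pivot(index='foo', columns='bar', values='baz')
Traceback (most recent call last):
  ...
ValueError: Index contains duplicate entries, cannot reshape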
pandas.DataFrame.pivot_table
Parameters
values [column to aggregate, optional]
index : column, Grouper, array, or list of the previous
If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). Keys to group by on the pivot table index. If an array is passed, it is used in the same manner as column values.
columns : column, Grouper, array, or list of the previous
If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). Keys to group by on the pivot table column. If an array is passed, it is used in the same manner as column values.
aggfunc : function, list of functions, dict, default numpy.mean
If a list of functions is passed, the resulting pivot table will have hierarchical columns whose top level are the function names (inferred from the function objects themselves). If a dict is passed, the key is the column to aggregate and the value is the function or list of functions.
fill_value : scalar, default None
Value to replace missing values with
margins : boolean, default False
Add margins for all rows / columns (e.g. subtotals / grand totals)
dropna : boolean, default True
Do not include columns whose entries are all NaN
margins_name : string, default ‘All’
Name of the row / column that will contain the totals when margins is True.
Returns
table [DataFrame]
See also:
Examples
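An illustrative sketch (hypothetical frame), aggregating column ‘C’ by ‘A’ and ‘B’:
>>> df = pd.DataFrame({'A': ['foo', 'foo', 'bar', 'bar'],
...                    'B': ['one', 'two', 'one', 'two'],
...                    'C': [1, 2, 3, 4]})
>>> df.pivot_table(values='C', index='A', columns='B', aggfunc='sum')
B    one  two
A
bar    3    4
foo    1    2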
pandas.DataFrame.plot
xlim [2-tuple/list]
ylim [2-tuple/list]
Notes
pandas.DataFrame.pop
DataFrame.pop(item)
Return item and drop from frame. Raise KeyError if not found.
Parameters item : str
Column label to be popped
Returns
popped [Series]
Examples
>>> df.pop('class')
0 bird
1 bird
2 mammal
3 mammal
Name: class, dtype: object
>>> df
name max_speed
0 falcon 389.0
1 parrot 24.0
2 lion 80.5
3 monkey NaN
pandas.DataFrame.pow
Notes
Examples
pandas.DataFrame.prod
Examples
>>> pd.Series([]).prod()
1.0
>>> pd.Series([]).prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
pandas.DataFrame.product
Examples
>>> pd.Series([]).prod()
1.0
>>> pd.Series([]).prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
pandas.DataFrame.quantile
Examples
Specifying numeric_only=False will also compute the quantile of datetime and timedelta data.
pandas.DataFrame.query
Notes
The result of the evaluation of this expression is first passed to DataFrame.loc and if that fails be-
cause of a multidimensional key (e.g., a DataFrame) then the result will be passed to DataFrame.
__getitem__().
This method uses the top-level pandas.eval() function to evaluate the passed query.
The query() method uses a slightly modified Python syntax by default. For example, the & and |
(bitwise) operators have the precedence of their boolean cousins, and and or. This is syntactically valid
Python, however the semantics are different.
You can change the semantics of the expression by passing the keyword argument parser='python'.
This enforces the same semantics as evaluation in Python space. Likewise, you can pass
engine='python' to evaluate an expression using Python itself as a backend. This is not recom-
mended as it is inefficient compared to using numexpr as the engine.
The DataFrame.index and DataFrame.columns attributes of the DataFrame instance are
placed in the query namespace by default, which allows you to treat both the index and columns of
the frame as a column in the frame. The identifier index is used for the frame index; you can also use
the name of the index to identify it in a query. Please note that Python keywords may not be used as
identifiers.
For further details and examples see the query documentation in indexing.
Examples
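An illustrative sketch (hypothetical frame), with the equivalent boolean-indexing expression for comparison:
>>> df = pd.DataFrame({'a': range(5), 'b': range(5, 0, -1)})
>>> df.query('a > b')
   a  b
3  3  2
4  4  1
>>> df[df.a > df.b]
   a  b
3  3  2
4  4  1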
pandas.DataFrame.radd
Notes
Examples
pandas.DataFrame.rank
pandas.DataFrame.rdiv
Notes
Examples
pandas.DataFrame.reindex
Examples
Create a new index and reindex the dataframe. By default values in the new index that do not have
corresponding records in the dataframe are assigned NaN.
>>> new_index= ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10',
... 'Chrome']
>>> df.reindex(new_index)
http_status response_time
Safari 404.0 0.07
Iceweasel NaN NaN
Comodo Dragon NaN NaN
IE10 404.0 0.08
Chrome 200.0 0.02
We can fill in the missing values by passing a value to the keyword fill_value. Because the index is
not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the
NaN values.
>>> df.reindex(new_index, fill_value=0)
http_status response_time
Safari 404 0.07
Iceweasel 0 0.00
Comodo Dragon 0 0.00
IE10 404 0.08
Chrome 200 0.02
To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically
increasing index (for example, a sequence of dates).
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
... index=date_index)
>>> df2
prices
2010-01-01 100
2010-01-02 101
2010-01-03 NaN
2010-01-04 100
2010-01-05 89
2010-01-06 88
The index entries that did not have a value in the original data frame (for example, ‘2009-12-29’) are by
default filled with NaN. If desired, we can fill in the missing values using one of several options.
For example, to backpropagate the last valid value to fill the NaN values, pass bfill as an argument to
the method keyword.
>>> df2.reindex(date_index2, method='bfill')
prices
2009-12-29 100
2009-12-30 100
2009-12-31 100
2010-01-01 100
Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be
filled by any of the value propagation schemes. This is because filling while reindexing does not look at
dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN
values present in the original dataframe, use the fillna() method.
See the user guide for more.
pandas.DataFrame.reindex_axis
tuple, array, Series, and must be the same size as the index and its dtype must
exactly match the index’s type.
New in version 0.21.0: (list-like tolerance)
Returns
reindexed [DataFrame]
See also:
reindex, reindex_like
Examples
pandas.DataFrame.reindex_like
Notes
pandas.DataFrame.rename
Examples
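An illustrative sketch (hypothetical frame), renaming columns with a mapping and index labels likewise:
>>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
>>> df.rename(columns={'A': 'a', 'B': 'b'})
   a  b
0  1  3
1  2  4
>>> df.rename(index={0: 'x', 1: 'y'})
   A  B
x  1  3
y  2  4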
pandas.DataFrame.rename_axis
Notes
Prior to version 0.21.0, rename_axis could also be used to change the axis labels by passing a mapping
or scalar. This behavior is deprecated and will be removed in a future version. Use rename instead.
Examples
Series
DataFrame
pandas.DataFrame.reorder_levels
DataFrame.reorder_levels(order, axis=0)
Rearrange index levels using input order. May not drop or duplicate levels
Parameters order : list of int or list of str
List representing new level order. Reference level by number (position) or by key
(label).
axis : int
Where to reorder levels.
Returns
type of caller (new object)
pandas.DataFrame.replace
much for value since there are only a few possible substitution regexes you
can use.
– str, regex and numeric rules apply as above.
• dict:
– Dicts can be used to specify different replacement values for different ex-
isting values. For example, {'a': 'b', 'y': 'z'} replaces the
value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way the value param-
eter should be None.
– For a DataFrame a dict can specify that different values should be replaced
in different columns. For example, {'a': 1, 'b': 'z'} looks for
the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these
values with whatever is specified in value. The value parameter should not
be None in this case. You can treat this as a special case of passing two lists
except that you are specifying the column to search in.
– For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}},
are read as follows: look in column ‘a’ for the value ‘b’ and replace it with
NaN. The value parameter should be None to use a nested dict in this way.
You can nest regular expressions as well. Note that column names (the top-
level dictionary keys in a nested dictionary) cannot be regular expressions.
• None:
– This means that the regex argument must be a string, compiled regular ex-
pression, or list, dict, ndarray or Series of such elements. If value is also
None then this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with. For a DataFrame a dict of
values can be used to specify which value to use for each column (columns not in
the dict will not be filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : boolean, default False
If True, in place. Note: this will modify any other views on this object (e.g. a
column from a DataFrame). Returns the caller if this is True.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular expressions. If this is
True then to_replace must be a string. Alternatively, this could be a regular
expression or a list, dict, or array of regular expressions in which case to_replace
must be None.
method : {‘pad’, ‘ffill’, ‘bfill’, None}
The method to use for replacement, when to_replace is a scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns DataFrame
Notes
• Regex substitution is performed under the hood with re.sub. The rules for substitution for re.
sub are the same.
• Regular expressions will only substitute on strings, meaning you cannot provide, for example, a
regular expression matching floating point numbers and expect the columns in your frame that have
a numeric dtype to be matched. However, if those floating point numbers are strings, then you can
do this.
• This method has a lot of options. You are encouraged to experiment and play with this method to
gain intuition about how it works.
• When dict is used as the to_replace value, it is like key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
List-like ‘to_replace‘
dict-like ‘to_replace‘
Note that when replacing multiple bool or datetime64 objects, the data types in the to_replace pa-
rameter must match the data type of the value being replaced:
This raises a TypeError because one of the dict keys is not of the correct type for replacement.
Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to under-
stand the peculiarities of the to_replace parameter:
When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value
parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a':
None}, value=None, method=None):
When value=None and to_replace is a scalar, list or tuple, replace uses the method parameter (default
‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2
and ‘b’ in row 4 in this case. The command s.replace('a', None) is actually equivalent to s.
replace(to_replace='a', value=None, method='pad'):
pandas.DataFrame.resample
Notes
Examples
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the
left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels.
For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the
summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if
it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval
as illustrated in the example below this one.
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or
end of rule.
Resample by month using ‘start’ convention. Values are assigned to the first month of the period.
Resample by month using ‘end’ convention. Values are assigned to the last month of the period.
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resam-
pling.
For a DataFrame with a MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
pandas.DataFrame.reset_index
Examples
When we reset the index, the old index is added as a column, and a new sequential index is used:
>>> df.reset_index()
index class max_speed
0 falcon bird 389.0
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
We can use the drop parameter to avoid the old index being added as a column:
>>> df.reset_index(drop=True)
class max_speed
0 bird 389.0
1 bird 24.0
2 mammal 80.5
3 mammal NaN
>>> df.reset_index(level='class')
class speed species
max type
name
falcon bird 389.0 fly
parrot bird 24.0 fly
lion mammal 80.5 run
monkey mammal NaN jump
If we are not dropping the index, by default, it is placed in the top level. We can place it in another level:
When the index is inserted under another level, we can specify under which one with the parameter
col_fill:
pandas.DataFrame.rfloordiv
Notes
Examples
pandas.DataFrame.rmod
Notes
Examples
pandas.DataFrame.rmul
Notes
Examples
pandas.DataFrame.rolling
Returns
a Window or Rolling sub-classed for the particular operation
See also:
Notes
By default, the result is set to the right edge of the window. This can be changed to the center of the
window by setting center=True.
To learn more about the offsets & frequency strings, please see this link.
The recognized win_types are:
• boxcar
• triang
• blackman
• hamming
• bartlett
• parzen
• bohman
• blackmanharris
• nuttall
• barthann
• kaiser (needs beta)
• gaussian (needs std)
• general_gaussian (needs power, width)
• slepian (needs width).
If win_type=None all points are evenly weighted. To learn more about different window types see
scipy.signal window functions.
Examples
Rolling sum with a window length of 2, using the ‘triang’ window type.
Rolling sum with a window length of 2, min_periods defaults to the window length.
>>> df.rolling(2).sum()
B
0 NaN
1 1.0
2 3.0
3 NaN
4 NaN
>>> df
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Contrasting to an integer rolling window, this will roll a variable length window corresponding to the time
period. The default for min_periods is 1.
>>> df.rolling('2s').sum()
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
pandas.DataFrame.round
Examples
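An illustrative sketch (hypothetical frame), rounding uniformly and then per column via a dict:
>>> df = pd.DataFrame({'cats': [0.123, 0.988], 'dogs': [0.216, 0.327]})
>>> df.round(1)
   cats  dogs
0   0.1   0.2
1   1.0   0.3
>>> df.round({'cats': 2, 'dogs': 0})
   cats  dogs
0  0.12   0.0
1  0.99   0.0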
pandas.DataFrame.rpow
Notes
Examples
pandas.DataFrame.rsub
Notes
Examples
pandas.DataFrame.rtruediv
Notes
Examples
pandas.DataFrame.sample
Examples
>>> s = pd.Series(np.random.randn(50))
>>> s.head()
0 -0.038497
1 1.820773
2 -0.972766
3 -1.598270
pandas.DataFrame.select
DataFrame.select(crit, axis=0)
Return data corresponding to axis labels matching criteria
Deprecated since version 0.21.0: Use df.loc[df.index.map(crit)] to select via labels
Parameters crit : function
To be called on each index (label). Should return True or False
axis [int]
Returns
selection [type of caller]
pandas.DataFrame.select_dtypes
DataFrame.select_dtypes(include=None, exclude=None)
Return a subset of the DataFrame’s columns based on the column dtypes.
Parameters include, exclude : scalar or list-like
A selection of dtypes or strings to be included/excluded. At least one of these
parameters must be supplied.
Returns subset : DataFrame
The subset of the frame including the dtypes in include and excluding the
dtypes in exclude.
Raises ValueError
• If both of include and exclude are empty
• If include and exclude have overlapping elements
• If any kind of string dtype is passed in.
Notes
Examples
>>> df.select_dtypes(include='bool')
b
0 True
>>> df.select_dtypes(include=['float64'])
c
0 1.0
1 2.0
2 1.0
3 2.0
4 1.0
5 2.0
>>> df.select_dtypes(exclude=['int'])
b c
0 True 1.0
1 False 2.0
2 True 1.0
3 False 2.0
4 True 1.0
5 False 2.0
pandas.DataFrame.sem
pandas.DataFrame.set_axis
Examples
Series
>>> s
0 1
1 2
DataFrame
pandas.DataFrame.set_index
dataframe [DataFrame]
Examples
>>> df.set_index('month')
sale year
month
1 55 2012
4 40 2014
7 84 2013
10 31 2014
pandas.DataFrame.set_value
pandas.DataFrame.shift
Returns
shifted [DataFrame]
Notes
If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you
would like to extend the index when shifting and preserve the original data.
pandas.DataFrame.skew
pandas.DataFrame.slice_shift
DataFrame.slice_shift(periods=1, axis=0)
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters periods : int
Number of periods to move, can be positive or negative
Returns
shifted [same type as caller]
Notes
While the slice_shift is faster than shift, you may pay for it later during alignment.
pandas.DataFrame.sort_index
pandas.DataFrame.sort_values
Examples
>>> df = pd.DataFrame({
... 'col1' : ['A', 'A', 'B', np.nan, 'D', 'C'],
... 'col2' : [2, 1, 9, 8, 7, 4],
... 'col3': [0, 1, 9, 4, 2, 3],
... })
>>> df
col1 col2 col3
0 A 2 0
1 A 1 1
2 B 9 9
3 NaN 8 4
4 D 7 2
5 C 4 3
Sort by col1
>>> df.sort_values(by=['col1'])
col1 col2 col3
0 A 2 0
1 A 1 1
2 B 9 9
5 C 4 3
4 D 7 2
3 NaN 8 4
Sort Descending
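For example, sorting the same frame in descending order on col1 (an illustrative call; tie order and NaN placement follow the defaults):
>>> df.sort_values(by='col1', ascending=False)
  col1  col2  col3
4    D     7     2
5    C     4     3
2    B     9     9
0    A     2     0
1    A     1     1
3  NaN     8     4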
pandas.DataFrame.sortlevel
pandas.DataFrame.squeeze
DataFrame.squeeze(axis=None)
Squeeze length 1 dimensions.
Parameters axis : None, integer or string axis name, optional
The axis to squeeze if 1-sized.
New in version 0.20.0.
Returns
scalar if 1-sized, else original object
pandas.DataFrame.stack
DataFrame.stack(level=-1, dropna=True)
Stack the prescribed level(s) from columns to index.
Return a reshaped DataFrame or Series having a multi-level index with one or more new inner-most levels
compared to the current DataFrame. The new inner-most levels are created by pivoting the columns of
the current dataframe:
• if the columns have a single level, the output is a Series;
• if the columns have multiple levels, the new index level(s) is (are) taken from the prescribed level(s)
and the output is a DataFrame.
The new index levels are sorted.
Parameters level : int, str, list, default -1
Level(s) to stack from the column axis onto the index axis, defined as one index
or label, or a list of indices or labels.
dropna : bool, default True
Whether to drop rows in the resulting Frame/Series with missing values. Stacking
a column level onto the index axis can create combinations of index and column
values that are missing from the original dataframe. See Examples section.
Returns DataFrame or Series
Stacked dataframe or series.
See also:
DataFrame.unstack Unstack prescribed level(s) from index axis onto column axis.
DataFrame.pivot Reshape dataframe from long format to wide format.
DataFrame.pivot_table Create a spreadsheet-style pivot table as a DataFrame.
Notes
The function is named by analogy with a collection of books being re-organised from being side by side
on a horizontal position (the columns of the dataframe) to being stacked vertically on top of each other
(in the index of the dataframe).
Examples
>>> df_single_level_cols
weight height
cat 0 1
dog 2 3
>>> df_single_level_cols.stack()
cat weight 0
height 1
dog weight 2
height 3
dtype: int64
>>> df_multi_level_cols1
weight
kg pounds
cat 1 2
dog 2 4
>>> df_multi_level_cols1.stack()
weight
cat kg 1
pounds 2
dog kg 2
pounds 4
Missing values
It is common to have missing values when stacking a dataframe with multi-level columns, as the stacked
dataframe typically has more values than the original dataframe. Missing values are filled with NaNs:
>>> df_multi_level_cols2
weight height
kg m
cat 1.0 2.0
dog 3.0 4.0
>>> df_multi_level_cols2.stack()
height weight
cat kg NaN 1.0
m 2.0 NaN
dog kg NaN 3.0
m 4.0 NaN
>>> df_multi_level_cols2.stack(0)
kg m
cat height NaN 2.0
weight 1.0 NaN
dog height NaN 4.0
weight 3.0 NaN
>>> df_multi_level_cols2.stack([0, 1])
cat height m 2.0
weight kg 1.0
dog height m 4.0
weight kg 3.0
dtype: float64
Note that rows where all values are missing are dropped by default but this behaviour can be controlled
via the dropna keyword parameter:
>>> df_multi_level_cols3
weight height
kg m
cat NaN 1.0
dog 2.0 3.0
>>> df_multi_level_cols3.stack(dropna=False)
height weight
cat kg NaN NaN
m 1.0 NaN
dog kg NaN 2.0
pandas.DataFrame.std
pandas.DataFrame.sub
Notes
Examples
pandas.DataFrame.subtract
Notes
Examples
pandas.DataFrame.sum
Examples
This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty
series to be NaN, pass min_count=1.
>>> pd.Series([]).sum(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).sum()
0.0
>>> pd.Series([np.nan]).sum(min_count=1)
nan
pandas.DataFrame.swapaxes
pandas.DataFrame.swaplevel
pandas.DataFrame.tail
DataFrame.tail(n=5)
Return the last n rows.
This function returns last n rows from the object based on position. It is useful for quickly verifying data,
for example, after sorting or appending rows.
Parameters n : int, default 5
Number of rows to select.
Returns type of caller
The last n rows of the caller object.
See also:
Examples
>>> df.tail()
animal
4 monkey
5 parrot
6 shark
7 whale
8 zebra
>>> df.tail(3)
animal
6 shark
7 whale
8 zebra
pandas.DataFrame.take
Examples
We may take elements using negative integers for positive indices, starting from the end of the object, just
like with Python lists.
pandas.DataFrame.to_clipboard
Write a text representation of object to the system clipboard. This can be pasted into Excel, for example.
Parameters excel : bool, default True
• True, use the provided separator, writing in a csv format for allowing easy pasting
into excel.
• False, write a string representation of the object to the clipboard.
sep : str, default '\t'
Field delimiter.
**kwargs
These parameters will be passed to DataFrame.to_csv.
See also:
Notes
Examples
We can omit the index by passing the keyword index and setting it to False.
pandas.DataFrame.to_csv
pandas.DataFrame.to_dense
DataFrame.to_dense()
Return dense representation of NDFrame (as opposed to sparse)
pandas.DataFrame.to_dict
Examples
>>> df.to_dict('split')
{'index': ['a', 'b'], 'columns': ['col1', 'col2'],
'data': [[1.0, 0.5], [2.0, 0.75]]}
>>> df.to_dict('records')
[{'col1': 1.0, 'col2': 0.5}, {'col1': 2.0, 'col2': 0.75}]
>>> df.to_dict('index')
{'a': {'col1': 1.0, 'col2': 0.5}, 'b': {'col1': 2.0, 'col2': 0.75}}
>>> dd = defaultdict(list)
>>> df.to_dict('records', into=dd)
[defaultdict(<class 'list'>, {'col1': 1.0, 'col2': 0.5}),
defaultdict(<class 'list'>, {'col1': 2.0, 'col2': 0.75})]
pandas.DataFrame.to_excel
Notes
If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can
be used to save different DataFrames to one workbook:
For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.
pandas.DataFrame.to_feather
DataFrame.to_feather(fname)
write out the binary feather-format for DataFrames
New in version 0.20.0.
Parameters fname : str
string file path
pandas.DataFrame.to_gbq
See also:
pandas.DataFrame.to_hdf
Examples
>>> import os
>>> os.remove('data.h5')
pandas.DataFrame.to_html
formatter function to apply to columns’ elements if they are floats, default None.
The result of this function must be a unicode string.
sparsify : bool, optional
Set to False for a DataFrame with a hierarchical index to print every multiindex
key at each row, default True
index_names : bool, optional
Prints the names of the indexes, default True
line_width : int, optional
Width to wrap a line in characters, default no wrap
table_id : str, optional
id for the <table> element create by to_html
New in version 0.23.0.
justify : str, default None
How to justify the column labels. If None uses the option from the print configu-
ration (controlled by set_option), ‘right’ out of the box. Valid values are
• left
• right
• center
• justify
• justify-all
• start
• end
• inherit
• match-parent
• initial
• unset
Returns
formatted [string (or unicode, depending on data and options)]
pandas.DataFrame.to_json
orient : string
Indication of expected JSON string format.
• Series
– default is ‘index’
– allowed values are: {‘split’,’records’,’index’}
• DataFrame
– default is ‘columns’
– allowed values are: {‘split’,’records’,’index’,’columns’,’values’}
• The format of the JSON string
– ‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [val-
ues]}
– ‘records’ : list like [{column -> value}, . . . , {column -> value}]
– ‘index’ : dict like {index -> {column -> value}}
– ‘columns’ : dict like {column -> {index -> value}}
– ‘values’ : just the values array
– ‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}} describing the data,
and the data component is like orient='records'.
Changed in version 0.20.0.
date_format : {None, ‘epoch’, ‘iso’}
Type of date conversion. ‘epoch’ = epoch milliseconds, ‘iso’ = ISO8601. The
default depends on the orient. For orient='table', the default is ‘iso’. For
all other orients, the default is ‘epoch’.
double_precision : int, default 10
The number of decimal places to use when encoding floating point values.
force_ascii : boolean, default True
Force encoded string to be ASCII.
date_unit : string, default ‘ms’ (milliseconds)
The time unit to encode to, governs timestamp and ISO8601 precision. One of
‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond, microsecond, and nanosecond re-
spectively.
default_handler : callable, default None
Handler to call if object cannot otherwise be converted to a suitable format for
JSON. Should receive a single argument which is the object to convert and return
a serialisable object.
lines : boolean, default False
If ‘orient’ is ‘records’ write out line delimited json format. Will throw ValueError
if incorrect ‘orient’ since others are not list like.
New in version 0.19.0.
compression : {None, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’}
A string representing the compression to use in the output file, only used when
the first argument is a filename.
New in version 0.21.0.
index : boolean, default True
Whether to include the index values in the JSON string. Not including the index
(index=False) is only supported when orient is ‘split’ or ‘table’.
New in version 0.23.0.
See also:
pandas.read_json
Examples
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not pre-
served with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> df.to_json(orient='columns')
'{"col 1":{"row 1":"a","row 2":"c"},"col 2":{"row 1":"b","row 2":"d"}}'
>>> df.to_json(orient='values')
'[["a","b"],["c","d"]]'
>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
{"name": "col 1", "type": "string"},
{"name": "col 2", "type": "string"}],
"primaryKey": "index",
"pandas_version": "0.20.0"},
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
pandas.DataFrame.to_latex
pandas.DataFrame.to_msgpack
pandas.DataFrame.to_panel
DataFrame.to_panel()
Transform long (stacked) format (DataFrame) into wide (3D, Panel) format.
Deprecated since version 0.20.0.
Currently the index of the DataFrame must be a 2-level MultiIndex. This may be generalized later
Returns
panel [Panel]
pandas.DataFrame.to_parquet
Notes
Examples
pandas.DataFrame.to_period
pandas.DataFrame.to_pickle
read_pickle Load pickled pandas object (or any object) from file.
DataFrame.to_hdf Write DataFrame to an HDF5 file.
DataFrame.to_sql Write DataFrame to a SQL database.
DataFrame.to_parquet Write a DataFrame to the binary parquet format.
Examples
>>> import os
>>> os.remove("./dummy.pkl")
pandas.DataFrame.to_records
DataFrame.to_records(index=True, convert_datetime64=None)
Convert DataFrame to a NumPy record array.
Index will be put in the ‘index’ field of the record array if requested.
Parameters index : boolean, default True
Include index in resulting record array, stored in ‘index’ field.
convert_datetime64 : boolean, default None
Deprecated since version 0.23.0.
Whether to convert the index to datetime.datetime if it is a DatetimeIndex.
Returns
y [numpy.recarray]
See also:
Examples
>>> df.to_records(index=False)
rec.array([(1, 0.5 ), (2, 0.75)],
dtype=[('A', '<i8'), ('B', '<f8')])
The timestamp conversion can be disabled so NumPy’s datetime64 data type is used instead:
>>> df.to_records(convert_datetime64=False)
rec.array([('2018-01-01T09:00:00.000000000', 1, 0.5 ),
('2018-01-01T09:01:00.000000000', 2, 0.75)],
dtype=[('index', '<M8[ns]'), ('A', '<i8'), ('B', '<f8')])
pandas.DataFrame.to_sparse
DataFrame.to_sparse(fill_value=None, kind=’block’)
Convert to SparseDataFrame
Parameters
fill_value [float, default NaN]
kind [{‘block’, ‘integer’}]
Returns
y [SparseDataFrame]
pandas.DataFrame.to_stata
Examples
>>> data.to_stata('./data_file.dta')
Or with dates:
pandas.DataFrame.to_string
Returns
formatted [string (or unicode, depending on data and options)]
pandas.DataFrame.to_timestamp
pandas.DataFrame.to_xarray
DataFrame.to_xarray()
Return an xarray object from the pandas object.
Returns
a DataArray for a Series
a Dataset for a DataFrame
a DataArray for higher dims
Notes
Examples
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 3)
Coordinates:
* index (index) int64 0 1 2
Data variables:
A (index) int64 1 1 2
B (index) object 'foo' 'bar' 'foo'
C (index) float64 4.0 5.0 6.0
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (A: 2, B: 2)
Coordinates:
* B (B) object 'bar' 'foo'
* A (A) int64 1 2
Data variables:
C (B, A) float64 5.0 nan 4.0 6.0
>>> p = pd.Panel(np.arange(24).reshape(4,3,2),
...              items=list('ABCD'),
...              major_axis=pd.date_range('20130101', periods=3),
...              minor_axis=['first', 'second'])
>>> p
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: A to D
Major_axis axis: 2013-01-01 00:00:00 to 2013-01-03 00:00:00
Minor_axis axis: first to second
>>> p.to_xarray()
<xarray.DataArray (items: 4, major_axis: 3, minor_axis: 2)>
array([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]],
[[18, 19],
[20, 21],
[22, 23]]])
Coordinates:
pandas.DataFrame.transform
Examples
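An illustrative sketch (hypothetical frame), applying an element-wise function that returns a result of the same shape:
>>> df = pd.DataFrame({'A': [0, 1, 2], 'B': [3, 4, 5]})
>>> df.transform(lambda x: x + 1)
   A  B
0  1  4
1  2  5
2  3  6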
pandas.DataFrame.transpose
DataFrame.transpose(*args, **kwargs)
Transpose index and columns.
Reflect the DataFrame over its main diagonal by writing rows as columns and vice-versa. The property T
is an accessor to the method transpose().
Parameters copy : bool, default False
If True, the underlying data is copied. Otherwise (default), no copy is made if
possible.
*args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with
numpy.
Returns DataFrame
The transposed DataFrame.
See also:
Notes
Transposing a DataFrame with mixed dtypes will result in a homogeneous DataFrame with the object
dtype. In such a case, a copy of the data is always made.
Examples
When the dtype is homogeneous in the original DataFrame, we get a transposed DataFrame with the same
dtype:
>>> df1.dtypes
col1 int64
col2 int64
dtype: object
>>> df1_transposed.dtypes
When the DataFrame has mixed dtypes, we get a transposed DataFrame with the object dtype:
>>> df2.dtypes
name object
score float64
employed bool
kids int64
dtype: object
>>> df2_transposed.dtypes
0 object
1 object
dtype: object
pandas.DataFrame.truediv
Fill existing missing (NaN) values, and any new element needed for successful
DataFrame alignment, with this value before computation. If data in both corre-
sponding DataFrame locations is missing the result will be missing
Returns
result [DataFrame]
See also:
DataFrame.rtruediv
Notes
Examples
pandas.DataFrame.truncate
Notes
If the index being truncated contains only datetime values, before and after may be specified as strings
instead of Timestamps.
Examples
>>> df.truncate(before=pd.Timestamp('2016-01-05'),
... after=pd.Timestamp('2016-01-10')).tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
Because the index is a DatetimeIndex containing only dates, we can specify before and after as strings.
They will be coerced to Timestamps before truncation.
Note that truncate assumes a 0 value for any unspecified time component (midnight). This differs
from partial string slicing, which returns any partially matching dates.
pandas.DataFrame.tshift
Notes
If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those
attributes exist, a ValueError is thrown
pandas.DataFrame.tz_convert
Parameters
tz [string or pytz.timezone object]
axis [the axis to convert]
level : int, str, default None
If axis is a MultiIndex, convert a specific level. Otherwise must be None.
copy : boolean, default True
Also make a copy of the underlying data
Raises TypeError
If the axis is tz-naive.
pandas.DataFrame.tz_localize
pandas.DataFrame.unstack
DataFrame.unstack(level=-1, fill_value=None)
Pivot a level of the (necessarily hierarchical) index labels, returning a DataFrame having a new level of
column labels whose inner-most level consists of the pivoted index labels. If the index is not a MultiIndex,
the output will be a Series (the analogue of stack when the columns are not a MultiIndex). The level
involved will automatically get sorted.
Parameters level : int, string, or list of these, default -1 (last level)
Level(s) of index to unstack, can pass level name
Examples
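The Series s is not constructed in this excerpt; an assumed MultiIndex setup consistent with the output is:
>>> index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'),
...                                    ('two', 'a'), ('two', 'b')])  # assumed setup
>>> s = pd.Series(np.arange(1.0, 5.0), index=index)
>>> s
one  a    1.0
     b    2.0
two  a    3.0
     b    4.0
dtype: float64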
>>> s.unstack(level=-1)
a b
one 1.0 2.0
two 3.0 4.0
>>> s.unstack(level=0)
one two
a 1.0 3.0
b 2.0 4.0
>>> df = s.unstack(level=0)
>>> df.unstack()
one a 1.0
b 2.0
two a 3.0
b 4.0
dtype: float64
pandas.DataFrame.update
Parameters other : DataFrame, or object coercible into a DataFrame
Should have at least one matching index/column label with the original DataFrame. If a Series is passed, its name attribute must be set, and that will be used as the column name to align with the original DataFrame.
join : {‘left’}, default ‘left’
Only left join is implemented, keeping the index and columns of the original
object.
overwrite : bool, default True
How to handle non-NA values for overlapping keys:
• True: overwrite original DataFrame’s values with values from other.
• False: only update values that are NA in the original DataFrame.
filter_func : callable(1d-array) -> boolean 1d-array, optional
Can choose to replace values other than NA. Return True for values that should
be updated.
raise_conflict : bool, default False
If True, will raise a ValueError if the DataFrame and other both contain non-NA
data in the same place.
Raises ValueError
When raise_conflict is True and there’s overlapping non-NA data.
See also:
Examples
The DataFrame’s length does not increase as a result of the update, only values at matching index/column
labels are updated.
If other contains NaNs the corresponding values are not updated in the original dataframe.
pandas.DataFrame.var
pandas.DataFrame.where
Returns
wh [same type as caller]
See also:
DataFrame.mask()
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is True the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4    4.0
dtype: float64
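As a sketch of the DataFrame case (the names df1 and m below are illustrative assumptions), df1.where(m, -df1) keeps the elements where the condition holds and substitutes from the other operand elsewhere, mirroring np.where(m, df1, -df1):
>>> df1 = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])  # assumed example
>>> m = df1 % 3 == 0
>>> df1.where(m, -df1)
   A  B
0  0 -1
1 -2  3
2 -4 -5
3  6 -7
4 -8  9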
pandas.DataFrame.xs
Notes
Examples
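The frame used in the first example below is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> df = pd.DataFrame({'A': [4, 4, 9], 'B': [5, 0, 7], 'C': [2, 9, 3]},
...                   index=['a', 'b', 'c'])  # assumed setup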
>>> df
A B C
a 4 5 2
b 4 0 9
c 9 7 3
>>> df.xs('a')
A 4
B    5
C    2
Name: a, dtype: int64
>>> df
A B C D
first second third
bar one 1 4 1 8 9
two 1 7 5 5 0
baz one 1 6 6 8 0
three 2 5 3 5 3
>>> df.xs(('baz', 'three'))
A B C D
third
2 5 3 5 3
>>> df.xs('one', level=1)
A B C D
first third
bar 1 4 1 8 9
baz 1 6 6 8 0
>>> df.xs(('baz', 2), level=[0, 'third'])
A B C D
second
three 5 3 5 3
Axes
34.4.2.1 pandas.DataFrame.is_copy
DataFrame.is_copy
34.4.3 Conversion
34.4.4.1 pandas.DataFrame.__iter__
DataFrame.__iter__()
Iterate over the info axis.
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
DataFrame.add(other[, axis, level, fill_value])      Addition of dataframe and other, element-wise (binary operator add).
DataFrame.sub(other[, axis, level, fill_value])      Subtraction of dataframe and other, element-wise (binary operator sub).
DataFrame.mul(other[, axis, level, fill_value])      Multiplication of dataframe and other, element-wise (binary operator mul).
DataFrame.div(other[, axis, level, fill_value])      Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.truediv(other[, axis, level, ...])         Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.floordiv(other[, axis, level, ...])        Integer division of dataframe and other, element-wise (binary operator floordiv).
DataFrame.mod(other[, axis, level, fill_value])      Modulo of dataframe and other, element-wise (binary operator mod).
DataFrame.pow(other[, axis, level, fill_value])      Exponential power of dataframe and other, element-wise (binary operator pow).
DataFrame.dot(other)                                 Matrix multiplication with DataFrame or Series objects.
DataFrame.radd(other[, axis, level, fill_value])     Addition of dataframe and other, element-wise (binary operator radd).
DataFrame.rsub(other[, axis, level, fill_value])     Subtraction of dataframe and other, element-wise (binary operator rsub).
DataFrame.rmul(other[, axis, level, fill_value])     Multiplication of dataframe and other, element-wise (binary operator rmul).
DataFrame.rdiv(other[, axis, level, fill_value])     Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rtruediv(other[, axis, level, ...])        Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.append(other[, ignore_index, ...])         Append rows of other to the end of this frame, returning a new object.
DataFrame.assign(**kwargs)                           Assign new columns to a DataFrame, returning a new object (a copy) with the new columns added to the original ones.
DataFrame.join(other[, on, how, lsuffix, ...])       Join columns with other DataFrame either on index or on a key column.
DataFrame.merge(right[, how, on, left_on, ...])      Merge DataFrame objects by performing a database-style join operation by columns or indexes.
DataFrame.update(other[, join, overwrite, ...])      Modify in place using non-NA values from another DataFrame.
34.4.13 Plotting
DataFrame.plot is both a callable method and a namespace attribute for specific plotting methods of the form
DataFrame.plot.<kind>.
34.4.13.1 pandas.DataFrame.plot.area
34.4.13.2 pandas.DataFrame.plot.bar
Examples
Basic plot.
Plot a whole dataframe to a bar plot. Each column is assigned a distinct color, and each row is nested in a group
along the horizontal axis.
Instead of nesting, the figure can be split by column with subplots=True. In this case, a numpy.ndarray of matplotlib.axes.Axes is returned.
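A minimal sketch (the frame, column names, and the rot keyword here are illustrative assumptions):
>>> df = pd.DataFrame({'speed': [0.1, 17.5, 40, 48, 52, 69, 88],   # assumed example
...                    'lifespan': [2, 8, 70, 1.5, 25, 12, 28]},
...                   index=['snail', 'pig', 'elephant', 'rabbit',
...                          'giraffe', 'coyote', 'horse'])
>>> ax = df.plot.bar(rot=0)
>>> axes = df.plot.bar(rot=0, subplots=True)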
34.4.13.3 pandas.DataFrame.plot.barh
Examples
Basic example
34.4.13.4 pandas.DataFrame.plot.box
DataFrame.plot.box(by=None, **kwds)
Make a box plot of the DataFrame columns.
A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box
extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2). The whiskers extend from
the edges of the box to show the range of the data. The position of the whiskers is set by default to 1.5*IQR (IQR
= Q3 - Q1) from the edges of the box. Outlier points are those past the end of the whiskers.
For further details see Wikipedia’s entry for boxplot.
A consideration when using this chart is that the box and the whiskers can overlap, which is very common when
plotting small sets of data.
Parameters by : string or sequence
Column in the DataFrame to group by.
**kwds : optional
Additional keywords are documented in pandas.DataFrame.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
See also:
Examples
Draw a box plot from a DataFrame with four columns of randomly generated data.
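A construction along these lines (an assumed example) produces such a plot:
>>> data = np.random.randn(25, 4)  # assumed setup
>>> df = pd.DataFrame(data, columns=list('ABCD'))
>>> ax = df.plot.box()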
34.4.13.5 pandas.DataFrame.plot.density
ind : NumPy array or integer, optional
Evaluation points for the estimated PDF. If None (default), 1000 equally spaced
points are used. If ind is a NumPy array, the KDE is evaluated at the points passed.
If ind is an integer, ind number of equally spaced points are used.
**kwds : optional
Additional keyword arguments are documented in pandas.DataFrame.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
See also:
Examples
Given several Series of points randomly sampled from unknown distributions, estimate their PDFs using KDE
with automatic bandwidth determination and plot the results, evaluating them at 1000 equally spaced points
(default):
>>> df = pd.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> ax = df.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can lead to overfitting, while using a large
bandwidth value may result in underfitting:
>>> ax = df.plot.kde(bw_method=0.3)
>>> ax = df.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the plot of the estimated PDF:
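For instance, evaluating the estimate at a handful of fixed points (the specific values here are an illustrative assumption):
>>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6])  # assumed example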
34.4.13.6 pandas.DataFrame.plot.hexbin
Examples
The following examples are generated with random data from a normal distribution.
>>> n = 10000
>>> df = pd.DataFrame({'x': np.random.randn(n),
... 'y': np.random.randn(n)})
>>> ax = df.plot.hexbin(x='x', y='y', gridsize=20)
The next example uses C and np.sum as reduce_C_function. Note that the 'observations' values range from 1 to 5, but the resulting plot shows values above 25. This is because of the reduce_C_function.
>>> n = 500
>>> df = pd.DataFrame({
... 'coord_x': np.random.uniform(-3, 3, size=n),
... 'coord_y': np.random.uniform(30, 50, size=n),
... 'observations': np.random.randint(1,5, size=n)
... })
>>> ax = df.plot.hexbin(x='coord_x',
... y='coord_y',
... C='observations',
... reduce_C_function=np.sum,
...                        )
34.4.13.7 pandas.DataFrame.plot.hist
Examples
When we roll a die 6000 times, we expect each value to occur around 1000 times. But when we roll two dice and sum the results, the distribution is going to be quite different. A histogram illustrates those distributions.
>>> df = pd.DataFrame(
... np.random.randint(1, 7, 6000),
... columns = ['one'])
>>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
>>> ax = df.plot.hist(bins=12, alpha=0.5)
34.4.13.8 pandas.DataFrame.plot.kde
The method used to calculate the estimator bandwidth. This can be ‘scott’, ‘sil-
verman’, a scalar constant or a callable. If None (default), ‘scott’ is used. See
scipy.stats.gaussian_kde for more information.
ind : NumPy array or integer, optional
Evaluation points for the estimated PDF. If None (default), 1000 equally spaced
points are used. If ind is a NumPy array, the KDE is evaluated at the points passed.
If ind is an integer, ind number of equally spaced points are used.
**kwds : optional
Additional keyword arguments are documented in pandas.DataFrame.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
See also:
Examples
Given several Series of points randomly sampled from unknown distributions, estimate their PDFs using KDE
with automatic bandwidth determination and plot the results, evaluating them at 1000 equally spaced points
(default):
>>> df = pd.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> ax = df.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can lead to overfitting, while using a large
bandwidth value may result in underfitting:
>>> ax = df.plot.kde(bw_method=0.3)
>>> ax = df.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the plot of the estimated PDF:
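For instance, evaluating the estimate at a handful of fixed points (the specific values here are an illustrative assumption):
>>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6])  # assumed example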
34.4.13.9 pandas.DataFrame.plot.line
x : int or str, optional
Columns to use for the horizontal axis. Either the location or the label of the columns to be used. By default, it will use the DataFrame indices.
y : int, str, or list of them, optional
The values to be plotted. Either the location or the label of the columns to be used.
By default, it will use the remaining DataFrame numeric columns.
**kwds
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns axes : matplotlib.axes.Axes or numpy.ndarray
Returns an ndarray when subplots=True.
See also:
Examples
The following example shows the populations for some animals over the years.
>>> df = pd.DataFrame({
... 'pig': [20, 18, 489, 675, 1776],
... 'horse': [4, 25, 281, 600, 1900]
... }, index=[1990, 1997, 2003, 2009, 2014])
>>> lines = df.plot.line()
34.4.13.10 pandas.DataFrame.plot.pie
DataFrame.plot.pie(y=None, **kwds)
Generate a pie plot.
A pie plot is a proportional representation of the numerical data in a column. This function wraps
matplotlib.pyplot.pie() for the specified column. If no column reference is passed and
subplots=True a pie plot is drawn for each numerical column independently.
Parameters y : int or label, optional
Label or position of the column to plot. If not provided, subplots=True argu-
ment must be passed.
**kwds
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns axes : matplotlib.axes.Axes or np.ndarray of them.
Examples
In the example below we have a DataFrame with information about planets' mass and radius. We pass the 'mass' column to the pie function to get a pie plot.
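A sketch of such a call (the planet data below is an assumed example):
>>> df = pd.DataFrame({'mass': [0.330, 4.87, 5.97],   # assumed setup
...                    'radius': [2439.7, 6051.8, 6378.1]},
...                   index=['Mercury', 'Venus', 'Earth'])
>>> plot = df.plot.pie(y='mass', figsize=(5, 5))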
34.4.13.11 pandas.DataFrame.plot.scatter
• A column name or position whose values will be used to color the marker points
according to a colormap.
**kwds
Keyword arguments to pass on to pandas.DataFrame.plot().
Returns
axes [matplotlib.axes.Axes or numpy.ndarray of them]
See also:
Examples
Let’s see how to draw a scatter plot using coordinates from the values in a DataFrame’s columns.
>>> df = pd.DataFrame([[5.1, 3.5, 0], [4.9, 3.0, 0], [7.0, 3.2, 1],
... [6.4, 3.2, 1], [5.9, 3.0, 2]],
... columns=['length', 'width', 'species'])
>>> ax1 = df.plot.scatter(x='length',
... y='width',
... c='DarkBlue')
34.4.15 Sparse
34.4.15.1 pandas.SparseDataFrame.to_coo
SparseDataFrame.to_coo()
Return the contents of the frame as a sparse SciPy COO matrix.
New in version 0.20.0.
Returns coo_matrix : scipy.sparse.spmatrix
If the caller is heterogeneous and contains booleans or objects, the result will be of
dtype=object. See Notes.
Notes
The dtype will be the lowest-common-denominator type (implicit upcasting); that is to say if the dtypes (even
of numeric types) are mixed, the one that accommodates all will be chosen.
e.g., if the dtypes are float16 and float32, dtype will be upcast to float32. By numpy.find_common_type convention, mixing int64 and uint64 will result in a float64 dtype.
34.5 Panel
34.5.1 Constructor
Panel([data, items, major_axis, minor_axis, . . . ]) (DEPRECATED) Represents wide format panel data,
stored as 3-dimensional array
34.5.1.1 pandas.Panel
Attributes
pandas.Panel.at
Panel.at
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use at if you only need to get or set a single
value in a DataFrame or Series.
Raises KeyError
When label does not exist in DataFrame
See also:
Examples
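The frame queried below is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
...                   index=[4, 5, 6], columns=['A', 'B', 'C'])  # assumed setup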
>>> df.loc[5].at['B']
4
pandas.Panel.axes
Panel.axes
Return index label(s) of the internal NDFrame
pandas.Panel.blocks
Panel.blocks
Internal property, property synonym for as_blocks()
Deprecated since version 0.21.0.
pandas.Panel.dtypes
Panel.dtypes
Return the dtypes in the DataFrame.
This returns a Series with the data type of each column. The result’s index is the original DataFrame’s
columns. Columns with mixed types are stored with the object dtype. See the User Guide for more.
Returns pandas.Series
The data type of each column.
See also:
Examples
pandas.Panel.empty
Panel.empty
Indicator whether DataFrame is empty.
True if DataFrame is entirely empty (no items), meaning any of the axes are of length 0.
Returns bool
If DataFrame is empty, return True, if not return False.
See also:
pandas.Series.dropna, pandas.DataFrame.dropna
Notes
If DataFrame contains only NaNs, it is still not considered empty. See the example below.
Examples
If we only have NaNs in our DataFrame, it is not considered empty! We will need to drop the NaNs to
make the DataFrame empty:
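A minimal sketch (an assumed example):
>>> df = pd.DataFrame({'A': [np.nan]})  # assumed example
>>> df
    A
0 NaN
>>> df.empty
False
>>> df.dropna().empty
True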
pandas.Panel.ftypes
Panel.ftypes
Return the ftypes (indication of sparse/dense and dtype) in DataFrame.
This returns a Series with the data type of each column. The result’s index is the original DataFrame’s
columns. Columns with mixed types are stored with the object dtype. See the User Guide for more.
Returns pandas.Series
The data type and indication of sparse/dense of each column.
See also:
Notes
Sparse data should have the same dtypes as its dense representation.
Examples
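The array arr used below is not shown in this excerpt; an assumed construction consistent with the output is:
>>> arr = np.random.RandomState(0).randn(100, 4)  # assumed setup
>>> arr[arr < .8] = np.nan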
>>> pd.SparseDataFrame(arr).ftypes
0 float64:sparse
1 float64:sparse
2 float64:sparse
3 float64:sparse
dtype: object
pandas.Panel.iat
Panel.iat
Access a single value for a row/column pair by integer position.
Similar to iloc, in that both provide integer-based lookups. Use iat if you only need to get or set a
single value in a DataFrame or Series.
Raises IndexError
When integer position is out of bounds
See also:
Examples
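The frame used below is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
...                   columns=['A', 'B', 'C'])  # assumed setup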
>>> df.iat[1, 2]
1
>>> df.iat[1, 2] = 10
>>> df.iat[1, 2]
10
>>> df.loc[0].iat[1]
2
pandas.Panel.iloc
Panel.iloc
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used
with a boolean array.
Allowed inputs are:
• An integer, e.g. 5.
• A list or array of integers, e.g. [4, 3, 0].
• A slice object with ints, e.g. 1:7.
• A boolean array.
• A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
.iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which
allow out-of-bounds indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position
pandas.Panel.items
Panel.items
pandas.Panel.ix
Panel.ix
A primarily label-location based indexer, with integer position fallback.
Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
.ix[] supports mixed integer and label based access. It is primarily label based, but will fall back to
integer positional access unless the corresponding axis is of integer type.
.ix is the most general indexer and will support any of the inputs in .loc and .iloc. .ix also
supports floating point label schemes. .ix is exceptionally useful when dealing with mixed positional
and label based hierarchical indexes.
However, when an axis is integer based, ONLY label based access and not positional access is supported.
Thus, in such cases, it’s usually better to be explicit and use .iloc or .loc.
See more at Advanced Indexing.
pandas.Panel.loc
Panel.loc
Access a group of rows and columns by label(s) or a boolean array.
.loc[] is primarily label based, but may also be used with a boolean array.
Allowed inputs are:
• A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an
integer position along the index).
• A list or array of labels, e.g. ['a', 'b', 'c'].
• A slice object with labels, e.g. 'a':'f'.
Warning: Note that contrary to usual python slices, both the start and the stop are included
• A boolean array of the same length as the axis being sliced, e.g. [True, False, True].
• A callable function with one argument (the calling Series, DataFrame or Panel) and that returns
valid output for indexing (one of the above)
See more at Selection by Label
Raises KeyError:
when any items are not found
See also:
Examples
Getting values
>>> df.loc['viper']
max_speed 4
shield 5
Name: viper, dtype: int64
Slice with labels for row and single label for column. As mentioned above, note that both the start and
stop of the slice are included.
Setting values
Set value for all items matching the list of labels
>>> df.loc['cobra'] = 10
>>> df
max_speed shield
cobra 10 10
viper 4 50
sidewinder 7 50
Slice with integer labels for rows. As mentioned above, note that both the start and stop of the slice are
included.
>>> df.loc[7:9]
max_speed shield
7 1 2
8 4 5
9 7 8
>>> df.loc['cobra']
max_speed shield
mark i 12 2
mark ii 0 4
Single label for row and column. Similar to passing in a tuple, this returns a Series.
Single tuple for the index with a single label for the column
pandas.Panel.major_axis
Panel.major_axis
pandas.Panel.minor_axis
Panel.minor_axis
pandas.Panel.ndim
Panel.ndim
Return an int representing the number of axes / array dimensions.
Return 1 if Series. Otherwise return 2 if DataFrame.
See also:
ndarray.ndim
Examples
pandas.Panel.shape
Panel.shape
Return a tuple of axis dimensions
pandas.Panel.size
Panel.size
Return an int representing the number of elements in this object.
Return the number of rows if Series. Otherwise return the number of rows times number of columns if
DataFrame.
See also:
ndarray.size
Examples
pandas.Panel.values
Panel.values
Return a Numpy representation of the DataFrame.
Only the values in the DataFrame will be returned, the axes labels will be removed.
Returns numpy.ndarray
The values of the DataFrame.
See also:
Notes
The dtype will be the lowest-common-denominator dtype (implicit upcasting); that is to say if the dtypes (even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if you are not dealing with the blocks.
e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. If dtypes are int32 and uint8,
dtype will be upcast to int32. By numpy.find_common_type() convention, mixing int64 and uint64
will result in a float64 dtype.
Examples
A DataFrame where all columns are the same type (e.g., int64) results in an array of the same type.
A DataFrame with mixed type columns (e.g., str/object, int64, float32) results in an ndarray of the broadest type that accommodates these mixed types (e.g., object).
is_copy
Methods
pandas.Panel.abs
Panel.abs()
Return a Series/DataFrame with absolute numeric value of each element.
This function only applies to elements that are all numeric.
Returns abs
Series/DataFrame containing the absolute value of each element.
See also:
Notes
For complex inputs, 1.2 + 1j, the absolute value is √(a² + b²).
Examples
Select rows with data closest to a certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
pandas.Panel.add
Panel.add(other, axis=0)
Addition of series and other, element-wise (binary operator add). Equivalent to panel + other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.radd
pandas.Panel.add_prefix
Panel.add_prefix(prefix)
Prefix labels with string prefix.
For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed.
Parameters prefix : str
The string to add before each label.
Returns Series or DataFrame
New Series or DataFrame with updated labels.
See also:
Examples
>>> s.add_prefix('item_')
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
>>> df.add_prefix('col_')
col_A col_B
0 1 3
1 2 4
2 3 5
3 4 6
pandas.Panel.add_suffix
Panel.add_suffix(suffix)
Suffix labels with string suffix.
For Series, the row labels are suffixed. For DataFrame, the column labels are suffixed.
Parameters suffix : str
Examples
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
>>> df.add_suffix('_col')
A_col B_col
0 1 3
1 2 4
2 3 5
3 4 6
pandas.Panel.align
Panel.align(other, **kwargs)
Align two objects on their axes with the specified join method for each axis Index
Parameters
other [DataFrame or Series]
join [{‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’]
axis : allowed axis of the other object, default None
pandas.Panel.all
Examples
Series
DataFrames
Create a dataframe from a dictionary.
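An assumed construction consistent with the results below:
>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})  # assumed setup
>>> df
   col1   col2
0  True   True
1  True  False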
>>> df.all()
col1 True
col2 False
dtype: bool
>>> df.all(axis='columns')
0 True
1 False
dtype: bool
>>> df.all(axis=None)
False
pandas.Panel.any
Unlike DataFrame.all(), this performs an or operation. If any of the values along the specified axis
is True, this will return True.
Parameters axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced.
• 0 / ‘index’ : reduce the index, return a Series whose index is the original column
labels.
• 1 / ‘columns’ : reduce the columns, return a Series whose index is the original
index.
• None : reduce all axes, return a scalar.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a DataFrame.
bool_only : boolean, default None
Include only boolean columns. If None, will attempt to use everything, then use
only boolean data. Not implemented for Series.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for compatibility with
NumPy.
Returns
any [DataFrame or Panel (if level specified)]
See also:
Examples
Series
For Series input, the output is a scalar indicating whether any element is True.
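For instance (an assumed minimal example):
>>> pd.Series([False, False]).any()  # assumed example
False
>>> pd.Series([True, False]).any()
True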
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
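The second axis='columns' call below evidently uses a different frame; an assumed construction that matches its output is:
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})  # assumed setup
>>> df
       A  B
0   True  1
1  False  0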
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
>>> df.any(axis=None)
True
>>> pd.DataFrame([]).any()
Series([], dtype: bool)
pandas.Panel.apply
Returns
result [Panel, DataFrame, or Series]
Examples
Equivalent to previous:
>>> p.apply(lambda x: x.sum(), axis='major')
Return the shapes of each DataFrame over axis 2 (i.e., the shapes of items x major), as a Series
>>> p.apply(lambda x: x.shape, axis=(0,1))
pandas.Panel.as_blocks
Panel.as_blocks(copy=True)
Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
Deprecated since version 0.21.0.
NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
Parameters
copy [boolean, default True]
Returns
values [a dict of dtype -> Constructor Types]
pandas.Panel.as_matrix
Panel.as_matrix()
Convert the frame to its Numpy-array representation.
Deprecated since version 0.23.0: Use DataFrame.values() instead.
Parameters columns: list, optional, default:None
If None, return all columns, otherwise, returns specified columns.
Returns values : ndarray
If the caller is heterogeneous and contains booleans or objects, the result will be
of dtype=object. See Notes.
See also:
pandas.DataFrame.values
Notes
pandas.Panel.asfreq
Notes
To learn more about the frequency strings, please see this link.
Examples
pandas.Panel.asof
Panel.asof(where, subset=None)
The last row without any NaN is taken (or the last row without NaN considering only the subset of
columns in the case of a DataFrame)
New in version 0.19.0: For DataFrame
If there is no good value, NaN is returned for a Series, or a Series of NaN values for a DataFrame.
Parameters
where [date or array of dates]
subset : string or list of strings, default None
if not None use these columns for NaN propagation
Returns (if where is scalar)
• value or NaN if input is Series
• Series if input is DataFrame
See also:
merge_asof
Notes
pandas.Panel.astype
Returns
casted [type of caller]
See also:
Examples
>>> ser.astype('category')
0 1
1 2
dtype: category
Categories (2, int64): [1, 2]
Note that using copy=False and changing data on a new pandas object may propagate changes:
>>> s1 = pd.Series([1,2])
>>> s2 = s1.astype('int64', copy=False)
>>> s2[0] = 10
>>> s1 # note that s1[0] has changed too
0 10
1 2
dtype: int64
pandas.Panel.at_time
Panel.at_time(time, asof=False)
Select values at particular time of day (e.g. 9:30AM).
Parameters
time [datetime.time or string]
Returns
values_at_time [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
>>> ts.at_time('12:00')
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4
pandas.Panel.between_time
See also:
Examples
You get the times that are not between two times by setting start_time later than end_time:
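A sketch of both calls (the index and values below are assumed for illustration):
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')  # assumed setup
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts.between_time('0:15', '0:45')
                     A
2018-04-10 00:20:00  2
2018-04-11 00:40:00  3
>>> ts.between_time('0:45', '0:15')
                     A
2018-04-09 00:00:00  1
2018-04-12 01:00:00  4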
pandas.Panel.bfill
pandas.Panel.bool
Panel.bool()
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. A ValueError is raised if the PandasObject does not have exactly 1 element, or if that element is not boolean.
pandas.Panel.clip
Assigns values outside boundary to boundary values. Thresholds can be singular values or array like, and
in the latter case the clipping is performed element-wise in the specified axis.
Parameters lower : float or array_like, default None
Minimum threshold value. All values below this threshold will be set to it.
upper : float or array_like, default None
Maximum threshold value. All values above this threshold will be set to it.
axis : int or string axis name, optional
Align object with lower and upper along the given axis.
inplace : boolean, default False
Whether to perform the operation in place on the data.
New in version 0.21.0.
*args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with
numpy.
Returns Series or DataFrame
Same type as calling object with the values outside the clip boundaries replaced
See also:
Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
>>> df.clip(-4, 6)
col_0 col_1
0 6 -2
1 -3 -4
2 0 6
3 -1 6
4 5 -4
Clips using specific lower and upper thresholds per column element:
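For example, clipping each row against a Series of per-row thresholds (t is introduced here for illustration, using the frame defined above):
>>> t = pd.Series([2, -4, -1, 6, 3])  # assumed per-row thresholds
>>> df.clip(t, t + 4, axis=0)
   col_0  col_1
0      6      2
1     -3     -4
2      0      3
3      6      8
4      5      3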
pandas.Panel.clip_lower
Series.clip Return copy of input with values below and above thresholds truncated.
Series.clip_upper Return copy of input with values above threshold truncated.
Examples
Series clipping element-wise using an array of thresholds. threshold should be the same length as the
Series.
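The frame used below is not shown in this excerpt; an assumed construction consistent with the output is:
>>> df = pd.DataFrame({'A': [1, 3, 5], 'B': [2, 4, 6]})  # assumed setup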
>>> df.clip_lower(3)
A B
0 3 3
1 3 4
2 5 6
Or to an array of values. By default, threshold should be the same shape as the DataFrame.
Control how threshold is broadcast with axis. In this case threshold should be the same length as the axis
specified by axis.
pandas.Panel.clip_upper
pandas.Panel.compound
pandas.Panel.conform
Panel.conform(frame, axis=’items’)
Conform input DataFrame to align with chosen axis pair.
Parameters
frame [DataFrame]
axis : {‘items’, ‘major’, ‘minor’}
Axis the input corresponds to. E.g., if axis=’major’, then the frame’s columns
would be items, and the index would be values of the minor axis
Returns
DataFrame
pandas.Panel.consolidate
Panel.consolidate(inplace=False)
Compute NDFrame with “consolidated” internals (data of each dtype grouped together in a single ndar-
ray).
Deprecated since version 0.20.0: Consolidate will be an internal implementation only.
pandas.Panel.convert_objects
pandas.Panel.copy
Panel.copy(deep=True)
Make a copy of this object’s indices and data.
When deep=True (default), a new object will be created with a copy of the calling object’s data and
indices. Modifications to the data or indices of the copy will not be reflected in the original object (see
notes below).
When deep=False, a new object will be created without copying the calling object’s data or index
(only references to the data and index are copied). Any changes to the data of the original will be reflected
in the shallow copy (and vice versa).
Parameters deep : bool, default True
Make a deep copy, including a copy of the data and the indices. With
deep=False neither the indices nor the data are copied.
Returns copy : Series, DataFrame or Panel
Object type matches caller.
Notes
When deep=True, data is copied but actual Python objects will not be copied recursively, only the
reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively
copies object data (see examples below).
While Index objects are copied when deep=True, the underlying numpy array is not copied for per-
formance reasons. Since Index is immutable, the underlying data can be safely shared and a copy is not
needed.
Examples
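The objects s, deep, and shallow are not constructed in this excerpt; an assumed setup consistent with the outputs below is:
>>> s = pd.Series([1, 2], index=['a', 'b'])  # assumed setup
>>> deep = s.copy()
>>> shallow = s.copy(deep=False)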
>>> s is shallow
False
>>> s.values is shallow.values and s.index is shallow.index
True
>>> s is deep
False
>>> s.values is deep.values or s.index is deep.index
False
Updates to the data shared by the shallow copy and the original are reflected in both; the deep copy remains unchanged.
>>> s[0] = 3
>>> shallow[1] = 4
>>> s
a 3
b 4
dtype: int64
>>> shallow
a 3
b 4
dtype: int64
>>> deep
a 1
b 2
dtype: int64
Note that when copying an object containing Python objects, a deep copy will copy the data, but will not
do so recursively. Updating a nested data object will be reflected in the deep copy.
pandas.Panel.count
Panel.count(axis=’major’)
Return number of observations over requested axis.
Parameters
axis [{‘items’, ‘major’, ‘minor’} or {0, 1, 2}]
Returns
count [DataFrame]
pandas.Panel.cummax
Examples
Series
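The Series s is not constructed in this excerpt; an assumed setup consistent with the outputs below is:
>>> s = pd.Series([2, np.nan, 5, -1, 0])  # assumed setup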
>>> s.cummax()
0 2.0
1 NaN
2 5.0
3 5.0
4 5.0
dtype: float64
>>> s.cummax(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None
or axis='index'.
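The frame here matches the one constructed later in the cumsum example; restated as an assumed setup:
>>> df = pd.DataFrame([[2.0, 1.0],   # assumed setup
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                   columns=list('AB'))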
>>> df.cummax()
A B
0 2.0 1.0
1 3.0 NaN
2 3.0 1.0
To iterate over columns and find the maximum in each row, use axis=1
>>> df.cummax(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 1.0
pandas.Panel.cummin
Examples
Series
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None
or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row, use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
pandas.Panel.cumprod
Examples
Series
>>> s.cumprod()
0 2.0
1 NaN
2 10.0
3 -10.0
4 -0.0
dtype: float64
>>> s.cumprod(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or
axis='index'.
>>> df.cumprod()
A B
0 2.0 1.0
1 6.0 NaN
2 6.0 0.0
To iterate over columns and find the product in each row, use axis=1
>>> df.cumprod(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 0.0
pandas.Panel.cumsum
Examples
Series
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or
axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row, use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
pandas.Panel.describe
percentiles : list-like of numbers, optional
The percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles.
include : ‘all’, list-like of dtypes or None (default), optional
A white list of data types to include in the result. Ignored for Series. Here are
the options:
• ‘all’ : All columns of the input will be included in the output.
• A list-like of dtypes : Limits the results to the provided data types. To limit the
result to numeric types submit numpy.number. To limit it instead to object
columns submit the numpy.object data type. Strings can also be used in
the style of select_dtypes (e.g. df.describe(include=['O'])).
To select pandas categorical columns, use 'category'
• None (default) : The result will include all numeric columns.
exclude : list-like of dtypes or None (default), optional,
A black list of data types to omit from the result. Ignored for Series. Here are
the options:
• A list-like of dtypes : Excludes the provided data types from the result. To
exclude numeric types submit numpy.number. To exclude object columns
submit the data type numpy.object. Strings can also be used in the style of
select_dtypes (e.g. df.describe(include=['O'])). To exclude
pandas categorical columns, use 'category'
• None (default) : The result will exclude nothing.
Returns
summary: Series/DataFrame of summary statistics
See also:
DataFrame.count, DataFrame.max, DataFrame.min, DataFrame.mean, DataFrame.
std, DataFrame.select_dtypes
Notes
For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and
upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile
is the same as the median.
For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and
freq. The top is the most common value. The freq is the most common value’s frequency. Times-
tamps also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen
from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric
columns. If the dataframe consists only of object and categorical data without any numeric columns,
the default is to return an analysis of both the object and categorical columns. If include='all' is
provided as an option, the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed
for the output. The parameters are ignored when analyzing a Series.
Examples
>>> s = pd.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
>>> s.describe()
count 3
unique 2
top 2010-01-01 00:00:00
freq 2
first 2000-01-01 00:00:00
last 2010-01-01 00:00:00
dtype: object
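The frame analyzed in the remaining examples is assumed to combine categorical, numeric, and object columns, e.g.:
>>> df = pd.DataFrame({'categorical': pd.Categorical(['d', 'e', 'f']),  # assumed setup
...                    'numeric': [1, 2, 3],
...                    'object': ['a', 'b', 'c']})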
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 NaN 3
top f NaN c
freq 1 NaN 1
mean NaN 2.0 NaN
std NaN 1.0 NaN
min NaN 1.0 NaN
25% NaN 1.5 NaN
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top c
freq 1
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top              f
freq             1
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top f c
freq 1 1
>>> df.describe(exclude=[np.object])
categorical numeric
count 3 3.0
unique 3 NaN
top f NaN
freq 1 NaN
mean NaN 2.0
std NaN 1.0
min NaN 1.0
25% NaN 1.5
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
pandas.Panel.div
Panel.div(other, axis=0)
Floating division of series and other, element-wise (binary operator truediv). Equivalent to panel /
other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rtruediv
pandas.Panel.divide
Panel.divide(other, axis=0)
Floating division of series and other, element-wise (binary operator truediv). Equivalent to panel /
other.
Parameters
pandas.Panel.dropna
pandas.Panel.eq
Panel.eq(other, axis=None)
Wrapper for comparison method eq
pandas.Panel.equals
Panel.equals(other)
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered
equal.
pandas.Panel.ffill
pandas.Panel.fillna
reindex, asfreq
Examples
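The frame filled below is not constructed in this excerpt; an assumed setup consistent with the outputs is:
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],   # assumed setup
...                    [3, 4, np.nan, 1],
...                    [np.nan, np.nan, np.nan, 5],
...                    [np.nan, 3, np.nan, 4]],
...                   columns=list('ABCD'))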
>>> df.fillna(0)
A B C D
0 0.0 2.0 0.0 0
1 3.0 4.0 0.0 1
2 0.0 0.0 0.0 5
3 0.0 3.0 0.0 4
>>> df.fillna(method='ffill')
A B C D
0 NaN 2.0 NaN 0
1 3.0 4.0 NaN 1
2 3.0 4.0 NaN 5
3 3.0 3.0 NaN 4
Replace all NaN elements in columns 'A', 'B', 'C', and 'D' with 0, 1, 2, and 3 respectively.
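A sketch of that call, using the frame assumed above (the dictionary of per-column values is illustrative):
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}  # assumed example
>>> df.fillna(value=values)
     A    B    C  D
0  0.0  2.0  2.0  0
1  3.0  4.0  2.0  1
2  0.0  1.0  2.0  5
3  0.0  3.0  2.0  4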
pandas.Panel.filter
Notes
The items, like, and regex parameters are enforced to be mutually exclusive.
axis defaults to the info axis that is used when indexing with [].
Examples
>>> df
one two three
mouse 1 2 3
rabbit 4 5 6
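For instance, selecting columns by name from the frame shown above (the call itself is illustrative):
>>> df.filter(items=['one', 'three'])  # assumed example
        one  three
mouse     1      3
rabbit    4      6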
pandas.Panel.first
Panel.first(offset)
Convenience method for subsetting initial periods of time series data based on a date offset.
Parameters
offset [string, DateOffset, dateutil.relativedelta]
Returns
subset [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
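The frame ts is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')  # assumed setup
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
            A
2018-04-09  1
2018-04-11  2
2018-04-13  3
2018-04-15  4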
>>> ts.first('3D')
A
2018-04-09 1
2018-04-11 2
Notice that data for the first 3 calendar days was returned, not the first 3 days observed in the dataset, and therefore data for 2018-04-13 was not returned.
pandas.Panel.first_valid_index
Panel.first_valid_index()
Return index for first non-NA/null value.
Returns
scalar [type of index]
Notes
If all elements are non-NA/null, returns None. Also returns None for empty NDFrame.
pandas.Panel.floordiv
Panel.floordiv(other, axis=0)
Integer division of series and other, element-wise (binary operator floordiv). Equivalent to panel //
other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rfloordiv
pandas.Panel.fromDict
pandas.Panel.from_dict
pandas.Panel.ge
Panel.ge(other, axis=None)
Wrapper for comparison method ge
pandas.Panel.get
Panel.get(key, default=None)
Get item from object for given key (DataFrame column, Panel slice, etc.). Returns default value if not
found.
Parameters
key [object]
Returns
value [type of items contained in object]
pandas.Panel.get_dtype_counts
Panel.get_dtype_counts()
Return counts of unique dtypes in this object.
Returns dtype : Series
Series with the count of columns with each dtype.
See also:
Examples
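The frame used below is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> a = [['a', 1, 1.0], ['b', 2, 2.0]]  # assumed setup
>>> df = pd.DataFrame(a)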
>>> df.get_dtype_counts()
float64 1
int64 1
object 1
dtype: int64
pandas.Panel.get_ftype_counts
Panel.get_ftype_counts()
Return counts of unique ftypes in this object.
Deprecated since version 0.23.0.
This is useful for SparseDataFrame or for DataFrames containing sparse arrays.
Returns dtype : Series
Series with the count of columns with each type and sparsity (dense/sparse)
See also:
Examples
>>> df.get_ftype_counts()
float64:dense 1
int64:dense 1
object:dense 1
dtype: int64
pandas.Panel.get_value
Panel.get_value(*args, **kwargs)
Quickly retrieve single value at (item, major, minor) location
Deprecated since version 0.21.0.
Please use .at[] or .iat[] accessors.
Parameters
item [item label (panel item)]
major [major axis label (panel item row)]
minor [minor axis label (panel item column)]
takeable [interpret the passed labels as indexers, default False]
Returns
value [scalar value]
pandas.Panel.get_values
Panel.get_values()
Return an ndarray after converting sparse values to dense.
This is the same as .values for non-sparse data. For sparse data contained in a pandas.SparseArray,
the data are first converted to a dense representation.
Returns numpy.ndarray
Numpy representation of DataFrame
See also:
Examples
>>> df.get_values()
array([[1, True, 1.0], [2, False, 2.0]], dtype=object)
>>> df.get_values()
array([[ 1., 1.],
[nan, 2.],
[nan, 3.]])
pandas.Panel.groupby
Panel.groupby(function, axis=’major’)
Group data on given axis, returning GroupBy object
Parameters function : callable
Mapping function for the chosen axis.
Returns
grouped [PanelGroupBy]
pandas.Panel.gt
Panel.gt(other, axis=None)
Wrapper for comparison method gt
pandas.Panel.head
Panel.head(n=5)
Return the first n rows.
This function returns the first n rows for the object based on position. It is useful for quickly testing if
your object has the right type of data in it.
Parameters n : int, default 5
Number of rows to select.
Returns obj_head : type of caller
The first n rows of the caller object.
See also:
Examples
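The frame used below is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> df = pd.DataFrame({'animal': ['alligator', 'bee', 'falcon', 'lion',  # assumed setup
...                               'monkey', 'parrot', 'shark', 'whale',
...                               'zebra']})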
>>> df.head()
animal
0 alligator
1 bee
2 falcon
3 lion
4 monkey
>>> df.head(3)
animal
0 alligator
1 bee
2 falcon
pandas.Panel.infer_objects
Panel.infer_objects()
Attempt to infer better dtypes for object columns.
Attempts soft conversion of object-dtyped columns, leaving non-object and unconvertible columns un-
changed. The inference rules are the same as during normal Series/DataFrame construction.
New in version 0.21.0.
Returns
converted [same type as input object]
See also:
Examples
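The frame used below is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})  # assumed setup
>>> df = df.iloc[1:]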
>>> df.dtypes
A object
dtype: object
>>> df.infer_objects().dtypes
A int64
dtype: object
pandas.Panel.interpolate
• ‘linear’: ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes, and the default.
• ‘time’: interpolation works on daily and higher resolution data to interpolate
given length of interval
• ‘index’, ‘values’: use the actual numerical values of the index
• ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘barycentric’, ‘polyno-
mial’ is passed to scipy.interpolate.interp1d. Both ‘poly-
nomial’ and ‘spline’ require that you also specify an order (int), e.g.
df.interpolate(method=’polynomial’, order=4). These use the actual numeri-
cal values of the index.
• ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’ are all wrap-
pers around the scipy interpolation methods of similar names. These use the
actual numerical values of the index. For more information on their behavior,
see the scipy documentation and tutorial documentation
• ‘from_derivatives’ refers to BPoly.from_derivatives which replaces ‘piece-
wise_polynomial’ interpolation method in scipy 0.18
New in version 0.18.1: Added support for the ‘akima’ method Added interpolate
method ‘from_derivatives’ which replaces ‘piecewise_polynomial’ in scipy 0.18;
backwards-compatible with scipy < 0.18
axis : {0, 1}, default 0
• 0: fill column-by-column
• 1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill. Must be greater than 0.
Returns
Series or DataFrame of same shape interpolated at the NaNs
See also:
reindex, replace, fillna
Examples
Filling in NaNs
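A minimal sketch of linear interpolation over a gap (an assumed example):
>>> s = pd.Series([0, 1, np.nan, 3])  # assumed example
>>> s.interpolate()
0    0.0
1    1.0
2    2.0
3    3.0
dtype: float64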
pandas.Panel.isna
Panel.isna()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.
use_inf_as_na = True).
Returns NDFrame
Mask of bool values for each element in NDFrame that indicates whether an
element is not an NA value.
See also:
Examples
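The objects df and ser used below are not constructed in this excerpt; an assumed setup consistent with the outputs is:
>>> df = pd.DataFrame({'age': [5, 6, np.NaN],   # assumed setup
...                    'born': [pd.NaT, pd.Timestamp('1939-05-27'),
...                             pd.Timestamp('1940-04-25')],
...                    'name': ['Alfred', 'Batman', ''],
...                    'toy': [None, 'Batmobile', 'Joker']})
>>> ser = pd.Series([5, 6, np.NaN])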
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
pandas.Panel.isnull
Panel.isnull()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.
use_inf_as_na = True).
Returns NDFrame
Mask of bool values for each element in NDFrame that indicates whether an
element is not an NA value.
See also:
Examples
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
pandas.Panel.iteritems
Panel.iteritems()
Iterate over (label, values) on info axis
This is index for Series, columns for DataFrame, major_axis for Panel, and so on.
pandas.Panel.join
pandas.Panel.keys
Panel.keys()
Get the ‘info axis’ (see Indexing for more)
This is index for Series, columns for DataFrame and major_axis for Panel.
pandas.Panel.kurt
pandas.Panel.kurtosis
pandas.Panel.last
Panel.last(offset)
Convenience method for subsetting final periods of time series data based on a date offset.
Parameters
offset [string, DateOffset, dateutil.relativedelta]
Returns
subset [type of caller]
Raises TypeError
If the index is not a DatetimeIndex
See also:
Examples
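The frame ts is not constructed in this excerpt; an assumed setup consistent with the output is:
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')  # assumed setup
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)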
>>> ts.last('3D')
A
2018-04-13 3
2018-04-15 4
Notice that data for the last 3 calendar days was returned, not the last 3 observed days in the dataset, and therefore data for 2018-04-11 was not returned.
pandas.Panel.last_valid_index
Panel.last_valid_index()
Return index for last non-NA/null value.
Returns
scalar [type of index]
Notes
If all elements are non-NA/null, returns None. Also returns None for empty NDFrame.
pandas.Panel.le
Panel.le(other, axis=None)
Wrapper for comparison method le
pandas.Panel.lt
Panel.lt(other, axis=None)
Wrapper for comparison method lt
pandas.Panel.mad
pandas.Panel.major_xs
Panel.major_xs(key)
Return slice of panel along major axis
Notes
pandas.Panel.mask
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is False the element is used; otherwise the corresponding element from the DataFrame other
is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4    4.0
dtype: float64
pandas.Panel.max
Parameters
axis [{items (0), major_axis (1), minor_axis (2)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a DataFrame
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns
max [DataFrame or Panel (if level specified)]
pandas.Panel.mean
pandas.Panel.median
pandas.Panel.min
Parameters
axis [{items (0), major_axis (1), minor_axis (2)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a DataFrame
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns
min [DataFrame or Panel (if level specified)]
pandas.Panel.minor_xs
Panel.minor_xs(key)
Return slice of panel along minor axis
Parameters key : object
Minor axis label
Returns y : DataFrame
index -> major axis, columns -> items
Notes
pandas.Panel.mod
Panel.mod(other, axis=0)
Modulo of series and other, element-wise (binary operator mod). Equivalent to panel % other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rmod
pandas.Panel.mul
Panel.mul(other, axis=0)
Multiplication of series and other, element-wise (binary operator mul). Equivalent to panel * other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rmul
pandas.Panel.multiply
Panel.multiply(other, axis=0)
Multiplication of series and other, element-wise (binary operator mul). Equivalent to panel * other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rmul
pandas.Panel.ne
Panel.ne(other, axis=None)
Wrapper for comparison method ne
pandas.Panel.notna
Panel.notna()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
Returns NDFrame
Mask of bool values for each element in NDFrame that indicates whether an
element is not an NA value.
See also:
Examples
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
pandas.Panel.notnull
Panel.notnull()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
Returns NDFrame
Mask of bool values for each element in NDFrame that indicates whether an
element is not an NA value.
See also:
Examples
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
pandas.Panel.pct_change
Examples
Series
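The Series s is not constructed in this excerpt; an assumed setup consistent with the first two outputs is:
>>> s = pd.Series([90, 91, 85])  # assumed setup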
>>> s.pct_change()
0 NaN
1 0.011111
2 -0.065934
dtype: float64
>>> s.pct_change(periods=2)
0 NaN
1 NaN
2 -0.055556
dtype: float64
See the percentage change in a Series where filling NAs with last valid observation forward to next valid.
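The fill_method example below evidently uses a series with a missing value; an assumed construction matching its output is:
>>> s = pd.Series([90, 91, None, 85])  # assumed setup for this example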
>>> s.pct_change(fill_method='ffill')
0 NaN
1 0.011111
2 0.000000
3 -0.065934
dtype: float64
DataFrame
Percentage change in French franc, Deutsche Mark, and Italian lira from 1980-01-01 to 1980-03-01.
>>> df = pd.DataFrame({
... 'FR': [4.0405, 4.0963, 4.3149],
... 'GR': [1.7246, 1.7482, 1.8519],
... 'IT': [804.74, 810.01, 860.13]},
... index=['1980-01-01', '1980-02-01', '1980-03-01'])
>>> df
FR GR IT
1980-01-01 4.0405 1.7246 804.74
1980-02-01 4.0963 1.7482 810.01
1980-03-01 4.3149 1.8519 860.13
>>> df.pct_change()
FR GR IT
1980-01-01 NaN NaN NaN
1980-02-01 0.013810 0.013684 0.006549
1980-03-01 0.053365 0.059318 0.061876
Percentage change in GOOG and APPL stock volume. Shows computing the percentage change between columns.
>>> df = pd.DataFrame({
... '2016': [1769950, 30586265],
... '2015': [1500923, 40912316],
... '2014': [1371819, 41403351]},
... index=['GOOG', 'APPL'])
>>> df
2016 2015 2014
GOOG 1769950 1500923 1371819
APPL 30586265 40912316 41403351
>>> df.pct_change(axis='columns')
2016 2015 2014
GOOG NaN -0.151997 -0.086016
APPL NaN 0.337604 0.012002
pandas.Panel.pipe
Notes
Use .pipe when chaining together functions that expect Series, DataFrames or GroupBy objects. Instead of writing
>>> f(g(h(df), arg1=a), arg2=b, arg3=c)
you can write
>>> (df.pipe(h)
...    .pipe(g, arg1=a)
...    .pipe(f, arg2=b, arg3=c)
... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which
keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
pandas.Panel.pop
Panel.pop(item)
Return item and drop from frame. Raise KeyError if not found.
Parameters item : str
Column label to be popped
Returns
popped [Series]
Examples
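The frame used below is not constructed in this excerpt; an assumed setup consistent with the outputs is:
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),   # assumed setup
...                    ('parrot', 'bird', 24.0),
...                    ('lion', 'mammal', 80.5),
...                    ('monkey', 'mammal', np.nan)],
...                   columns=('name', 'class', 'max_speed'))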
>>> df.pop('class')
0 bird
1 bird
2 mammal
3 mammal
Name: class, dtype: object
>>> df
name max_speed
0 falcon 389.0
1 parrot 24.0
2 lion 80.5
3 monkey NaN
pandas.Panel.pow
Panel.pow(other, axis=0)
Exponential power of series and other, element-wise (binary operator pow). Equivalent to panel **
other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rpow
pandas.Panel.prod
Examples
>>> pd.Series([]).prod()
1.0
>>> pd.Series([]).prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
pandas.Panel.product
Examples
>>> pd.Series([]).prod()
1.0
>>> pd.Series([]).prod(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
pandas.Panel.radd
Panel.radd(other, axis=0)
Addition of series and other, element-wise (binary operator radd). Equivalent to other + panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.add
pandas.Panel.rank
pandas.Panel.rdiv
Panel.rdiv(other, axis=0)
Floating division of series and other, element-wise (binary operator rtruediv). Equivalent to other /
panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.truediv
pandas.Panel.reindex
Panel.reindex(*args, **kwargs)
Conform Panel to new index with optional filling logic, placing NA/NaN in locations having no value in
the previous index. A new object is produced unless the new index is equivalent to the current one and
copy=False
Parameters items, major_axis, minor_axis : array-like, optional (should be specified using
keywords)
New labels / index to conform to. Preferably an Index object to avoid duplicating
data
method : {None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}, optional
method to use for filling holes in reindexed DataFrame. Please note: this is only
applicable to DataFrames/Series with a monotonically increasing/decreasing in-
dex.
• default: don’t fill gaps
• pad / ffill: propagate last valid observation forward to next valid
• backfill / bfill: use next valid observation to fill gap
• nearest: use nearest valid observations to fill gap
copy : boolean, default True
Return a new object, even if the passed indexes are the same
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level
fill_value : scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any “compatible”
value
limit : int, default None
Maximum number of consecutive elements to forward or backward fill
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance to all values,
or list-like, which applies variable tolerance per element. List-like includes list,
tuple, array, Series, and must be the same size as the index and its dtype must
exactly match the index’s type.
New in version 0.21.0: (list-like tolerance)
Returns
reindexed [Panel]
Examples
Create a new index and reindex the dataframe. By default values in the new index that do not have
corresponding records in the dataframe are assigned NaN.
We can fill in the missing values by passing a value to the keyword fill_value. Because the index is
not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the
NaN values.
To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically
increasing index (for example, a sequence of dates).
The index entries that did not have a value in the original data frame (for example, ‘2009-12-29’) are by
default filled with NaN. If desired, we can fill in the missing values using one of several options.
For example, to backpropagate the last valid value to fill the NaN values, pass bfill as an argument to
the method keyword.
Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be
filled by any of the value propagation schemes. This is because filling while reindexing does not look at
dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN
values present in the original dataframe, use the fillna() method.
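The code for these examples is not reproduced in this extract. A minimal sketch of the date-based case described above (the column name prices and the exact dates are illustrative assumptions; numpy is assumed imported as np):
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({'prices': [100, 101, np.nan, 100, 89, 88]},
...                    index=date_index)
>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2, method='bfill')
            prices
2009-12-29   100.0
2009-12-30   100.0
2009-12-31   100.0
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0
2010-01-07     NaN
Note that the NaN at 2010-01-03 is preserved, and the trailing 2010-01-07 label, which has no later label to fill from, remains NaN.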
See the user guide for more.
pandas.Panel.reindex_axis
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the equation
abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance to all values,
or list-like, which applies variable tolerance per element. List-like includes list,
tuple, array, Series, and must be the same size as the index and its dtype must
exactly match the index’s type.
New in version 0.21.0: (list-like tolerance)
Returns
reindexed [Panel]
See also:
reindex, reindex_like
Examples
pandas.Panel.reindex_like
Notes
pandas.Panel.rename
Examples
Since DataFrame doesn’t have a .name attribute, only mapping-type arguments are allowed.
pandas.Panel.rename_axis
Notes
Prior to version 0.21.0, rename_axis could also be used to change the axis labels by passing a mapping
or scalar. This behavior is deprecated and will be removed in a future version. Use rename instead.
Examples
Series
DataFrame
pandas.Panel.replace
Values of the NDFrame are replaced with other values dynamically. This differs from updating with .loc
or .iloc, which require you to specify a location to update with some value.
Parameters to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
• numeric, str or regex:
– numeric: numeric values equal to to_replace will be replaced with value
– str: string exactly matching to_replace will be replaced with value
– regex: regexs matching to_replace will be replaced with value
• list of str, regex, or numeric:
– First, if to_replace and value are both lists, they must be the same length.
– Second, if regex=True then all of the strings in both lists will be in-
terpreted as regexs otherwise they will match directly. This doesn’t matter
much for value since there are only a few possible substitution regexes you
can use.
– str, regex and numeric rules apply as above.
• dict:
– Dicts can be used to specify different replacement values for different ex-
isting values. For example, {'a': 'b', 'y': 'z'} replaces the
value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way the value param-
eter should be None.
– For a DataFrame a dict can specify that different values should be replaced
in different columns. For example, {'a': 1, 'b': 'z'} looks for
the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these
values with whatever is specified in value. The value parameter should not
be None in this case. You can treat this as a special case of passing two lists
except that you are specifying the column to search in.
– For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}},
are read as follows: look in column ‘a’ for the value ‘b’ and replace it with
NaN. The value parameter should be None to use a nested dict in this way.
You can nest regular expressions as well. Note that column names (the top-
level dictionary keys in a nested dictionary) cannot be regular expressions.
• None:
– This means that the regex argument must be a string, compiled regular ex-
pression, or list, dict, ndarray or Series of such elements. If value is also
None then this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with. For a DataFrame a dict of
values can be used to specify which value to use for each column (columns not in
the dict will not be filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : boolean, default False
If True, in place. Note: this will modify any other views on this object (e.g. a
column from a DataFrame). Returns the caller if this is True.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular expressions. If this is
True then to_replace must be a string. Alternatively, this could be a regular
expression or a list, dict, or array of regular expressions in which case to_replace
must be None.
method : {‘pad’, ‘ffill’, ‘bfill’, None}
The method to use for replacement when to_replace is a scalar, list or tuple
and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns NDFrame
Object after replacement.
Raises AssertionError
• If regex is not a bool and to_replace is not None.
TypeError
• If to_replace is a dict and value is not a list, dict, ndarray, or Series
• If to_replace is None and regex is not compilable into a regular expression or is a list,
dict, ndarray, or Series.
• When replacing multiple bool or datetime64 objects and the arguments to
to_replace do not match the type of the value being replaced
ValueError
• If a list or an ndarray is passed to to_replace and value but they are not the same
length.
See also:
Notes
• Regex substitution is performed under the hood with re.sub. The rules for substitution for re.
sub are the same.
• Regular expressions will only substitute on strings, meaning you cannot provide, for example, a
regular expression matching floating point numbers and expect the columns in your frame that have
a numeric dtype to be matched. However, if those floating point numbers are strings, then you can
do this.
• This method has a lot of options. You are encouraged to experiment and play with this method to
gain intuition about how it works.
• When dict is used as the to_replace value, the keys of the dict are treated as the
to_replace part and the values of the dict as the value parameter.
Examples
List-like to_replace
dict-like to_replace
Note that when replacing multiple bool or datetime64 objects, the data types in the to_replace pa-
rameter must match the data type of the value being replaced:
This raises a TypeError because one of the dict keys is not of the correct type for replacement.
Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to under-
stand the peculiarities of the to_replace parameter:
When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value
parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a':
None}, value=None, method=None):
When value=None and to_replace is a scalar, list or tuple, replace uses the method parameter (default
‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2
and ‘b’ in row 4 in this case. The command s.replace('a', None) is actually equivalent to s.
replace(to_replace='a', value=None, method='pad'):
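The code for these examples is not reproduced in this extract; a sketch consistent with the description above, assuming the Series in question was s = pd.Series([10, 'a', 'a', 'b', 'a']), would be:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
>>> s.replace({'a': None})
0      10
1    None
2    None
3       b
4    None
dtype: object
>>> s.replace('a', None)
0    10
1    10
2    10
3     b
4     b
dtype: object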
pandas.Panel.resample
Notes
Examples
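The Series used in the following examples is not constructed in this extract; a setup consistent with the outputs shown would be:
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)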
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the
left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels.
For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the
summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if
it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval
as illustrated in the example below this one.
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
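The next example applies a custom aggregation over each bin. custom_resampler is not defined in this extract; a definition consistent with the output below (the bin sum plus 5, assuming numpy imported as np) would be:
>>> def custom_resampler(array_like):
...     return np.sum(array_like) + 5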
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or
end of rule.
Resample by month using ‘start’ convention. Values are assigned to the first month of the period.
Resample by month using ‘end’ convention. Values are assigned to the last month of the period.
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resam-
pling.
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the
resampling needs to take place.
pandas.Panel.rfloordiv
Panel.rfloordiv(other, axis=0)
Integer division of series and other, element-wise (binary operator rfloordiv). Equivalent to other //
panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.floordiv
pandas.Panel.rmod
Panel.rmod(other, axis=0)
Modulo of series and other, element-wise (binary operator rmod). Equivalent to other % panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.mod
pandas.Panel.rmul
Panel.rmul(other, axis=0)
Multiplication of series and other, element-wise (binary operator rmul). Equivalent to other * panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.mul
pandas.Panel.round
See also:
numpy.around
pandas.Panel.rpow
Panel.rpow(other, axis=0)
Exponential power of series and other, element-wise (binary operator rpow). Equivalent to other **
panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.pow
pandas.Panel.rsub
Panel.rsub(other, axis=0)
Subtraction of series and other, element-wise (binary operator rsub). Equivalent to other - panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.sub
pandas.Panel.rtruediv
Panel.rtruediv(other, axis=0)
Floating division of series and other, element-wise (binary operator rtruediv). Equivalent to other /
panel.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.truediv
pandas.Panel.sample
Examples
pandas.Panel.select
Panel.select(crit, axis=0)
Return data corresponding to axis labels matching criteria
Deprecated since version 0.21.0: Use df.loc[df.index.map(crit)] to select via labels
Parameters crit : function
To be called on each index (label). Should return True or False
axis [int]
Returns
selection [type of caller]
pandas.Panel.sem
pandas.Panel.set_axis
Examples
Series
>>> s
0 1
1 2
2 3
dtype: int64
DataFrame
pandas.Panel.set_value
Panel.set_value(*args, **kwargs)
Quickly set single value at (item, major, minor) location
Deprecated since version 0.21.0.
Please use .at[] or .iat[] accessors.
Parameters
item [item label (panel item)]
major [major axis label (panel item row)]
minor [minor axis label (panel item column)]
value [scalar]
takeable [interpret the passed labels as indexers, default False]
Returns panel : Panel
If label combo is contained, will be reference to calling Panel, otherwise a new
object
pandas.Panel.shift
Returns
shifted [Panel]
pandas.Panel.skew
pandas.Panel.slice_shift
Panel.slice_shift(periods=1, axis=0)
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters periods : int
Number of periods to move, can be positive or negative
Returns
shifted [same type as caller]
Notes
While the slice_shift is faster than shift, you may pay for it later during alignment.
pandas.Panel.sort_index
pandas.Panel.sort_values
pandas.Panel.squeeze
Panel.squeeze(axis=None)
Squeeze length 1 dimensions.
Parameters axis : None, integer or string axis name, optional
The axis to squeeze if 1-sized.
New in version 0.20.0.
Returns
scalar if 1-sized, else original object
pandas.Panel.std
pandas.Panel.sub
Panel.sub(other, axis=0)
Subtraction of series and other, element-wise (binary operator sub). Equivalent to panel - other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rsub
pandas.Panel.subtract
Panel.subtract(other, axis=0)
Subtraction of series and other, element-wise (binary operator sub). Equivalent to panel - other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rsub
pandas.Panel.sum
Examples
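The default-case example is not reproduced in this extract; by default the sum of an empty Series is 0.0:
>>> pd.Series([]).sum()
0.0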
This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty
series to be NaN, pass min_count=1.
>>> pd.Series([]).sum(min_count=1)
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).sum()
0.0
>>> pd.Series([np.nan]).sum(min_count=1)
nan
pandas.Panel.swapaxes
pandas.Panel.swaplevel
pandas.Panel.tail
Panel.tail(n=5)
Return the last n rows.
This function returns the last n rows from the object based on position. It is useful for quickly verifying data,
for example, after sorting or appending rows.
Parameters n : int, default 5
Number of rows to select.
Returns type of caller
The last n rows of the caller object.
See also:
Examples
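The DataFrame used below is not constructed in this extract; the outputs are consistent with, for example:
>>> df = pd.DataFrame({'animal': ['alligator', 'bee', 'falcon', 'lion',
...                               'monkey', 'parrot', 'shark', 'whale',
...                               'zebra']})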
>>> df.tail()
animal
4 monkey
5 parrot
6 shark
7 whale
8 zebra
>>> df.tail(3)
animal
6 shark
7 whale
8 zebra
pandas.Panel.take
Examples
We may take elements using negative integers for positive indices, starting from the end of the object, just
like with Python lists.
pandas.Panel.to_clipboard
Field delimiter.
**kwargs
These parameters will be passed to DataFrame.to_csv.
See also:
Notes
Examples
We can omit the index by passing the keyword index and setting it to False.
pandas.Panel.to_dense
Panel.to_dense()
Return dense representation of NDFrame (as opposed to sparse)
pandas.Panel.to_excel
Notes
Keyword arguments (and na_rep) are passed to the to_excel method for each DataFrame written.
pandas.Panel.to_frame
Panel.to_frame(filter_observations=True)
Transform wide format into long (stacked) format as DataFrame whose columns are the Panel’s items and
whose index is a MultiIndex formed of the Panel’s major and minor axes.
Parameters filter_observations : boolean, default True
Drop (major, minor) pairs without a complete set of observations across all the
items
Returns
y [DataFrame]
pandas.Panel.to_hdf
Hierarchical Data Format (HDF) is self-describing, allowing an application to interpret the structure and
contents of a file with no outside information. One HDF file can hold a mix of related objects which can
be accessed as a group or as individual objects.
In order to add another DataFrame or Series to an existing HDF file please use append mode and a
different key.
For more information see the user guide.
Parameters path_or_buf : str or pandas.HDFStore
File path or HDFStore object.
key : str
Identifier for the group in the store.
mode : {‘a’, ‘w’, ‘r+’}, default ‘a’
Mode to open file:
• ‘w’: write, a new file is created (an existing file with the same name would be
deleted).
• ‘a’: append, an existing file is opened for reading and writing, and if the file
does not exist it is created.
• ‘r+’: similar to ‘a’, but the file must already exist.
format : {‘fixed’, ‘table’}, default ‘fixed’
Possible values:
• ‘fixed’: Fixed format. Fast writing/reading. Not-appendable, nor searchable.
• ‘table’: Table format. Write as a PyTables Table structure which may perform
worse but allow more flexible operations like searching / selecting subsets of
the data.
append : bool, default False
For Table formats, append the input data to the existing.
data_columns : list of columns or True, optional
List of columns to create as indexed data columns for on-disk queries, or True to
use all columns. By default only the axes of the object are indexed. See Query
via Data Columns. Applicable only to format=’table’.
complevel : {0-9}, optional
Specifies a compression level for data. A value of 0 disables compression.
complib : {‘zlib’, ‘lzo’, ‘bzip2’, ‘blosc’}, default ‘zlib’
Specifies the compression library to be used. As of v0.20.2 these addi-
tional compressors for Blosc are supported (default if no compressor speci-
fied: ‘blosc:blosclz’): {‘blosc:blosclz’, ‘blosc:lz4’, ‘blosc:lz4hc’, ‘blosc:snappy’,
‘blosc:zlib’, ‘blosc:zstd’}. Specifying a compression library which is not avail-
able issues a ValueError.
fletcher32 : bool, default False
If applying compression use the fletcher32 checksum.
dropna : bool, default False
Examples
pandas.Panel.to_json
If ‘orient’ is ‘records’ write out line delimited json format. Will throw ValueError
if incorrect ‘orient’ since others are not list like.
New in version 0.19.0.
compression : {None, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’}
A string representing the compression to use in the output file, only used when
the first argument is a filename.
New in version 0.21.0.
index : boolean, default True
Whether to include the index values in the JSON string. Not including the index
(index=False) is only supported when orient is ‘split’ or ‘table’.
New in version 0.23.0.
See also:
pandas.read_json
Examples
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not pre-
served with this encoding.
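The DataFrame used below is not constructed in this extract; a construction consistent with the output would be:
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
...                   index=['row 1', 'row 2'],
...                   columns=['col 1', 'col 2'])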
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
pandas.Panel.to_latex
pandas.Panel.to_msgpack
pandas.Panel.to_pickle
read_pickle Load pickled pandas object (or any object) from file.
DataFrame.to_hdf Write DataFrame to an HDF5 file.
DataFrame.to_sql Write DataFrame to a SQL database.
DataFrame.to_parquet Write a DataFrame to the binary parquet format.
Examples
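The round trip that precedes the clean-up step below is not reproduced in this extract; a sketch (the file name dummy.pkl matches the clean-up call) would be:
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df.to_pickle("./dummy.pkl")
>>> unpickled_df = pd.read_pickle("./dummy.pkl")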
>>> import os
>>> os.remove("./dummy.pkl")
pandas.Panel.to_sparse
Panel.to_sparse(*args, **kwargs)
NOT IMPLEMENTED: do not call this method, as sparsifying is not supported for Panel objects and will
raise an error.
Convert to SparsePanel
pandas.Panel.to_sql
References
[R22], [R23]
Examples
Specify the dtype (especially useful for integers with missing values). Notice that while pandas is forced
to store the data as floating point, the database supports nullable integers. When fetching the data with
Python, we get back integer scalars.
pandas.Panel.to_xarray
Panel.to_xarray()
Return an xarray object from the pandas object.
Returns
a DataArray for a Series
a Dataset for a DataFrame
a DataArray for higher dims
Notes
Examples
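The DataFrame used in the first example is not constructed in this extract; a setup consistent with the output would be:
>>> df = pd.DataFrame({'A': [1, 1, 2],
...                    'B': ['foo', 'bar', 'foo'],
...                    'C': [4.0, 5.0, 6.0]})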
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 3)
Coordinates:
* index (index) int64 0 1 2
Data variables:
A (index) int64 1 1 2
B (index) object 'foo' 'bar' 'foo'
C (index) float64 4.0 5.0 6.0
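The second call below is consistent with the same frame after its index has been set to a MultiIndex; the intervening step, not shown in this extract, would be:
>>> df = df.set_index(['B', 'A'])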
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (A: 2, B: 2)
Coordinates:
* B (B) object 'bar' 'foo'
* A (A) int64 1 2
Data variables:
C (B, A) float64 5.0 nan 4.0 6.0
>>> p = pd.Panel(np.arange(24).reshape(4,3,2),
items=list('ABCD'),
major_axis=pd.date_range('20130101', periods=3),
minor_axis=['first', 'second'])
>>> p
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 3 (major_axis) x 2 (minor_axis)
Items axis: A to D
Major_axis axis: 2013-01-01 00:00:00 to 2013-01-03 00:00:00
Minor_axis axis: first to second
>>> p.to_xarray()
<xarray.DataArray (items: 4, major_axis: 3, minor_axis: 2)>
array([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
...
pandas.Panel.transpose
Panel.transpose(*args, **kwargs)
Permute the dimensions of the Panel
Parameters
args [three positional arguments: each one of]
{0, 1, 2, ‘items’, ‘major_axis’, ‘minor_axis’}
copy [boolean, default False] Make a copy of the underlying data. Mixed-dtype
data will always result in a copy
Returns
y [same as input]
Examples
>>> p.transpose(2, 0, 1)
>>> p.transpose(2, 0, 1, copy=True)
pandas.Panel.truediv
Panel.truediv(other, axis=0)
Floating division of series and other, element-wise (binary operator truediv). Equivalent to panel /
other.
Parameters
other [DataFrame or Panel]
axis : {items, major_axis, minor_axis}
Axis to broadcast over
Returns
Panel
See also:
Panel.rtruediv
pandas.Panel.truncate
Notes
If the index being truncated contains only datetime values, before and after may be specified as strings
instead of Timestamps.
Examples
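The DataFrame used below is not constructed in this extract; the outputs are consistent with a single column of ones on a second-frequency DatetimeIndex, for example:
>>> dates = pd.date_range('2016-01-01', '2016-02-01', freq='s')
>>> df = pd.DataFrame(index=dates, data={'A': 1})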
>>> df.truncate(before=pd.Timestamp('2016-01-05'),
... after=pd.Timestamp('2016-01-10')).tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Because the index is a DatetimeIndex containing only dates, we can specify before and after as strings.
They will be coerced to Timestamps before truncation.
>>> df.truncate('2016-01-05', '2016-01-10').tail()
A
2016-01-09 23:59:56 1
2016-01-09 23:59:57 1
2016-01-09 23:59:58 1
2016-01-09 23:59:59 1
2016-01-10 00:00:00 1
Note that truncate assumes a 0 value for any unspecified time component (midnight). This differs
from partial string slicing, which returns any partially matching dates.
pandas.Panel.tshift
Notes
If freq is not specified, this method tries to use the freq or inferred_freq attributes of the index. If neither
of those attributes exists, a ValueError is raised.
pandas.Panel.tz_convert
pandas.Panel.tz_localize
pandas.Panel.update
pandas.Panel.var
pandas.Panel.where
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if
cond is True the element is used; otherwise the corresponding element from the DataFrame other is
used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
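A short sketch of the np.where equivalence described in the Notes (the frame and mask here are illustrative assumptions):
>>> import numpy as np
>>> df1 = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
>>> m = df1 % 3 == 0
>>> (df1.where(m, -df1) == np.where(m, df1, -df1)).all().all()
True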
pandas.Panel.xs
Panel.xs(key, axis=1)
Return slice of panel along selected axis
Parameters key : object
Label
Returns
y [ndim(self)-1]
Notes
agg
aggregate
drop
Axes
• items: axis 0; each item corresponds to a DataFrame contained inside
• major_axis: axis 1; the index (rows) of each of the DataFrames
• minor_axis: axis 2; the columns of each of the DataFrames
34.5.3 Conversion
34.5.5.1 pandas.Panel.__iter__
Panel.__iter__()
Iterate over the info axis
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
Panel.apply(func[, axis]) Applies function along axis (or axes) of the Panel
Panel.groupby(function[, axis]) Group data on given axis, returning GroupBy object
34.5.9.1 pandas.Panel.drop
Panel.dropna([axis, how, inplace]) Drop 2D objects from the Panel, holding the passed axis constant
Panel.join(other[, how, lsuffix, rsuffix]) Join items with other Panel either on major and minor
axes column
Panel.update(other[, join, overwrite, . . . ]) Modify Panel in place using non-NA values from passed
Panel, or object coercible to Panel.
Panel.from_dict(data[, intersect, orient, dtype]) Construct Panel from dict of DataFrame objects
Panel.to_pickle(path[, compression, protocol]) Pickle (serialize) object to file.
Panel.to_excel(path[, na_rep, engine]) Write each DataFrame in Panel to a separate excel sheet
Panel.to_hdf(path_or_buf, key, **kwargs) Write the contained data to an HDF5 file using HDFS-
tore.
Panel.to_sparse(*args, **kwargs) NOT IMPLEMENTED: do not call this method, as spar-
sifying is not supported for Panel objects and will raise
an error.
Panel.to_frame([filter_observations]) Transform wide format into long (stacked) format as
DataFrame whose columns are the Panel’s items and
whose index is a MultiIndex formed of the Panel’s ma-
jor and minor axes.
Panel.to_clipboard([excel, sep]) Copy object to the system clipboard.
34.6 Index
Many of these methods or variants thereof are available on the objects that contain an index (Series/DataFrame)
and those should most likely be used before calling these methods directly.
34.6.1 pandas.Index
class pandas.Index
Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas
objects
Parameters
Notes
Examples
>>> pd.Index(list('abc'))
Index(['a', 'b', 'c'], dtype='object')
Attributes
34.6.1.1 pandas.Index.T
Index.T
return the transpose, which is by definition self
34.6.1.2 pandas.Index.base
Index.base
return the base object if the memory of the underlying data is shared
34.6.1.3 pandas.Index.data
Index.data
return the data pointer of the underlying data
34.6.1.4 pandas.Index.dtype
Index.dtype
return the dtype object of the underlying data
34.6.1.5 pandas.Index.dtype_str
Index.dtype_str
return the dtype str of the underlying data
34.6.1.6 pandas.Index.flags
Index.flags
34.6.1.7 pandas.Index.hasnans
Index.hasnans
return if I have any nans; enables various perf speedups
34.6.1.8 pandas.Index.inferred_type
Index.inferred_type
return a string of the type inferred from the values
34.6.1.9 pandas.Index.is_monotonic
Index.is_monotonic
alias for is_monotonic_increasing (deprecated)
34.6.1.10 pandas.Index.is_monotonic_decreasing
Index.is_monotonic_decreasing
return if the index is monotonic decreasing (only equal or decreasing) values.
Examples
34.6.1.11 pandas.Index.is_monotonic_increasing
Index.is_monotonic_increasing
return if the index is monotonic increasing (only equal or increasing) values.
Examples
34.6.1.12 pandas.Index.is_unique
Index.is_unique
return if the index has unique values
34.6.1.13 pandas.Index.itemsize
Index.itemsize
return the size of the dtype of the item of the underlying data
34.6.1.14 pandas.Index.nbytes
Index.nbytes
return the number of bytes in the underlying data
34.6.1.15 pandas.Index.ndim
Index.ndim
return the number of dimensions of the underlying data, by definition 1
34.6.1.16 pandas.Index.shape
Index.shape
return a tuple of the shape of the underlying data
34.6.1.17 pandas.Index.size
Index.size
return the number of elements in the underlying data
34.6.1.18 pandas.Index.strides
Index.strides
return the strides of the underlying data
34.6.1.19 pandas.Index.values
Index.values
return the underlying data as an ndarray
asi8
empty
has_duplicates
is_all_dates
name
names
nlevels
Methods
34.6.1.20 pandas.Index.all
Index.all(*args, **kwargs)
Return whether all elements are True.
Parameters *args
These parameters will be passed to numpy.all.
**kwargs
These parameters will be passed to numpy.all.
Returns all : bool or array_like (if axis is specified)
A single element array_like may be converted to bool.
See also:
Notes
Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal
to zero.
Examples
all
True, because nonzero integers are considered True.
any
True, because 1 is considered True.
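The calls corresponding to these statements are not reproduced in this extract; examples consistent with them would be:
>>> pd.Index([1, 2, 3]).all()
True
>>> pd.Index([0, 0, 1]).any()
True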
34.6.1.21 pandas.Index.any
Index.any(*args, **kwargs)
Return whether any element is True.
Parameters *args
These parameters will be passed to numpy.any.
**kwargs
These parameters will be passed to numpy.any.
Returns any : bool or array_like (if axis is specified)
A single element array_like may be converted to bool.
See also:
Notes
Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal
to zero.
Examples
34.6.1.22 pandas.Index.append
Index.append(other)
Append a collection of Index objects together
Parameters
other [Index or list/tuple of indices]
Returns
appended [Index]
34.6.1.23 pandas.Index.argmax
Index.argmax(axis=None)
return a ndarray of the maximum argument indexer
See also:
numpy.ndarray.argmax
34.6.1.24 pandas.Index.argmin
Index.argmin(axis=None)
return a ndarray of the minimum argument indexer
See also:
numpy.ndarray.argmin
34.6.1.25 pandas.Index.argsort
Index.argsort(*args, **kwargs)
Return the integer indices that would sort the index.
Parameters *args
Passed to numpy.ndarray.argsort.
**kwargs
Passed to numpy.ndarray.argsort.
Returns numpy.ndarray
Integer indices that would sort the index if used as an indexer.
See also:
Examples
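The index and the sort order used below are not constructed in this extract; a setup consistent with the output would be:
>>> idx = pd.Index(['b', 'a', 'd', 'c'])
>>> order = idx.argsort()
>>> order
array([1, 0, 3, 2])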
>>> idx[order]
Index(['a', 'b', 'c', 'd'], dtype='object')
34.6.1.26 pandas.Index.asof
Index.asof(label)
For a sorted index, return the most recent label up to and including the passed label. Return NaN if not
found.
See also:
34.6.1.27 pandas.Index.asof_locs
Index.asof_locs(where, mask)
Parameters
where [array of timestamps]
mask [array of booleans where data is not NA]
34.6.1.28 pandas.Index.astype
Index.astype(dtype, copy=True)
Create an Index with values cast to dtypes. The class of a new Index is determined by dtype. When
conversion is impossible, a ValueError exception is raised.
Parameters
dtype [numpy dtype or pandas type]
copy : bool, default True
By default, astype always returns a newly allocated object. If copy is set to False
and internal requirements on dtype are satisfied, the original data is used to create
a new Index or the original Index is returned.
New in version 0.19.0.
34.6.1.29 pandas.Index.contains
Index.contains(key)
return a boolean if this key is IN the index
Parameters
key [object]
Returns
boolean
34.6.1.30 pandas.Index.copy
Returns
copy [Index]
Notes
In most cases, there should be no functional difference from using deep, but if deep is passed it will
attempt to deepcopy.
34.6.1.31 pandas.Index.delete
Index.delete(loc)
Make new Index with passed location(-s) deleted
Returns
new_index [Index]
34.6.1.32 pandas.Index.difference
Index.difference(other)
Return a new Index with elements from the index that are not in other.
This is the set difference of two Index objects. It’s sorted if sorting is possible.
Parameters
other [Index or array-like]
Returns
difference [Index]
Examples
34.6.1.33 pandas.Index.drop
Index.drop(labels, errors=’raise’)
Make new Index with passed list of labels deleted
Parameters
labels [array-like]
errors : {‘ignore’, ‘raise’}, default ‘raise’
If ‘ignore’, suppress error and existing labels are dropped.
Returns
dropped [Index]
Raises KeyError
If not all of the labels are found in the selected axis
34.6.1.34 pandas.Index.drop_duplicates
Index.drop_duplicates(keep=’first’)
Return Index with duplicate values removed.
Parameters keep : {‘first’, ‘last’, False}, default ‘first’
• ‘first’ : Drop duplicates except for the first occurrence.
• ‘last’ : Drop duplicates except for the last occurrence.
• False : Drop all duplicates.
Returns
deduplicated [Index]
See also:
Examples
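The index used below is not constructed in this extract; the outputs are consistent with, for example:
>>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'])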
The keep parameter controls which duplicate values are removed. The value ‘first’ keeps the first occur-
rence for each set of duplicated entries. The default value of keep is ‘first’.
>>> idx.drop_duplicates(keep='first')
Index(['lama', 'cow', 'beetle', 'hippo'], dtype='object')
The value ‘last’ keeps the last occurrence for each set of duplicated entries.
>>> idx.drop_duplicates(keep='last')
Index(['cow', 'beetle', 'lama', 'hippo'], dtype='object')
>>> idx.drop_duplicates(keep=False)
Index(['cow', 'beetle', 'hippo'], dtype='object')
34.6.1.35 pandas.Index.dropna
Index.dropna(how=’any’)
Return Index without NA/NaN values
Parameters how : {‘any’, ‘all’}, default ‘any’
If the Index is a MultiIndex, drop the value when any or all levels are NaN.
Returns
valid [Index]
34.6.1.36 pandas.Index.duplicated
Index.duplicated(keep=’first’)
Indicate duplicate index values.
Duplicated values are indicated as True values in the resulting array. Either all duplicates, all except the
first, or all except the last occurrence of duplicates can be indicated.
Parameters keep : {‘first’, ‘last’, False}, default ‘first’
The value or values in a set of duplicates to mark as missing.
• ‘first’ : Mark duplicates as True except for the first occurrence.
• ‘last’ : Mark duplicates as True except for the last occurrence.
• False : Mark all duplicates as True.
Returns
numpy.ndarray
See also:
Examples
By default, for each set of duplicated values, the first occurrence is set to False and all others to True:
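The index and the default call are not reproduced in this extract; a setup and call consistent with the outputs below would be:
>>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama'])
>>> idx.duplicated()
array([False, False, True, False, True])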
which is equivalent to
>>> idx.duplicated(keep='first')
array([False, False, True, False, True])
By using ‘last’, the last occurrence of each set of duplicated values is set on False and all others on True:
>>> idx.duplicated(keep='last')
array([ True, False, True, False, False])
>>> idx.duplicated(keep=False)
array([ True, False, True, False, True])
34.6.1.37 pandas.Index.equals
Index.equals(other)
Determines if two Index objects contain the same elements.
34.6.1.38 pandas.Index.factorize
Index.factorize(sort=False, na_sentinel=-1)
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an array when all that matters is identifying
distinct values. factorize is available as both a top-level function pandas.factorize(), and as a
method Series.factorize() and Index.factorize().
Parameters sort : boolean, default False
Sort uniques and shuffle labels to maintain the relationship.
na_sentinel : int, default -1
Value to mark “not found”.
Returns labels : ndarray
An integer ndarray that’s an indexer into uniques. uniques.take(labels)
will have the same values as values.
uniques : ndarray, Index, or Categorical
The unique valid values. When values is Categorical, uniques is a Categorical.
When values is some other pandas object, an Index is returned. Otherwise, a 1-D
ndarray is returned.
Note: Even if there’s a missing value in values, uniques will not contain an entry
for it.
See also:
Examples
These examples all show factorize as a top-level method like pd.factorize(values). The results
are identical for methods like Series.factorize().
With sort=True, the uniques will be sorted, and labels will be shuffled so that the relationship is
maintained.
Missing values are indicated in labels with na_sentinel (-1 by default). Note that missing values are
never included in uniques.
Thus far, we’ve only factorized lists (which are internally coerced to NumPy arrays). When factorizing
pandas objects, the type of uniques will differ. For Categoricals, a Categorical is returned.
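The code for these examples is not reproduced in this extract; a minimal sketch of the basic case would be:
>>> labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> labels
array([0, 0, 1, 2, 0])
>>> uniques
array(['b', 'a', 'c'], dtype=object)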
34.6.1.39 pandas.Index.fillna
Index.fillna(value=None, downcast=None)
Fill NA/NaN values with the specified value
Parameters value : scalar
Scalar value to use to fill holes (e.g. 0). This value cannot be a list-likes.
downcast : dict, default is None
a dict of item->dtype of what to downcast if possible, or the string ‘infer’ which
will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible)
Returns
filled [Index]
34.6.1.40 pandas.Index.format
34.6.1.41 pandas.Index.get_duplicates
Index.get_duplicates()
Extract duplicated index elements.
Returns a sorted list of index elements which appear more than once in the index.
Deprecated since version 0.23.0: Use idx[idx.duplicated()].unique() instead
Returns array-like
List of duplicated indexes.
See also:
Examples
Note that for a DatetimeIndex, it does not return a list but a new DatetimeIndex:
34.6.1.42 pandas.Index.get_indexer
Examples
34.6.1.43 pandas.Index.get_indexer_for
Index.get_indexer_for(target, **kwargs)
Guaranteed return of an indexer even when non-unique. This dispatches to get_indexer or
get_indexer_non_unique as appropriate.
34.6.1.44 pandas.Index.get_indexer_non_unique
Index.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index. The indexer should be then used as an
input to ndarray.take to align the current data to the new index.
Parameters
target [Index]
Returns indexer : ndarray of int
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
missing : ndarray of int
An indexer into the target of the values not found. These correspond to the -1 in
the indexer array
34.6.1.45 pandas.Index.get_level_values
Index.get_level_values(level)
Return an Index of values for requested level, equal to the length of the index.
Parameters level : int or str
level is either the integer position of the level in the MultiIndex, or the name
of the level.
Returns values : Index
self, as there is only one level in the Index.
See also:
34.6.1.46 pandas.Index.get_loc
Maximum distance from index value for inexact matches. The value of the index
at the matching location must satisfy the equation abs(index[loc] - key)
<= tolerance.
Tolerance may be a scalar value, which applies the same tolerance to all values,
or list-like, which applies variable tolerance per element. List-like includes list,
tuple, array, Series, and must be the same size as the index and its dtype must
exactly match the index’s type.
New in version 0.21.0: (list-like tolerance)
Returns
loc [int if unique index, slice if monotonic index, else mask]
Examples
34.6.1.47 pandas.Index.get_slice_bound
34.6.1.48 pandas.Index.get_value
Index.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray. Only use this if you know what you’re doing
34.6.1.49 pandas.Index.get_values
Index.get_values()
Return Index data as an numpy.ndarray.
Returns numpy.ndarray
Examples
34.6.1.50 pandas.Index.groupby
Index.groupby(values)
Group the index labels by a given array of values.
Parameters values : array
Values used to determine the groups.
Returns groups : dict
{group name -> group labels}
34.6.1.51 pandas.Index.identical
Index.identical(other)
Similar to equals, but check that other comparable attributes are also equal
34.6.1.52 pandas.Index.insert
Index.insert(loc, item)
Make new Index inserting new item at location. Follows Python list.append semantics for negative values
Parameters
loc [int]
item [object]
Returns
new_index [Index]
34.6.1.53 pandas.Index.intersection
Index.intersection(other)
Form the intersection of two Index objects.
This returns a new Index with elements common to the index and other, preserving the order of the calling
index.
Parameters
other [Index or array-like]
Returns
intersection [Index]
Examples
34.6.1.54 pandas.Index.is_
Index.is_(other)
More flexible, faster check like is but that works through views
Note: this is not the same as Index.identical(), which checks that metadata is also the same.
Parameters other : object
other object to compare against.
Returns
True if both have same underlying data, False otherwise [bool]
34.6.1.55 pandas.Index.is_categorical
Index.is_categorical()
Check if the Index holds categorical data.
Returns boolean
True if the Index is categorical.
See also:
Examples
34.6.1.56 pandas.Index.isin
Index.isin(values, level=None)
Return a boolean array where the index values are in values.
Compute boolean array of whether each index value is found in the passed set of values. The length of
the returned boolean array matches the length of the index.
Parameters values : set or list-like
Sought values.
New in version 0.18.1: Support for values as a set.
level : str or int, optional
Name or position of the index level to use (if the index is a MultiIndex).
Returns is_contained : ndarray
NumPy array of boolean values.
See also:
Notes
In the case of MultiIndex you must either specify values as a list-like object containing tuples that are the
same length as the number of levels, or specify level. Otherwise it will raise a ValueError.
If level is specified:
• if it is the name of one and only one index level, use that level;
• otherwise it should be a number indicating level position.
Examples
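The index used below is not constructed in this extract; a setup consistent with the output would be:
>>> idx = pd.Index([1, 2, 3])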
Check whether each index value is in a list of values:
>>> idx.isin([1, 4])
array([ True, False, False])
Check whether the strings in the ‘color’ level of the MultiIndex are in a list of colors.
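That MultiIndex example is not reproduced in this extract. The datetime check below assumes an index such as:
>>> dates = ['2000-03-11', '2000-03-12', '2000-03-13']
>>> dti = pd.to_datetime(dates)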
>>> dti.isin(['2000-03-11'])
array([ True, False, False])
34.6.1.57 pandas.Index.isna
Index.isna()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None, numpy.
NaN or pd.NaT, get mapped to True values. Everything else get mapped to False values. Charac-
ters such as empty strings ‘’ or numpy.inf are not considered NA values (unless you set pandas.
options.mode.use_inf_as_na = True).
New in version 0.20.0.
Returns numpy.ndarray
A boolean array of whether my values are NA
See also:
Examples
34.6.1.58 pandas.Index.isnull
Index.isnull()
Detect missing values.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None, numpy.
NaN or pd.NaT, get mapped to True values. Everything else get mapped to False values. Charac-
ters such as empty strings ‘’ or numpy.inf are not considered NA values (unless you set pandas.
options.mode.use_inf_as_na = True).
New in version 0.20.0.
Returns numpy.ndarray
A boolean array of whether my values are NA
See also:
Examples
34.6.1.59 pandas.Index.item
Index.item()
return the first element of the underlying data as a python scalar
34.6.1.60 pandas.Index.join
34.6.1.61 pandas.Index.map
Index.map(mapper, na_action=None)
Map values using input correspondence (a dict, Series, or function).
Parameters mapper : function, dict, or Series
Mapping correspondence.
na_action : {None, ‘ignore’}
If ‘ignore’, propagate NA values, without passing them to the mapping corre-
spondence.
Returns applied : Union[Index, MultiIndex], inferred
The output of the mapping function applied to the index. If the function returns a
tuple with more than one element a MultiIndex will be returned.
34.6.1.62 pandas.Index.max
Index.max()
Return the maximum value of the Index.
Returns scalar
Maximum value.
See also:
Examples
34.6.1.63 pandas.Index.memory_usage
Index.memory_usage(deep=False)
Memory usage of the values
Parameters deep : bool
Introspect the data deeply, interrogate object dtypes for system-level memory
consumption
Returns
bytes used
See also:
numpy.ndarray.nbytes
Notes
Memory usage does not include memory consumed by elements that are not components of the array if
deep=False or if used on PyPy
34.6.1.64 pandas.Index.min
Index.min()
Return the minimum value of the Index.
Returns scalar
Minimum value.
See also:
Examples
34.6.1.65 pandas.Index.notna
Index.notna()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
New in version 0.20.0.
Returns numpy.ndarray
Boolean array to indicate which entries are not NA.
See also:
Examples
Show which entries in an Index are not NA. The result is an array.
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.notna()
array([ True, True, False])
34.6.1.66 pandas.Index.notnull
Index.notnull()
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to
True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set
pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN,
get mapped to False values.
New in version 0.20.0.
Returns numpy.ndarray
Boolean array to indicate which entries are not NA.
See also:
Examples
Show which entries in an Index are not NA. The result is an array.
34.6.1.67 pandas.Index.nunique
Index.nunique(dropna=True)
Return number of unique elements in the object.
Excludes NA values by default.
Parameters dropna : boolean, default True
Don’t include NaN in the count.
Returns
nunique [int]
34.6.1.68 pandas.Index.putmask
Index.putmask(mask, value)
return a new Index of the values set with the mask
See also:
numpy.ndarray.putmask
34.6.1.69 pandas.Index.ravel
Index.ravel(order=’C’)
return an ndarray of the flattened values of the underlying data
See also:
numpy.ndarray.ravel
34.6.1.70 pandas.Index.reindex
34.6.1.71 pandas.Index.rename
Index.rename(name, inplace=False)
Set new names on index. Defaults to returning new index.
Parameters name : str or list
name to set
inplace : bool
if True, mutates in place
Returns
new index (of same type and class, etc.) [if inplace, returns None]
34.6.1.72 pandas.Index.repeat
Returns pandas.Index
Newly created Index with repeated elements.
See also:
Examples
34.6.1.73 pandas.Index.searchsorted
Notes
Examples
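The objects used below are not constructed in this extract. The first call is consistent with a sorted numeric Series such as the one sketched here; the second assumes a sorted string-valued Series, for example pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk']).
>>> x = pd.Series([1, 2, 3])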
>>> x.searchsorted(4)
array([3])
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
34.6.1.74 pandas.Index.set_names
Examples
34.6.1.75 pandas.Index.set_value
34.6.1.76 pandas.Index.shift
Index.shift(periods=1, freq=None)
Shift index by desired number of time frequency increments.
This method is for shifting the values of datetime-like indexes by a specified time increment a given
number of times.
Parameters periods : int, default 1
Number of periods (or increments) to shift by, can be positive or negative.
freq : pandas.DateOffset, pandas.Timedelta or string, optional
Frequency increment to shift by. If None, the index is shifted by its own freq
attribute. Offset aliases are valid strings, e.g., ‘D’, ‘W’, ‘M’ etc.
Returns pandas.Index
shifted index
See also:
Notes
This method is only implemented for datetime-like index classes, i.e., DatetimeIndex, PeriodIndex and
TimedeltaIndex.
Examples
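The index used below is not constructed in this extract; a setup consistent with the shifted output would be:
>>> month_starts = pd.date_range('2011-01-01', periods=5, freq='MS')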
The default value of freq is the freq attribute of the index, which is ‘MS’ (month start) in this example.
>>> month_starts.shift(10)
DatetimeIndex(['2011-11-01', '2011-12-01', '2012-01-01', '2012-02-01',
'2012-03-01'],
dtype='datetime64[ns]', freq='MS')
34.6.1.77 pandas.Index.slice_indexer
Returns
indexer [slice]
Raises KeyError : If key does not exist, or key is not unique and index is
not ordered.
Notes
This function assumes that the data is sorted, so use at your own peril
Examples
This is a method on all index types. For example you can do:
34.6.1.78 pandas.Index.slice_locs
Returns
start, end [int]
See also:
Notes
Examples
34.6.1.79 pandas.Index.sort_values
Index.sort_values(return_indexer=False, ascending=True)
Return a sorted copy of the index.
Return a sorted copy of the index, and optionally return the indices that sorted the index itself.
Parameters return_indexer : bool, default False
Should the indices that would sort the index be returned.
ascending : bool, default True
Should the index values be sorted in an ascending order.
Returns sorted_index : pandas.Index
Sorted copy of the index.
indexer : numpy.ndarray, optional
The indices that the index itself was sorted by.
See also:
Examples
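The index used below is not constructed in this extract; the sorted output is consistent with, for example:
>>> idx = pd.Index([10, 100, 1, 1000])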
>>> idx.sort_values()
Int64Index([1, 10, 100, 1000], dtype='int64')
Sort values in descending order, and also get the indices idx was sorted by.
34.6.1.80 pandas.Index.sortlevel
Returns
sorted_index [Index]
34.6.1.81 pandas.Index.str
Index.str()
Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular
method. Patterned after Python’s string methods, with some inspiration from R’s stringr package.
Examples
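The Series used below is not constructed in this extract; a setup consistent with these calls (assuming numpy imported as np) would be:
>>> s = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'])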
>>> s.str.split('_')
>>> s.str.replace('_', '')
34.6.1.82 pandas.Index.summary
Index.summary(name=None)
Return a summarized representation.
Deprecated since version 0.23.0.
34.6.1.83 pandas.Index.symmetric_difference
Index.symmetric_difference(other, result_name=None)
Compute the symmetric difference of two Index objects. It’s sorted if sorting is possible.
Parameters
other [Index or array-like]
result_name [str]
Returns
symmetric_difference [Index]
Notes
symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equiv-
alent to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with du-
plicates dropped.
Examples
34.6.1.84 pandas.Index.take
34.6.1.85 pandas.Index.to_frame
Index.to_frame(index=True)
Create a DataFrame with a column containing the Index.
New in version 0.21.0.
Parameters index : boolean, default True
Set the index of the returned DataFrame as the original Index.
Returns DataFrame
DataFrame containing the original Index data.
See also:
Examples
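The index used below is not constructed in this extract; a setup consistent with the output would be:
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')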
>>> idx.to_frame(index=False)
animal
0 Ant
1 Bear
2 Cow
34.6.1.86 pandas.Index.to_native_types
Index.to_native_types(slicer=None, **kwargs)
Format specified values of self and return them.
Parameters slicer : int, array-like
An indexer into self that specifies which values are used in the formatting process.
kwargs : dict
Options for specifying how the values should be formatted. These options include
the following:
1. na_rep [str] The value that serves as a placeholder for NULL values
2. quoting [bool or None] Whether or not there are quoted values in self
3. date_format [str] The format used to represent date-like values
34.6.1.87 pandas.Index.to_series
Index.to_series(index=None, name=None)
Create a Series with both index and values equal to the index keys, useful with map for returning an
indexer based on an index.
Parameters index : Index, optional
index of resulting Series. If None, defaults to original index
name : string, optional
name of resulting Series. If None, defaults to name of original index
Returns
Series [dtype will be based on the type of the Index values.]
34.6.1.88 pandas.Index.tolist
Index.tolist()
Return a list of the values.
These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Times-
tamp/Timedelta/Interval/Period)
See also:
numpy.ndarray.tolist
34.6.1.89 pandas.Index.transpose
Index.transpose(*args, **kwargs)
return the transpose, which is by definition self
34.6.1.90 pandas.Index.union
Index.union(other)
Form the union of two Index objects, sorting if possible.
Parameters
other [Index or array-like]
Returns
union [Index]
Examples
34.6.1.91 pandas.Index.unique
Index.unique(level=None)
Return unique values in the index. Uniques are returned in order of appearance; this does NOT sort.
Parameters level : int or str, optional, default None
Only return values from specified level (for MultiIndex)
New in version 0.23.0.
Returns
Index without duplicates
See also:
unique, Series.unique
34.6.1.92 pandas.Index.value_counts
Sort by values
ascending : boolean, default False
Sort in ascending order
bins : integer, optional
Rather than count values, group them into half-open bins, a convenience for
pd.cut, only works with numeric data
dropna : boolean, default True
Don’t include counts of NaN.
Returns
counts [Series]
34.6.1.93 pandas.Index.where
Index.where(cond, other=None)
New in version 0.19.0.
Return an Index of same shape as self and whose corresponding entries are from self where cond is True
and otherwise are from other.
Parameters
cond [boolean array-like with the same length as self]
other [scalar, or array-like]
holds_integer
is_boolean
is_floating
is_integer
is_interval
is_lexsorted_for_tuple
is_mixed
is_numeric
is_object
is_type_compatible
sort
view
34.6.2 Attributes
34.6.2.1 pandas.Index.has_duplicates
Index.has_duplicates
34.6.2.2 pandas.Index.is_all_dates
Index.is_all_dates
34.6.2.3 pandas.Index.name
Index.name = None
34.6.2.4 pandas.Index.names
Index.names
34.6.2.5 pandas.Index.empty
Index.empty
Index.take(indices[, axis, allow_fill, . . . ]) return a new Index of the values selected by the indices
Index.putmask(mask, value) return a new Index of the values set with the mask
Index.set_names(names[, level, inplace]) Set new names on index.
Index.unique([level]) Return unique values in the index.
Index.nunique([dropna]) Return number of unique elements in the object.
Index.value_counts([normalize, sort, . . . ]) Returns object containing counts of unique values.
34.6.3.1 pandas.Index.is_boolean
Index.is_boolean()
34.6.3.2 pandas.Index.is_floating
Index.is_floating()
34.6.3.3 pandas.Index.is_integer
Index.is_integer()
34.6.3.4 pandas.Index.is_interval
Index.is_interval()
34.6.3.5 pandas.Index.is_lexsorted_for_tuple
Index.is_lexsorted_for_tuple(tup)
34.6.3.6 pandas.Index.is_mixed
Index.is_mixed()
34.6.3.7 pandas.Index.is_numeric
Index.is_numeric()
34.6.3.8 pandas.Index.is_object
Index.is_object()
34.6.5 Conversion
34.6.5.1 pandas.Index.view
Index.view(cls=None)
34.6.6 Sorting
Index.argsort(*args, **kwargs) Return the integer indices that would sort the index.
Index.searchsorted(value[, side, sorter]) Find indices where elements should be inserted to main-
tain order.
Index.sort_values([return_indexer, ascending]) Return a sorted copy of the index.
34.6.9 Selecting
Index.asof(label) For a sorted index, return the most recent label up to and
including the passed label.
Index.asof_locs(where, mask) where : array of timestamps mask : array of booleans
where data is not NA
Index.contains(key) return a boolean if this key is IN the index
Index.get_duplicates() (DEPRECATED) Extract duplicated index elements.
Index.get_indexer(target[, method, limit, . . . ]) Compute indexer and mask for new index given the cur-
rent index.
Index.get_indexer_for(target, **kwargs) guaranteed return of an indexer even when
non-unique This dispatches to get_indexer or
get_indexer_nonunique as appropriate
Index.get_indexer_non_unique(target) Compute indexer and mask for new index given the cur-
rent index.
Index.get_level_values(level) Return an Index of values for requested level, equal to
the length of the index.
Index.get_loc(key[, method, tolerance]) Get integer location, slice or boolean mask for requested
label.
Index.get_slice_bound(label, side, kind) Calculate slice bound that corresponds to given label.
Index.get_value(series, key) Fast lookup of value from 1-dimensional ndarray.
Index.get_values() Return Index data as an numpy.ndarray.
Index.set_value(arr, key, value) Fast lookup of value from 1-dimensional ndarray.
Continued on next page
34.7.1 pandas.RangeIndex
class pandas.RangeIndex
Immutable Index implementing a monotonic integer range.
RangeIndex is a memory-saving special case of Int64Index limited to representing monotonic ranges. Using
RangeIndex may in some instances improve computing speed.
This is the default index type used by DataFrame and Series when no explicit index is provided by the user.
Parameters start : int (default: 0), or other RangeIndex instance.
If int and “stop” is not given, interpreted as “stop” instead.
Attributes
None
Methods
34.7.1.1 pandas.RangeIndex.from_range
34.7.2 pandas.Int64Index
class pandas.Int64Index
Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas
objects. Int64Index is a special case of Index with purely integer labels.
Parameters
data [array-like (1-dimensional)]
dtype [NumPy dtype (default: int64)]
copy : bool
Make a copy of input ndarray
name : object
Name to be stored in the index
See also:
Notes
Attributes
None
Methods
None
34.7.3 pandas.UInt64Index
class pandas.UInt64Index
Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas
objects. UInt64Index is a special case of Index with purely unsigned integer labels.
Parameters
data [array-like (1-dimensional)]
dtype [NumPy dtype (default: uint64)]
copy : bool
Make a copy of input ndarray
name : object
Name to be stored in the index
See also:
Notes
Attributes
None
Methods
None
34.7.4 pandas.Float64Index
class pandas.Float64Index
Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas
objects. Float64Index is a special case of Index with purely float labels.
Parameters
data [array-like (1-dimensional)]
dtype [NumPy dtype (default: float64)]
copy : bool
Make a copy of input ndarray
name : object
Name to be stored in the index
See also:
Notes
Attributes
None
Methods
None
RangeIndex.from_range(data[, name, dtype]) create RangeIndex from a range (py3), or xrange (py2)
object
34.8 CategoricalIndex
34.8.1 pandas.CategoricalIndex
class pandas.CategoricalIndex
Immutable Index implementing an ordered, sliceable set. CategoricalIndex represents a sparsely populated
Index with an underlying Categorical.
Parameters
data [array-like or Categorical, (1-dimensional)]
categories : optional, array-like
categories for the CategoricalIndex
ordered : boolean,
designating if the categories are ordered
copy : bool
Make a copy of input ndarray
name : object
Name to be stored in the index
See also:
Categorical, Index
Attributes
codes
categories
ordered
Methods
34.8.1.1 pandas.CategoricalIndex.rename_categories
CategoricalIndex.rename_categories(*args, **kwargs)
Renames categories.
Parameters new_categories : list-like, dict-like or callable
• list-like: all items must be unique and the number of items in the new categories
must match the existing number of categories.
• dict-like: specifies a mapping from old categories to new. Categories not contained in the mapping are passed through and extra categories in the mapping are ignored.
New in version 0.21.0.
• callable : a callable that is called on all items in the old categories and whose
return values comprise the new categories.
New in version 0.23.0.
Examples
For dict-like new_categories, extra keys are ignored and categories not in the dictionary are passed
through
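A minimal sketch of the dict-like case (values assumed for illustration; assumes pandas imported as pd):
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories({'a': 'A', 'c': 'C'})
[A, A, b]
Categories (2, object): [A, b]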
34.8.1.2 pandas.CategoricalIndex.reorder_categories
CategoricalIndex.reorder_categories(*args, **kwargs)
Reorders categories as specified in new_categories.
new_categories need to include all old categories and no new category items.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, optional
Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns
cat [Categorical with reordered categories or None if inplace.]
Raises ValueError
If the new categories do not contain all old category items or any new ones
See also:
rename_categories, add_categories, remove_categories,
remove_unused_categories, set_categories
34.8.1.3 pandas.CategoricalIndex.add_categories
CategoricalIndex.add_categories(*args, **kwargs)
Add new categories.
new_categories will be included at the last/highest place in the categories and will be unused directly after
this call.
Parameters new_categories : category or list-like of category
The new categories to be included.
inplace : boolean (default: False)
Whether or not to add the categories inplace or return a copy of this categorical
with added categories.
Returns
cat [Categorical with new categories added or None if inplace.]
Raises ValueError
If the new categories include old categories or do not validate as categories
See also:
rename_categories, reorder_categories, remove_categories,
remove_unused_categories, set_categories
34.8.1.4 pandas.CategoricalIndex.remove_categories
CategoricalIndex.remove_categories(*args, **kwargs)
Removes the specified categories.
removals must be included in the old categories. Values which were in the removed categories will be set
to NaN
Parameters removals : category or list of categories
The categories which should be removed.
inplace : boolean (default: False)
Whether or not to remove the categories inplace or return a copy of this categorical with removed categories.
Returns
cat [Categorical with removed categories or None if inplace.]
Raises ValueError
If the removals are not contained in the categories
See also:
rename_categories, reorder_categories, add_categories,
remove_unused_categories, set_categories
34.8.1.5 pandas.CategoricalIndex.remove_unused_categories
CategoricalIndex.remove_unused_categories(*args, **kwargs)
Removes categories which are not used.
Parameters inplace : boolean (default: False)
Whether or not to drop unused categories inplace or return a copy of this categorical with unused categories dropped.
Returns
cat [Categorical with unused categories dropped or None if inplace.]
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
set_categories
34.8.1.6 pandas.CategoricalIndex.set_categories
CategoricalIndex.set_categories(*args, **kwargs)
Sets the categories to the specified new_categories.
new_categories can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If rename==True, the categories will simply be renamed (fewer or more items than in the old categories will result in values set to NaN or in unused categories, respectively).
This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods.
On the other hand, this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes on python3, which does not consider an S1 string equal to a single-char python string.
Parameters new_categories : Index-like
The categories in new order.
ordered : boolean, (default: False)
Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information.
rename : boolean (default: False)
Whether or not the new_categories should be considered as a rename of the old
categories or as reordered categories.
inplace : boolean (default: False)
Whether or not to reorder the categories inplace or return a copy of this categorical
with reordered categories.
Returns
cat [Categorical with reordered categories or None if inplace.]
Raises ValueError
If new_categories does not validate as categories
See also:
rename_categories, reorder_categories, add_categories, remove_categories,
remove_unused_categories
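A minimal sketch (values assumed for illustration): setting categories that drop 'a' and add 'd' in one call sets the dropped value to NaN and leaves 'd' unused:
>>> c = pd.Categorical(['a', 'b', 'c'])
>>> c.set_categories(['b', 'c', 'd'])
[NaN, b, c]
Categories (3, object): [b, c, d]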
34.8.1.7 pandas.CategoricalIndex.as_ordered
CategoricalIndex.as_ordered(*args, **kwargs)
Sets the Categorical to be ordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical with ordered set to True
34.8.1.8 pandas.CategoricalIndex.as_unordered
CategoricalIndex.as_unordered(*args, **kwargs)
Sets the Categorical to be unordered
Parameters inplace : boolean (default: False)
Whether or not to set the ordered attribute inplace or return a copy of this categorical with ordered set to False
34.8.1.9 pandas.CategoricalIndex.map
CategoricalIndex.map(mapper)
Map values using input correspondence (a dict, Series, or function).
Maps the values (their categories, not the codes) of the index to new categories. If the mapping corre-
spondence is one-to-one the result is a CategoricalIndex which has the same order property as the
original, otherwise an Index is returned.
If a dict or Series is used any unmapped category is mapped to NaN. Note that if this happens an
Index will be returned.
Parameters mapper : function, dict, or Series
Mapping correspondence.
Returns pandas.CategoricalIndex or pandas.Index
Mapped index.
See also:
Examples
If a dict is used, all unmapped categories are mapped to NaN and the result is an Index:
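A minimal sketch (values assumed for illustration):
>>> idx = pd.CategoricalIndex(['a', 'b', 'c'])
>>> idx.map({'a': 'first', 'b': 'second'})
Index(['first', 'second', nan], dtype='object')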
CategoricalIndex.codes
CategoricalIndex.categories
CategoricalIndex.ordered
CategoricalIndex.rename_categories(*args, ...) Renames categories.
CategoricalIndex.reorder_categories(*args, ...) Reorders categories as specified in new_categories.
CategoricalIndex.add_categories(*args, **kwargs) Add new categories.
CategoricalIndex.remove_categories(*args, ...) Removes the specified categories.
CategoricalIndex.remove_unused_categories(...) Removes categories which are not used.
CategoricalIndex.set_categories(*args, **kwargs) Sets the categories to the specified new_categories.
CategoricalIndex.as_ordered(*args, **kwargs) Sets the Categorical to be ordered
34.8.2.1 pandas.CategoricalIndex.codes
CategoricalIndex.codes
34.8.2.2 pandas.CategoricalIndex.categories
CategoricalIndex.categories
34.8.2.3 pandas.CategoricalIndex.ordered
CategoricalIndex.ordered
34.9 IntervalIndex
34.9.1 pandas.IntervalIndex
class pandas.IntervalIndex
Immutable Index implementing an ordered, sliceable set. IntervalIndex represents an Index of Interval objects
that are all closed on the same side.
New in version 0.20.0.
Warning: The indexing behaviors are provisional and may change in a future version of pandas.
See also:
cut, qcut
Notes
Examples
Attributes
34.9.1.1 pandas.IntervalIndex.closed
IntervalIndex.closed
Whether the intervals are closed on the left-side, right-side, both or neither
34.9.1.2 pandas.IntervalIndex.is_non_overlapping_monotonic
IntervalIndex.is_non_overlapping_monotonic
Return True if the IntervalIndex is non-overlapping (no Intervals share points) and is either monotonic
increasing or monotonic decreasing, else False
34.9.1.3 pandas.IntervalIndex.left
IntervalIndex.left
Return the left endpoints of each Interval in the IntervalIndex as an Index
34.9.1.4 pandas.IntervalIndex.length
IntervalIndex.length
Return an Index with entries denoting the length of each Interval in the IntervalIndex
34.9.1.5 pandas.IntervalIndex.mid
IntervalIndex.mid
Return the midpoint of each Interval in the IntervalIndex as an Index
34.9.1.6 pandas.IntervalIndex.right
IntervalIndex.right
Return the right endpoints of each Interval in the IntervalIndex as an Index
34.9.1.7 pandas.IntervalIndex.values
IntervalIndex.values
Return the IntervalIndex’s data as a numpy array of Interval objects (with dtype=’object’)
Methods
34.9.1.8 pandas.IntervalIndex.contains
IntervalIndex.contains(key)
Return a boolean indicating if the key is IN the index
34.9.1.9 pandas.IntervalIndex.from_arrays
Notes
Each element of left must be less than or equal to the right element at the same position. If an element is
missing, it must be missing in both left and right. A TypeError is raised when using an unsupported type
for left or right. At the moment, ‘category’, ‘object’, and ‘string’ subtypes are not supported.
Examples
If you want to segment different groups of people based on ages, you can apply the method as follows:
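A minimal sketch of such an age segmentation (bin edges assumed for illustration; output shown approximately):
>>> pd.IntervalIndex.from_arrays([0, 18, 65], [18, 65, 100])
IntervalIndex([(0, 18], (18, 65], (65, 100]]
              closed='right',
              dtype='interval[int64]')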
34.9.1.10 pandas.IntervalIndex.from_breaks
Examples
34.9.1.11 pandas.IntervalIndex.from_tuples
Examples
34.9.1.12 pandas.IntervalIndex.get_indexer
Examples
34.9.1.13 pandas.IntervalIndex.get_loc
IntervalIndex.get_loc(key, method=None)
Get integer location, slice or boolean mask for requested label.
Parameters
key [label]
method : {None}, optional
• default: matches where the label is within an interval only.
Returns
loc [int if unique index, slice if monotonic index, else mask]
Examples
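A minimal sketch of the default behaviour (interval values assumed for illustration): with a unique index, an integer location is returned for a point that falls inside exactly one interval.
>>> i1, i2 = pd.Interval(0, 1), pd.Interval(1, 2)
>>> index = pd.IntervalIndex([i1, i2])
>>> index.get_loc(1)
0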
You can also supply an interval or a location for a point inside an interval.
If a label is in several intervals, you get the locations of all the relevant intervals.
>>> i3 = pd.Interval(0, 2)
>>> overlapping_index = pd.IntervalIndex([i2, i3])
>>> overlapping_index.get_loc(1.5)
array([0, 1], dtype=int64)
IntervalIndex.from_arrays(left, right[, ...]) Construct from two arrays defining the left and right bounds.
IntervalIndex.from_tuples(data[, closed, ...]) Construct an IntervalIndex from a list/array of tuples
IntervalIndex.from_breaks(breaks[, closed, ...]) Construct an IntervalIndex from an array of splits
IntervalIndex.contains(key) Return a boolean indicating if the key is IN the index
IntervalIndex.left Return the left endpoints of each Interval in the IntervalIndex as an Index
IntervalIndex.right Return the right endpoints of each Interval in the IntervalIndex as an Index
IntervalIndex.mid Return the midpoint of each Interval in the IntervalIndex as an Index
IntervalIndex.closed Whether the intervals are closed on the left-side, right-side, both or neither
IntervalIndex.length Return an Index with entries denoting the length of each Interval in the IntervalIndex
IntervalIndex.values Return the IntervalIndex’s data as a numpy array of Interval objects (with dtype=’object’)
IntervalIndex.is_non_overlapping_monotonic Return True if the IntervalIndex is non-overlapping (no Intervals share points) and is either monotonic increasing or monotonic decreasing, else False
IntervalIndex.get_loc(key[, method]) Get integer location, slice or boolean mask for requested label.
34.10 MultiIndex
34.10.1 pandas.MultiIndex
class pandas.MultiIndex
A multi-level, or hierarchical, index object for pandas objects
Parameters levels : sequence of arrays
The unique labels for each level
labels : sequence of arrays
Integers for each level designating which label at each location
sortorder : optional int
Level of sortedness (must be lexicographically sorted by that level)
names : optional sequence of objects
Names for each of the index levels. (name is accepted for compat)
copy : boolean, default False
Copy the meta-data
verify_integrity : boolean, default True
Check that the levels/labels are consistent and valid
See also:
Notes
Examples
A new MultiIndex is typically constructed using one of the helper methods MultiIndex.from_arrays(), MultiIndex.from_product() and MultiIndex.from_tuples(). For example (using .from_arrays):
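A minimal sketch (arrays and names assumed for illustration; output shown approximately):
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
MultiIndex(levels=[[1, 2], ['blue', 'red']],
           labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
           names=['number', 'color'])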
See further examples for how to construct a MultiIndex in the doc strings of the mentioned helper methods.
Attributes
34.10.1.1 pandas.MultiIndex.names
MultiIndex.names
Names of levels in MultiIndex
34.10.1.2 pandas.MultiIndex.nlevels
MultiIndex.nlevels
Integer number of levels in this MultiIndex.
34.10.1.3 pandas.MultiIndex.levshape
MultiIndex.levshape
A tuple with the length of each level.
levels
labels
Methods
34.10.1.4 pandas.MultiIndex.from_arrays
Examples
34.10.1.5 pandas.MultiIndex.from_tuples
Examples
34.10.1.6 pandas.MultiIndex.from_product
Examples
34.10.1.7 pandas.MultiIndex.set_levels
Examples
34.10.1.8 pandas.MultiIndex.set_labels
Examples
34.10.1.9 pandas.MultiIndex.to_hierarchical
MultiIndex.to_hierarchical(n_repeat, n_shuffle=1)
Return a MultiIndex reshaped to conform to the shapes given by n_repeat and n_shuffle.
Useful to replicate and rearrange a MultiIndex for combination with another Index with n_repeat items.
Parameters n_repeat : int
Number of times to repeat the labels on self
n_shuffle : int
Controls the reordering of the labels. If the result is going to be an inner level in
a MultiIndex, n_shuffle will need to be greater than one. The size of each label
must divisible by n_shuffle.
Returns
MultiIndex
Examples
34.10.1.10 pandas.MultiIndex.to_frame
MultiIndex.to_frame(index=True)
Create a DataFrame with the levels of the MultiIndex as columns.
New in version 0.20.0.
Parameters index : boolean, default True
Set the index of the returned DataFrame as the original MultiIndex.
Returns
DataFrame [a DataFrame containing the original MultiIndex data.]
34.10.1.11 pandas.MultiIndex.is_lexsorted
MultiIndex.is_lexsorted()
Return True if the labels are lexicographically sorted
34.10.1.12 pandas.MultiIndex.sortlevel
34.10.1.13 pandas.MultiIndex.droplevel
MultiIndex.droplevel(level=0)
Return Index with requested level removed. If MultiIndex has only 2 levels, the result will be of Index
type not MultiIndex.
Parameters
level [int/level name or list thereof]
Returns
index [Index or MultiIndex]
Notes
34.10.1.14 pandas.MultiIndex.swaplevel
MultiIndex.swaplevel(i=-2, j=-1)
Swap level i with level j.
Calling this method does not change the ordering of the values.
Parameters i : int, str, default -2
First level of index to be swapped. Can pass level name as string. Type of param-
eters can be mixed.
j : int, str, default -1
Second level of index to be swapped. Can pass level name as string. Type of
parameters can be mixed.
Returns MultiIndex
A new MultiIndex
Changed in version 0.18.1: The indexes i and j are now optional, and default to the two innermost levels of the index.
See also:
Examples
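A minimal sketch (index values assumed for illustration; output shown approximately):
>>> mi = pd.MultiIndex.from_arrays([list('aabb'), list('xyxy')])
>>> mi.swaplevel()
MultiIndex(levels=[['x', 'y'], ['a', 'b']],
           labels=[[0, 1, 0, 1], [0, 0, 1, 1]])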
34.10.1.15 pandas.MultiIndex.reorder_levels
MultiIndex.reorder_levels(order)
Rearrange levels using input order. May not drop or duplicate levels
34.10.1.16 pandas.MultiIndex.remove_unused_levels
MultiIndex.remove_unused_levels()
Create a new MultiIndex from the current one, removing unused levels, meaning levels that are not expressed in the labels.
The resulting MultiIndex will have the same outward appearance, meaning the same .values and ordering.
It will also be .equals() to the original.
New in version 0.20.0.
Returns
MultiIndex
Examples
>>> i = pd.MultiIndex.from_product([range(2), list('ab')])
>>> i
MultiIndex(levels=[[0, 1], ['a', 'b']],
           labels=[[0, 0, 1, 1], [0, 1, 0, 1]])
>>> i[2:]
MultiIndex(levels=[[0, 1], ['a', 'b']],
           labels=[[1, 1], [0, 1]])
The 0 from the first level is not represented and can be removed
>>> i[2:].remove_unused_levels()
MultiIndex(levels=[[1], ['a', 'b']],
           labels=[[0, 0], [0, 1]])
34.10.2 pandas.IndexSlice
Examples
34.10.4.1 pandas.MultiIndex.levels
MultiIndex.levels
34.10.4.2 pandas.MultiIndex.labels
MultiIndex.labels
34.10.5.1 pandas.MultiIndex.unique
MultiIndex.unique(level=None)
Return unique values in the index. Uniques are returned in order of appearance; this does not sort.
Parameters level : int or str, optional, default None
Only return values from specified level (for MultiIndex)
New in version 0.23.0.
Returns
Index without duplicates
See also:
unique, Series.unique
34.10.6.1 pandas.MultiIndex.get_loc
MultiIndex.get_loc(key, method=None)
Get location for a label or a tuple of labels as an integer, slice or boolean mask.
Parameters
key [label or tuple of labels (one for each level)]
method [None]
Returns loc : int, slice object or boolean mask
If the key is past the lexsort depth, the return may be a boolean mask array, otherwise
it is always a slice or int.
See also:
MultiIndex.slice_locs Get slice location given start label(s) and end label(s).
MultiIndex.get_locs Get location for a label/slice/list/mask or a sequence of such.
Notes
The key cannot be a slice, list of same-level labels, a boolean mask, or a sequence of such. If you want to use
those, use MultiIndex.get_locs() instead.
Examples
>>> mi = pd.MultiIndex.from_arrays([list('abb'), list('def')])
>>> mi.get_loc('b')
slice(1, 3, None)
34.10.6.2 pandas.MultiIndex.get_indexer
Integers from 0 to n - 1 indicating that the index at these positions matches the
corresponding target values. Missing values in the target are marked by -1.
Examples
34.10.6.3 pandas.MultiIndex.get_level_values
MultiIndex.get_level_values(level)
Return vector of label values for requested level, equal to the length of the index.
Parameters level : int or str
level is either the integer position of the level in the MultiIndex, or the name of
the level.
Returns values : Index
values is a level of this MultiIndex converted to a single Index (or subclass
thereof).
Examples
Create a MultiIndex:
>>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def')))
>>> mi.names = ['level_1', 'level_2']
Get level values by supplying level as either integer or name:
>>> mi.get_level_values(0)
Index(['a', 'b', 'c'], dtype='object', name='level_1')
>>> mi.get_level_values('level_2')
Index(['d', 'e', 'f'], dtype='object', name='level_2')
34.11 DatetimeIndex
34.11.1 pandas.DatetimeIndex
class pandas.DatetimeIndex
Immutable ndarray of datetime64 data, represented internally as int64, and which can be boxed to Timestamp
objects that are subclasses of datetime and carry metadata such as frequency information.
Parameters data : array-like (1-dimensional), optional
tz [pytz.timezone or dateutil.tz.tzfile]
Notes
To learn more about the frequency strings, please see this link.
Attributes
34.11.1.1 pandas.DatetimeIndex.year
DatetimeIndex.year
The year of the datetime
34.11.1.2 pandas.DatetimeIndex.month
DatetimeIndex.month
The month as January=1, December=12
34.11.1.3 pandas.DatetimeIndex.day
DatetimeIndex.day
The days of the datetime
34.11.1.4 pandas.DatetimeIndex.hour
DatetimeIndex.hour
The hours of the datetime
34.11.1.5 pandas.DatetimeIndex.minute
DatetimeIndex.minute
The minutes of the datetime
34.11.1.6 pandas.DatetimeIndex.second
DatetimeIndex.second
The seconds of the datetime
34.11.1.7 pandas.DatetimeIndex.microsecond
DatetimeIndex.microsecond
The microseconds of the datetime
34.11.1.8 pandas.DatetimeIndex.nanosecond
DatetimeIndex.nanosecond
The nanoseconds of the datetime
34.11.1.9 pandas.DatetimeIndex.date
DatetimeIndex.date
Returns numpy array of python datetime.date objects (namely, the date part of Timestamps without time-
zone information).
34.11.1.10 pandas.DatetimeIndex.time
DatetimeIndex.time
Returns numpy array of datetime.time. The time part of the Timestamps.
34.11.1.11 pandas.DatetimeIndex.dayofyear
DatetimeIndex.dayofyear
The ordinal day of the year
34.11.1.12 pandas.DatetimeIndex.weekofyear
DatetimeIndex.weekofyear
The week ordinal of the year
34.11.1.13 pandas.DatetimeIndex.week
DatetimeIndex.week
The week ordinal of the year
34.11.1.14 pandas.DatetimeIndex.dayofweek
DatetimeIndex.dayofweek
The day of the week with Monday=0, Sunday=6
34.11.1.15 pandas.DatetimeIndex.weekday
DatetimeIndex.weekday
The day of the week with Monday=0, Sunday=6
34.11.1.16 pandas.DatetimeIndex.quarter
DatetimeIndex.quarter
The quarter of the date
34.11.1.17 pandas.DatetimeIndex.freq
DatetimeIndex.freq
Return the frequency object if it is set, otherwise None
34.11.1.18 pandas.DatetimeIndex.freqstr
DatetimeIndex.freqstr
Return the frequency object as a string if it is set, otherwise None
34.11.1.19 pandas.DatetimeIndex.is_month_start
DatetimeIndex.is_month_start
Logical indicating if first day of month (defined by frequency)
34.11.1.20 pandas.DatetimeIndex.is_month_end
DatetimeIndex.is_month_end
Indicator for whether the date is the last day of the month.
Returns Series or array
For Series, returns a Series with boolean values. For DatetimeIndex, returns a
boolean array.
See also:
is_month_start Indicator for whether the date is the first day of the month.
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
34.11.1.21 pandas.DatetimeIndex.is_quarter_start
DatetimeIndex.is_quarter_start
Indicator for whether the date is the first day of a quarter.
Returns is_quarter_start : Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
>>> idx = pd.date_range('2017-03-30', periods=4)
>>> idx.is_quarter_start
array([False, False, True, False])
34.11.1.22 pandas.DatetimeIndex.is_quarter_end
DatetimeIndex.is_quarter_end
Indicator for whether the date is the last day of a quarter.
Returns is_quarter_end : Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
>>> idx = pd.date_range('2017-03-30', periods=4)
>>> idx.is_quarter_end
array([False, True, False, False])
34.11.1.23 pandas.DatetimeIndex.is_year_start
DatetimeIndex.is_year_start
Indicate whether the date is the first day of a year.
Returns Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
>>> dates = pd.Series(pd.date_range('2017-12-30', periods=3))
>>> dates.dt.is_year_start
0 False
1 False
2 True
dtype: bool
>>> idx = pd.date_range('2017-12-30', periods=3)
>>> idx.is_year_start
array([False, False, True])
34.11.1.24 pandas.DatetimeIndex.is_year_end
DatetimeIndex.is_year_end
Indicate whether the date is the last day of the year.
Returns Series or DatetimeIndex
The same type as the original data with boolean values. Series will have the same
name and index. DatetimeIndex will have the same name.
See also:
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
>>> dates = pd.Series(pd.date_range('2017-12-30', periods=3))
>>> dates.dt.is_year_end
0 False
1 True
2 False
dtype: bool
>>> idx = pd.date_range('2017-12-30', periods=3)
>>> idx.is_year_end
array([False, True, False])
34.11.1.25 pandas.DatetimeIndex.is_leap_year
DatetimeIndex.is_leap_year
Boolean indicator if the date belongs to a leap year.
A leap year is a year which has 366 days (instead of 365), including the 29th of February as an intercalary day. Leap years are years which are multiples of four, with the exception of years divisible by 100 but not by 400.
Returns Series or ndarray
Booleans indicating if dates belong to a leap year.
Examples
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
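A minimal sketch (dates assumed for illustration; boolean-array formatting may vary by numpy version):
>>> idx = pd.date_range('2012-01-01', '2015-01-01', freq='A')
>>> idx.is_leap_year
array([ True, False, False])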
34.11.1.26 pandas.DatetimeIndex.inferred_freq
DatetimeIndex.inferred_freq
Tries to return a string representing a frequency guess, generated by infer_freq. Returns None if it can’t
autodetect the frequency.
tz
Methods
34.11.1.27 pandas.DatetimeIndex.normalize
DatetimeIndex.normalize()
Convert times to midnight.
The time component of the date-time is converted to midnight, i.e. 00:00:00. This is useful in cases when the time does not matter. Length is unaltered. The timezones are unaffected.
This method is available on Series with datetime values under the .dt accessor, and directly on Date-
timeIndex.
Returns DatetimeIndex or Series
The same type as the original data. Series will have the same name and index.
DatetimeIndex will have the same name.
See also:
Examples
34.11.1.28 pandas.DatetimeIndex.strftime
DatetimeIndex.strftime(date_format)
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which supports the same string format as
the python standard library. Details of the string format can be found in python string format doc
Parameters date_format : str
Date format string (e.g. “%Y-%m-%d”).
Returns Index
Index of formatted strings
See also:
Examples
34.11.1.29 pandas.DatetimeIndex.snap
DatetimeIndex.snap(freq=’S’)
Snap time stamps to nearest occurring frequency
34.11.1.30 pandas.DatetimeIndex.tz_convert
DatetimeIndex.tz_convert(tz)
Convert tz-aware DatetimeIndex from one time zone to another.
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to this time
zone of the DatetimeIndex. A tz of None will convert to UTC and remove the
timezone information.
Returns
normalized [DatetimeIndex]
Raises TypeError
If DatetimeIndex is tz-naive.
See also:
Examples
With the tz parameter, we can change the DatetimeIndex to other time zones:
>>> dti = pd.date_range(start='2014-08-01 09:00', freq='H',
...                     periods=3, tz='Europe/Berlin')
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
               '2014-08-01 10:00:00+02:00',
               '2014-08-01 11:00:00+02:00'],
              dtype='datetime64[ns, Europe/Berlin]', freq='H')
>>> dti.tz_convert('US/Central')
DatetimeIndex(['2014-08-01 02:00:00-05:00',
'2014-08-01 03:00:00-05:00',
'2014-08-01 04:00:00-05:00'],
dtype='datetime64[ns, US/Central]', freq='H')
With tz=None, we can remove the timezone (after converting to UTC if necessary):
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
'2014-08-01 10:00:00+02:00',
'2014-08-01 11:00:00+02:00'],
dtype='datetime64[ns, Europe/Berlin]', freq='H')
>>> dti.tz_convert(None)
DatetimeIndex(['2014-08-01 07:00:00',
'2014-08-01 08:00:00',
'2014-08-01 09:00:00'],
dtype='datetime64[ns]', freq='H')
34.11.1.31 pandas.DatetimeIndex.tz_localize
• ‘coerce’ will return NaT if the timestamp can not be converted to the specified
time zone
New in version 0.19.0.
Returns DatetimeIndex
Index converted to the specified time zone.
Raises TypeError
If the DatetimeIndex is tz-aware and tz is not None.
See also:
Examples
With tz=None, we can remove the time zone information while keeping the local time (not converted to UTC):
>>> tz_aware = pd.date_range(start='2018-03-01 09:00', periods=3,
...                          freq='D', tz='CET')
>>> tz_aware.tz_localize(None)
DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00',
               '2018-03-03 09:00:00'],
              dtype='datetime64[ns]', freq='D')
34.11.1.32 pandas.DatetimeIndex.round
Examples
>>> rng = pd.date_range('2018-01-01 11:59:00', periods=3, freq='min')
>>> pd.Series(rng).dt.round("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
34.11.1.33 pandas.DatetimeIndex.floor
DatetimeIndex.floor(freq)
floor the data to the specified freq.
Parameters freq : str or Offset
The frequency level to floor the index to. Must be a fixed frequency like ‘S’
(second) not ‘ME’ (month end). See frequency aliases for a list of possible freq
values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with
the same index for a Series.
Raises
ValueError if the ‘freq‘ cannot be converted.
Examples
>>> rng = pd.date_range('2018-01-01 11:59:00', periods=3, freq='min')
>>> pd.Series(rng).dt.floor("H")
0 2018-01-01 11:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
34.11.1.34 pandas.DatetimeIndex.ceil
DatetimeIndex.ceil(freq)
ceil the data to the specified freq.
Parameters freq : str or Offset
The frequency level to ceil the index to. Must be a fixed frequency like ‘S’ (sec-
ond) not ‘ME’ (month end). See frequency aliases for a list of possible freq
values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with
the same index for a Series.
Raises
ValueError if the ‘freq‘ cannot be converted.
Examples
>>> rng = pd.date_range('2018-01-01 11:59:00', periods=3, freq='min')
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
34.11.1.35 pandas.DatetimeIndex.to_period
DatetimeIndex.to_period(freq=None)
Cast to PeriodIndex at a particular frequency.
Converts DatetimeIndex to PeriodIndex.
Parameters freq : string or Offset, optional
One of pandas’ offset strings or an Offset object. Will be inferred by default.
Returns
PeriodIndex
Raises ValueError
When converting a DatetimeIndex with non-regular values, so that a frequency
cannot be inferred.
See also:
Examples
34.11.1.36 pandas.DatetimeIndex.to_perioddelta
DatetimeIndex.to_perioddelta(freq)
Calculate TimedeltaIndex of difference between index values and index converted to PeriodIndex at specified freq. Used for vectorized offsets.
Parameters
freq: Period frequency
Returns
y: TimedeltaIndex
34.11.1.37 pandas.DatetimeIndex.to_pydatetime
DatetimeIndex.to_pydatetime()
Return DatetimeIndex as object ndarray of datetime.datetime objects
Returns
datetimes [ndarray]
34.11.1.38 pandas.DatetimeIndex.to_series
34.11.1.39 pandas.DatetimeIndex.to_frame
DatetimeIndex.to_frame(index=True)
Create a DataFrame with a column containing the Index.
New in version 0.21.0.
Parameters index : boolean, default True
Set the index of the returned DataFrame as the original Index.
Returns DataFrame
DataFrame containing the original Index data.
See also:
Examples
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
>>> idx.to_frame(index=False)
  animal
0    Ant
1   Bear
2    Cow
34.11.1.40 pandas.DatetimeIndex.month_name
DatetimeIndex.month_name(locale=None)
Return the month names of the DateTimeIndex with specified locale.
Parameters locale : string, default None (English locale)
locale determining the language in which to return the month name
Returns month_names : Index
Index of month names
New in version 0.23.0.
34.11.1.41 pandas.DatetimeIndex.day_name
DatetimeIndex.day_name(locale=None)
Return the day names of the DateTimeIndex with specified locale.
Parameters locale : string, default None (English locale)
locale determining the language in which to return the day name
Returns day_names : Index
Index of day names
New in version 0.23.0.
34.11.2.1 pandas.DatetimeIndex.tz
DatetimeIndex.tz
34.11.3 Selecting
34.11.3.1 pandas.DatetimeIndex.indexer_at_time
DatetimeIndex.indexer_at_time(time, asof=False)
Returns index locations of index values at particular time of day (e.g. 9:30AM).
Parameters time : datetime.time or string
datetime.time or string in appropriate format (“%H:%M”, “%H%M”, “%I:%M%p”,
“%I%M%p”, “%H:%M:%S”, “%H%M%S”, “%I:%M:%S%p”, “%I%M%S%p”).
Returns
34.11.3.2 pandas.DatetimeIndex.indexer_between_time
Returns
values_between_time [array of integers]
See also:
indexer_at_time, DataFrame.between_time
34.11.5 Conversion
34.12 TimedeltaIndex
34.12.1 pandas.TimedeltaIndex
class pandas.TimedeltaIndex
Immutable ndarray of timedelta64 data, represented internally as int64, and which can be boxed to timedelta
objects
Parameters data : array-like (1-dimensional), optional
Optional timedelta-like data to construct index with
unit : str, optional
Denotes the unit of the arg (D, h, m, s, ms, us, ns) when the input is an integer/float number
copy : bool
Make a copy of input ndarray
start : starting value, timedelta-like, optional
If data is None, start is used as the start point in generating regular timedelta data.
periods : int, optional, > 0
Number of periods to generate, if generating index. Takes precedence over end
argument
end : end time, timedelta-like, optional
If periods is none, generated index will extend to first conforming time on or just
past end argument
closed : string or None, default None
Make the interval closed with respect to the given frequency to the ‘left’, ‘right’, or
both sides (None)
name : object
Name to be stored in the index
See also:
Notes
To learn more about the frequency strings, please see this link.
Attributes
34.12.1.1 pandas.TimedeltaIndex.days
TimedeltaIndex.days
Number of days for each element.
34.12.1.2 pandas.TimedeltaIndex.seconds
TimedeltaIndex.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
34.12.1.3 pandas.TimedeltaIndex.microseconds
TimedeltaIndex.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
34.12.1.4 pandas.TimedeltaIndex.nanoseconds
TimedeltaIndex.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
34.12.1.5 pandas.TimedeltaIndex.components
TimedeltaIndex.components
Return a dataframe of the components (days, hours, minutes, seconds, milliseconds, microseconds,
nanoseconds) of the Timedeltas.
Returns
a DataFrame
34.12.1.6 pandas.TimedeltaIndex.inferred_freq
TimedeltaIndex.inferred_freq
Tries to return a string representing a frequency guess, generated by infer_freq. Returns None if it can’t
autodetect the frequency.
Methods
34.12.1.7 pandas.TimedeltaIndex.to_pytimedelta
TimedeltaIndex.to_pytimedelta()
Return TimedeltaIndex as object ndarray of datetime.timedelta objects
Returns
datetimes [ndarray]
34.12.1.8 pandas.TimedeltaIndex.to_series
TimedeltaIndex.to_series(index=None, name=None)
Create a Series with both index and values equal to the index keys, useful with map for returning an indexer based on an index
Parameters index : Index, optional
index of resulting Series. If None, defaults to original index
name : string, optional
name of resulting Series. If None, defaults to name of original index
Returns
Series [dtype will be based on the type of the Index values.]
34.12.1.9 pandas.TimedeltaIndex.round
Examples
>>> rng = pd.date_range('2018-01-01 11:59:00', periods=3, freq='min')
>>> pd.Series(rng).dt.round("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
34.12.1.10 pandas.TimedeltaIndex.floor
TimedeltaIndex.floor(freq)
floor the data to the specified freq.
Parameters freq : str or Offset
The frequency level to floor the index to. Must be a fixed frequency like ‘S’
(second) not ‘ME’ (month end). See frequency aliases for a list of possible freq
values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with
the same index for a Series.
Raises
Examples
>>> rng = pd.date_range('2018-01-01 11:59:00', periods=3, freq='min')
>>> pd.Series(rng).dt.floor("H")
0 2018-01-01 11:00:00
1 2018-01-01 12:00:00
2 2018-01-01 12:00:00
dtype: datetime64[ns]
34.12.1.11 pandas.TimedeltaIndex.ceil
TimedeltaIndex.ceil(freq)
ceil the data to the specified freq.
Parameters freq : str or Offset
The frequency level to ceil the index to. Must be a fixed frequency like ‘S’ (sec-
ond) not ‘ME’ (month end). See frequency aliases for a list of possible freq
values.
Returns DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with
the same index for a Series.
Raises
ValueError if the ‘freq‘ cannot be converted.
Examples
>>> rng = pd.date_range('2018-01-01 11:59:00', periods=3, freq='min')
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
34.12.1.12 pandas.TimedeltaIndex.to_frame
TimedeltaIndex.to_frame(index=True)
Create a DataFrame with a column containing the Index.
New in version 0.21.0.
Parameters index : boolean, default True
Set the index of the returned DataFrame as the original Index.
Returns DataFrame
DataFrame containing the original Index data.
See also:
Examples
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
>>> idx.to_frame(index=False)
  animal
0    Ant
1   Bear
2    Cow
34.12.2 Components
34.12.3 Conversion
34.13 PeriodIndex
34.13.1 pandas.PeriodIndex
class pandas.PeriodIndex
Immutable ndarray holding ordinal values indicating regular periods in time such as particular years, quarters,
months, etc.
Index keys are boxed to Period objects which carries the metadata (eg, frequency information).
Parameters data : array-like (1-dimensional), optional
Optional period-like data to construct index with
copy : bool
Make a copy of input ndarray
freq : string or period object, optional
One of pandas period strings or corresponding objects
start : starting value, period-like, optional
If data is None, used as the start point in generating regular period data.
See also:
Examples
Attributes
34.13.1.1 pandas.PeriodIndex.day
PeriodIndex.day
The days of the period
34.13.1.2 pandas.PeriodIndex.dayofweek
PeriodIndex.dayofweek
The day of the week with Monday=0, Sunday=6
34.13.1.3 pandas.PeriodIndex.dayofyear
PeriodIndex.dayofyear
The ordinal day of the year
34.13.1.4 pandas.PeriodIndex.days_in_month
PeriodIndex.days_in_month
The number of days in the month
34.13.1.5 pandas.PeriodIndex.daysinmonth
PeriodIndex.daysinmonth
The number of days in the month
34.13.1.6 pandas.PeriodIndex.freq
PeriodIndex.freq
Return the frequency object if it is set, otherwise None
34.13.1.7 pandas.PeriodIndex.freqstr
PeriodIndex.freqstr
Return the frequency object as a string if it is set, otherwise None
34.13.1.8 pandas.PeriodIndex.hour
PeriodIndex.hour
The hour of the period
34.13.1.9 pandas.PeriodIndex.is_leap_year
PeriodIndex.is_leap_year
Logical indicating if the date belongs to a leap year
34.13.1.10 pandas.PeriodIndex.minute
PeriodIndex.minute
The minute of the period
34.13.1.11 pandas.PeriodIndex.month
PeriodIndex.month
The month as January=1, December=12
34.13.1.12 pandas.PeriodIndex.quarter
PeriodIndex.quarter
The quarter of the date
34.13.1.13 pandas.PeriodIndex.second
PeriodIndex.second
The second of the period
34.13.1.14 pandas.PeriodIndex.week
PeriodIndex.week
The week ordinal of the year
34.13.1.15 pandas.PeriodIndex.weekday
PeriodIndex.weekday
The day of the week with Monday=0, Sunday=6
34.13.1.16 pandas.PeriodIndex.weekofyear
PeriodIndex.weekofyear
The week ordinal of the year
34.13.1.17 pandas.PeriodIndex.year
PeriodIndex.year
The year of the period
end_time
qyear
start_time
Methods
34.13.1.18 pandas.PeriodIndex.asfreq
PeriodIndex.asfreq(freq=None, how=’E’)
Convert the PeriodIndex to the specified frequency freq.
Parameters freq : str
a frequency
how : str {‘E’, ‘S’}
'E', 'END', or 'FINISH' for end, 'S', 'START', or 'BEGIN' for start. Whether the elements should be aligned to the end or start within a period. January 31st ('END') vs. January 1st ('START'), for example.
Returns
new [PeriodIndex with the new frequency]
Examples
>>> pidx = pd.period_range('2010-01-01', '2015-01-01', freq='A')
>>> pidx.asfreq('M')
<class 'pandas.core.indexes.period.PeriodIndex'>
[2010-12, ..., 2015-12]
Length: 6, Freq: M
34.13.1.19 pandas.PeriodIndex.strftime
PeriodIndex.strftime(date_format)
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which supports the same string format as
the python standard library. Details of the string format can be found in python string format doc
Parameters date_format : str
Date format string (e.g. “%Y-%m-%d”).
Returns Index
Index of formatted strings
See also:
Examples
34.13.1.20 pandas.PeriodIndex.to_timestamp
PeriodIndex.to_timestamp(freq=None, how=’start’)
Cast to DatetimeIndex
Parameters freq : string or DateOffset, optional
Target frequency. The default is ‘D’ for week or longer, ‘S’ otherwise
Returns
DatetimeIndex
34.13.1.21 pandas.PeriodIndex.tz_convert
PeriodIndex.tz_convert(tz)
Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil)
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to time zone
of the TimeSeries. None will remove timezone holding UTC time.
Returns
normalized [DatetimeIndex]
Notes
34.13.1.22 pandas.PeriodIndex.tz_localize
PeriodIndex.tz_localize(tz, ambiguous=’raise’)
Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil), or remove timezone from tz-
aware DatetimeIndex
Parameters tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to time zone
of the TimeSeries. None will remove timezone holding local time.
Returns
localized [DatetimeIndex]
Notes
34.13.2 Attributes
34.13.2.1 pandas.PeriodIndex.end_time
PeriodIndex.end_time
34.13.2.2 pandas.PeriodIndex.qyear
PeriodIndex.qyear
34.13.2.3 pandas.PeriodIndex.start_time
PeriodIndex.start_time
34.13.3 Methods
34.14 Scalars
34.14.1 Period
34.14.1.1 pandas.Period
class pandas.Period
Represents a period of time
Parameters value : Period or compat.string_types, default None
The time period represented (e.g., ‘4Q2005’)
freq : str, default None
Attributes
pandas.Period.day
Period.day
Get day of the month that a Period falls on.
Returns
int
See also:
Examples
pandas.Period.dayofweek
Period.dayofweek
Return the day of the week.
This attribute returns the day of the week on which the particular date for the given period occurs depend-
ing on the frequency with Monday=0, Sunday=6.
Returns Int
Range from 0 to 6 (included).
See also:
Examples
pandas.Period.dayofyear
Period.dayofyear
Return the day of the year.
This attribute returns the day of the year on which the particular date occurs. The return value ranges from 1 to 365 for regular years and from 1 to 366 for leap years.
Returns int
The day of year.
See also:
Examples
pandas.Period.days_in_month
Period.days_in_month
Get the total number of days in the month that this period falls on.
Returns
int
See also:
Examples
>>> p = pd.Period('2018-2-17')
>>> p.days_in_month
28
>>> pd.Period('2018-03-01').days_in_month
31
>>> p = pd.Period('2016-2-17')
>>> p.days_in_month
29
pandas.Period.daysinmonth
Period.daysinmonth
Get the total number of days of the month that the Period falls in.
Returns
int
See also:
Examples
pandas.Period.hour
Period.hour
Get the hour of the day component of the Period.
Returns int
The hour as an integer, between 0 and 23.
See also:
Examples
pandas.Period.minute
Period.minute
Get minute of the hour component of the Period.
Returns int
The minute as an integer, between 0 and 59.
See also:
Examples
pandas.Period.second
Period.second
Get the second component of the Period.
Returns int
The second of the Period (ranges from 0 to 59).
See also:
Examples
pandas.Period.start_time
Period.start_time
Get the Timestamp for the start of the period.
Returns
Timestamp
See also:
Examples
>>> period = pd.Period('2012-1-1', freq='D')
>>> period.start_time
Timestamp('2012-01-01 00:00:00')
>>> period.end_time
Timestamp('2012-01-01 23:59:59.999999999')
pandas.Period.week
Period.week
Get the week of the year on the given Period.
Returns
int
See also:
Examples
end_time
freq
freqstr
is_leap_year
month
ordinal
quarter
qyear
weekday
weekofyear
year
Methods
pandas.Period.asfreq
Period.asfreq()
Convert Period to desired frequency, either at the start or end of the interval
Parameters
freq [string]
how : {‘E’, ‘S’, ‘end’, ‘start’}, default ‘end’
Start or end of the timespan
Returns
resampled [Period]
pandas.Period.strftime
Period.strftime()
Returns the string representation of the Period, depending on the selected fmt. fmt must be a string containing one or several directives. The method recognizes the same directives as the time.strftime() function of the standard Python distribution, as well as the specific additional directives %f, %F, %q. (formatting & docs originally from scikits.timeseries)
Notes
1. The %f directive is the same as %y if the frequency is not quarterly. Otherwise, it corresponds to the
‘fiscal’ year, as defined by the qyear attribute.
2. The %F directive is the same as %Y if the frequency is not quarterly. Otherwise, it corresponds to the
‘fiscal’ year, as defined by the qyear attribute.
3. The %p directive only affects the output hour field if the %I directive is used to parse the hour.
4. The range really is 0 to 61; this accounts for leap seconds and the (very rare) double leap seconds.
5. The %U and %W directives are only used in calculations when the day of the week and the year are
specified.
Examples
pandas.Period.to_timestamp
Period.to_timestamp()
Return the Timestamp representation of the Period at the target frequency at the specified end (how) of
the Period
Parameters freq : string or DateOffset
Target frequency. Default is ‘D’ if self.freq is week or longer and ‘S’ otherwise
how: str, default ‘S’ (start)
‘S’, ‘E’. Can be aliased as case insensitive ‘Start’, ‘Finish’, ‘Begin’, ‘End’
Returns
Timestamp
now
34.14.2 Attributes
34.14.2.1 pandas.Period.end_time
Period.end_time
34.14.2.2 pandas.Period.freq
Period.freq
34.14.2.3 pandas.Period.freqstr
Period.freqstr
34.14.2.4 pandas.Period.is_leap_year
Period.is_leap_year
34.14.2.5 pandas.Period.month
Period.month
34.14.2.6 pandas.Period.ordinal
Period.ordinal
34.14.2.7 pandas.Period.quarter
Period.quarter
34.14.2.8 pandas.Period.qyear
Period.qyear
34.14.2.9 pandas.Period.weekday
Period.weekday
34.14.2.10 pandas.Period.weekofyear
Period.weekofyear
34.14.2.11 pandas.Period.year
Period.year
34.14.3 Methods
34.14.3.1 pandas.Period.now
Period.now()
34.14.4 Timestamp
34.14.4.1 pandas.Timestamp
class pandas.Timestamp
Pandas replacement for datetime.datetime
Timestamp is the pandas equivalent of Python's datetime and is interchangeable with it in most cases. It's the type used for the entries that make up a DatetimeIndex, and other timeseries oriented data structures in pandas.
Parameters ts_input : datetime-like, str, int, float
Value to be converted to Timestamp
freq : str, DateOffset
Offset which Timestamp will have
tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time which Timestamp will have.
unit : str
Unit used for conversion if ts_input is of type int or float. The valid values are ‘D’,
‘h’, ‘m’, ‘s’, ‘ms’, ‘us’, and ‘ns’. For example, ‘s’ means seconds and ‘ms’ means
milliseconds.
year, month, day : int
New in version 0.19.0.
hour, minute, second, microsecond : int, optional, default 0
New in version 0.19.0.
nanosecond : int, optional, default 0
Notes
There are essentially three calling conventions for the constructor. The primary form accepts four parameters.
They can be passed by position or keyword.
The other two forms mimic the parameters from datetime.datetime. They can be passed by either posi-
tion or keyword, but not both mixed together.
Examples
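A minimal sketch of the three calling conventions (values assumed for illustration):
>>> pd.Timestamp('2017-01-01T12')
Timestamp('2017-01-01 12:00:00')
>>> pd.Timestamp(2017, 1, 1, 12)
Timestamp('2017-01-01 12:00:00')
>>> pd.Timestamp(year=2017, month=1, day=1, hour=12)
Timestamp('2017-01-01 12:00:00')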
Attributes
pandas.Timestamp.tz
Timestamp.tz
Alias for tzinfo
pandas.Timestamp.weekday_name
Timestamp.weekday_name
Deprecated since version 0.23.0: Use Timestamp.day_name() instead
asm8
day
dayofweek
dayofyear
days_in_month
daysinmonth
fold
freq
freqstr
hour
is_leap_year
is_month_end
is_month_start
is_quarter_end
is_quarter_start
is_year_end
is_year_start
microsecond
minute
month
nanosecond
quarter
second
tzinfo
value
week
weekofyear
year
Methods
pandas.Timestamp.astimezone
Timestamp.astimezone
Convert tz-aware Timestamp to another time zone.
Parameters tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time which Timestamp will be converted to. None will remove
timezone holding UTC time.
Returns
converted [Timestamp]
Raises TypeError
If Timestamp is tz-naive.
pandas.Timestamp.ceil
Timestamp.ceil
return a new Timestamp ceiled to this resolution
Parameters
freq [a freq string indicating the ceiling resolution]
pandas.Timestamp.combine
pandas.Timestamp.ctime
Timestamp.ctime()
Return ctime() style string.
pandas.Timestamp.date
Timestamp.date()
Return date object with same year, month and day.
pandas.Timestamp.day_name
Timestamp.day_name
Return the day name of the Timestamp with specified locale.
Parameters locale : string, default None (English locale)
locale determining the language in which to return the day name
Returns
day_name [string]
New in version 0.23.0.
pandas.Timestamp.dst
Timestamp.dst()
Return self.tzinfo.dst(self).
pandas.Timestamp.floor
Timestamp.floor
return a new Timestamp floored to this resolution
Parameters
freq [a freq string indicating the flooring resolution]
pandas.Timestamp.fromordinal
pandas.Timestamp.fromtimestamp
classmethod Timestamp.fromtimestamp(ts)
timestamp[, tz] -> tz’s local time from POSIX timestamp.
pandas.Timestamp.isocalendar
Timestamp.isocalendar()
Return a 3-tuple containing ISO year, week number, and weekday.
pandas.Timestamp.isoweekday
Timestamp.isoweekday()
Return the day of the week represented by the date. Monday == 1 . . . Sunday == 7
pandas.Timestamp.month_name
Timestamp.month_name
Return the month name of the Timestamp with specified locale.
Parameters locale : string, default None (English locale)
locale determining the language in which to return the month name
Returns
month_name [string]
New in version 0.23.0.
pandas.Timestamp.normalize
Timestamp.normalize
Normalize Timestamp to midnight, preserving tz information.
pandas.Timestamp.now
classmethod Timestamp.now(tz=None)
Returns new Timestamp object representing current time local to tz.
Parameters tz : str or timezone object, default None
Timezone to localize to
pandas.Timestamp.replace
Timestamp.replace
implements datetime.replace, handles nanoseconds
Parameters
year [int, optional]
month [int, optional]
day [int, optional]
hour [int, optional]
minute [int, optional]
second [int, optional]
microsecond [int, optional]
nanosecond: int, optional
tzinfo [tz-convertible, optional]
fold : int, optional, default is 0
added in 3.6, NotImplemented
Returns
Timestamp with fields replaced
pandas.Timestamp.round
Timestamp.round
Round the Timestamp to the specified resolution
Parameters
freq [a freq string indicating the rounding resolution]
Returns
a new Timestamp rounded to the given resolution of ‘freq‘
Raises
ValueError if the freq cannot be converted
pandas.Timestamp.strftime
Timestamp.strftime()
format -> strftime() style string.
pandas.Timestamp.strptime
Timestamp.strptime()
string, format -> new datetime parsed from a string (like time.strptime()).
pandas.Timestamp.time
Timestamp.time()
Return time object with same time but with tzinfo=None.
pandas.Timestamp.timestamp
Timestamp.timestamp()
Return POSIX timestamp as float.
pandas.Timestamp.timetuple
Timestamp.timetuple()
Return time tuple, compatible with time.localtime().
pandas.Timestamp.timetz
Timestamp.timetz()
Return time object with same time and tzinfo.
pandas.Timestamp.to_datetime64
Timestamp.to_datetime64()
Returns a numpy.datetime64 object with ‘ns’ precision
pandas.Timestamp.to_julian_date
Timestamp.to_julian_date
Convert Timestamp to a Julian Date. 0 Julian date is noon January 1, 4713 BC.
pandas.Timestamp.to_period
Timestamp.to_period
Return a period of which this timestamp is an observation.
pandas.Timestamp.to_pydatetime
Timestamp.to_pydatetime()
Convert a Timestamp object to a native Python datetime object.
If warn=True, issue a warning if nanoseconds is nonzero.
pandas.Timestamp.today
pandas.Timestamp.toordinal
Timestamp.toordinal()
Return proleptic Gregorian ordinal. January 1 of year 1 is day 1.
pandas.Timestamp.tz_convert
Timestamp.tz_convert
Convert tz-aware Timestamp to another time zone.
Parameters tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time which Timestamp will be converted to. None will remove
timezone holding UTC time.
Returns
converted [Timestamp]
Raises TypeError
If Timestamp is tz-naive.
pandas.Timestamp.tz_localize
Timestamp.tz_localize
Convert naive Timestamp to local time zone, or remove timezone from tz-aware Timestamp.
Parameters tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time which Timestamp will be converted to. None will remove
timezone holding local time.
ambiguous : bool, ‘NaT’, default ‘raise’
• bool contains flags to determine if time is dst or not (note that this flag is only appli-
cable for ambiguous fall dst dates)
• ‘NaT’ will return NaT for an ambiguous time
pandas.Timestamp.tzname
Timestamp.tzname()
Return self.tzinfo.tzname(self).
pandas.Timestamp.utcfromtimestamp
classmethod Timestamp.utcfromtimestamp(ts)
Construct a naive UTC datetime from a POSIX timestamp.
pandas.Timestamp.utcnow
classmethod Timestamp.utcnow()
Return a new Timestamp representing UTC day and time.
pandas.Timestamp.utcoffset
Timestamp.utcoffset()
Return self.tzinfo.utcoffset(self).
pandas.Timestamp.utctimetuple
Timestamp.utctimetuple()
Return UTC time tuple, compatible with time.localtime().
pandas.Timestamp.weekday
Timestamp.weekday()
Return the day of the week represented by the date. Monday == 0 . . . Sunday == 6
isoformat
34.14.5 Properties
Timestamp.asm8
Timestamp.day
Timestamp.dayofweek
Timestamp.dayofyear
Timestamp.days_in_month
Timestamp.daysinmonth
Timestamp.fold
Timestamp.hour
Timestamp.is_leap_year
Timestamp.is_month_end
Timestamp.is_month_start
Timestamp.is_quarter_end
Timestamp.is_quarter_start
Timestamp.is_year_end
Timestamp.is_year_start
Timestamp.max
Timestamp.microsecond
Timestamp.min
Timestamp.minute
Timestamp.month
Timestamp.nanosecond
Timestamp.quarter
Timestamp.resolution
Timestamp.second
Timestamp.tz Alias for tzinfo
Timestamp.tzinfo
Timestamp.value
Timestamp.week
Timestamp.weekofyear
Timestamp.year
34.14.5.1 pandas.Timestamp.asm8
Timestamp.asm8
34.14.5.2 pandas.Timestamp.day
Timestamp.day
34.14.5.3 pandas.Timestamp.dayofweek
Timestamp.dayofweek
34.14.5.4 pandas.Timestamp.dayofyear
Timestamp.dayofyear
34.14.5.5 pandas.Timestamp.days_in_month
Timestamp.days_in_month
34.14.5.6 pandas.Timestamp.daysinmonth
Timestamp.daysinmonth
34.14.5.7 pandas.Timestamp.fold
Timestamp.fold
34.14.5.8 pandas.Timestamp.hour
Timestamp.hour
34.14.5.9 pandas.Timestamp.is_leap_year
Timestamp.is_leap_year
34.14.5.10 pandas.Timestamp.is_month_end
Timestamp.is_month_end
34.14.5.11 pandas.Timestamp.is_month_start
Timestamp.is_month_start
34.14.5.12 pandas.Timestamp.is_quarter_end
Timestamp.is_quarter_end
34.14.5.13 pandas.Timestamp.is_quarter_start
Timestamp.is_quarter_start
34.14.5.14 pandas.Timestamp.is_year_end
Timestamp.is_year_end
34.14.5.15 pandas.Timestamp.is_year_start
Timestamp.is_year_start
34.14.5.16 pandas.Timestamp.max
34.14.5.17 pandas.Timestamp.microsecond
Timestamp.microsecond
34.14.5.18 pandas.Timestamp.min
34.14.5.19 pandas.Timestamp.minute
Timestamp.minute
34.14.5.20 pandas.Timestamp.month
Timestamp.month
34.14.5.21 pandas.Timestamp.nanosecond
Timestamp.nanosecond
34.14.5.22 pandas.Timestamp.quarter
Timestamp.quarter
34.14.5.23 pandas.Timestamp.resolution
Timestamp.resolution = datetime.timedelta(0, 0, 1)
34.14.5.24 pandas.Timestamp.second
Timestamp.second
34.14.5.25 pandas.Timestamp.tzinfo
Timestamp.tzinfo
34.14.5.26 pandas.Timestamp.value
Timestamp.value
34.14.5.27 pandas.Timestamp.week
Timestamp.week
34.14.5.28 pandas.Timestamp.weekofyear
Timestamp.weekofyear
34.14.5.29 pandas.Timestamp.year
Timestamp.year
34.14.6 Methods
34.14.6.1 pandas.Timestamp.freq
Timestamp.freq
34.14.6.2 pandas.Timestamp.freqstr
Timestamp.freqstr
34.14.6.3 pandas.Timestamp.isoformat
Timestamp.isoformat
34.14.7 Interval
34.14.7.1 pandas.Interval
class pandas.Interval
Immutable object implementing an Interval, a bounded slice-like interval.
New in version 0.20.0.
Parameters left : orderable scalar
Left bound for the interval.
right : orderable scalar
Right bound for the interval.
See also:
IntervalIndex An Index of Interval objects that are all closed on the same side.
cut Convert continuous data into discrete bins (Categorical of Interval objects).
qcut Convert continuous data into bins (Categorical of Interval objects) based on quantiles.
Period Represents a period of time.
Notes
The parameters left and right must be from the same type, you must be able to compare them and they must
satisfy left <= right.
A closed interval (in mathematics denoted by square brackets) contains its endpoints, i.e. the closed interval
[0, 5] is characterized by the conditions 0 <= x <= 5. This is what closed='both' stands for. An
open interval (in mathematics denoted by parentheses) does not contain its endpoints, i.e. the open interval (0,
5) is characterized by the conditions 0 < x < 5. This is what closed='neither' stands for. Intervals
can also be half-open or half-closed, i.e. [0, 5) is described by 0 <= x < 5 (closed='left') and (0,
5] is described by 0 < x <= 5 (closed='right').
Examples
>>> iv = pd.Interval(left=0, right=5)
>>> iv
Interval(0, 5, closed='right')
>>> 2.5 in iv
True
>>> 0 in iv
False
>>> 5 in iv
True
>>> 0.0001 in iv
True
>>> iv.length
5
You can operate with + and * over an Interval and the operation is applied to each of its bounds, so the result
depends on the type of the bound elements
>>> shifted_iv = iv + 3
>>> shifted_iv
Interval(3, 8, closed='right')
>>> extended_iv = iv * 10.0
>>> extended_iv
Interval(0.0, 50.0, closed='right')
Attributes
pandas.Interval.closed
Interval.closed
Whether the interval is closed on the left-side, right-side, both or neither
pandas.Interval.closed_left
Interval.closed_left
Check if the interval is closed on the left side.
pandas.Interval.closed_right
Interval.closed_right
Check if the interval is closed on the right side.
For the meaning of closed and open see Interval.
Returns bool
True if the Interval is closed on the right side, else False.
pandas.Interval.left
Interval.left
Left bound for the interval
pandas.Interval.length
Interval.length
Return the length of the Interval
pandas.Interval.mid
Interval.mid
Return the midpoint of the Interval
pandas.Interval.open_left
Interval.open_left
Check if the interval is open on the left side.
For the meaning of closed and open see Interval.
Returns bool
True if the Interval is not closed on the left side, else False.
pandas.Interval.open_right
Interval.open_right
Check if the interval is open on the right side.
For the meaning of closed and open see Interval.
Returns bool
True if the Interval is not closed on the right side, else False.
pandas.Interval.right
Interval.right
Right bound for the interval
34.14.8 Properties
34.14.9 Timedelta
34.14.9.1 pandas.Timedelta
class pandas.Timedelta
Represents a duration, the difference between two dates or times.
Timedelta is the pandas equivalent of python’s datetime.timedelta and is interchangeable with it in most
cases.
Parameters
value [Timedelta, timedelta, np.timedelta64, string, or integer]
unit : string, {‘ns’, ‘us’, ‘ms’, ‘s’, ‘m’, ‘h’, ‘D’}, optional
Denote the unit of the input, if input is an integer. Default ‘ns’.
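As a small illustration of construction from a string and from an integer with a unit (values chosen for this sketch; reprs assume default display settings):
>>> pd.Timedelta('1 days 2 hours')
Timedelta('1 days 02:00:00')
>>> pd.Timedelta(500, unit='ms')
Timedelta('0 days 00:00:00.500000')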
Notes
Attributes
pandas.Timedelta.asm8
Timedelta.asm8
Return a numpy timedelta64 view of this Timedelta.
pandas.Timedelta.components
Timedelta.components
Return the components of the Timedelta as a namedtuple-like Components object.
pandas.Timedelta.days
Timedelta.days
Number of days.
pandas.Timedelta.delta
Timedelta.delta
Return the timedelta in nanoseconds (ns), for internal compatibility.
Returns int
Timedelta in nanoseconds.
Examples
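A minimal sketch (values chosen for illustration):
>>> td = pd.Timedelta('1 days 42 ns')
>>> td.delta
86400000000042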
pandas.Timedelta.microseconds
Timedelta.microseconds
Number of microseconds (>= 0 and less than 1 second).
pandas.Timedelta.nanoseconds
Timedelta.nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Returns int
Number of nanoseconds.
See also:
Timedelta.components Return all attributes with assigned values (i.e. days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds).
Examples
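A minimal sketch showing the nanosecond component of a parsed Timedelta:
>>> td = pd.Timedelta('1 days 2 min 3 us 42 ns')
>>> td.nanoseconds
42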
pandas.Timedelta.resolution
Timedelta.resolution
Return a string representing the lowest timedelta resolution.
pandas.Timedelta.seconds
Timedelta.seconds
Number of seconds (>= 0 and less than 1 day).
freq
is_populated
value
Methods
pandas.Timedelta.ceil
Timedelta.ceil
Return a new Timedelta ceiled to this resolution.
Parameters
freq [a freq string indicating the ceiling resolution]
pandas.Timedelta.floor
Timedelta.floor
Return a new Timedelta floored to this resolution.
Parameters
freq [a freq string indicating the flooring resolution]
pandas.Timedelta.isoformat
Timedelta.isoformat()
Format Timedelta as ISO 8601 Duration like P[n]Y[n]M[n]DT[n]H[n]M[n]S, where the [n] s are
replaced by the values. See https://en.wikipedia.org/wiki/ISO_8601#Durations
New in version 0.20.0.
Returns
formatted [str]
See also:
Timestamp.isoformat
Notes
The longest component is days, whose value may be larger than 365. Every component is always included,
even if its value is 0. pandas uses nanosecond precision, so up to 9 decimal places may be included in the
seconds component. Trailing 0's are removed from the seconds component after the decimal. Components
are not zero-padded, so the output is ...T5H..., not ...T05H...
Examples
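A sketch of the output format (values chosen for illustration):
>>> td = pd.Timedelta(days=6, minutes=50, seconds=3,
...                   milliseconds=10, microseconds=10, nanoseconds=12)
>>> td.isoformat()
'P6DT0H50M3.010010012S'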
pandas.Timedelta.round
Timedelta.round
Round the Timedelta to the specified resolution
Parameters
freq [a freq string indicating the rounding resolution]
Returns
a new Timedelta rounded to the given resolution of freq
Raises
ValueError if the freq cannot be converted
pandas.Timedelta.to_pytimedelta
Timedelta.to_pytimedelta()
Return an actual datetime.timedelta object. Note that nanosecond resolution, if any, is lost in the conversion.
pandas.Timedelta.to_timedelta64
Timedelta.to_timedelta64()
Returns a numpy.timedelta64 object with ‘ns’ precision
pandas.Timedelta.total_seconds
Timedelta.total_seconds()
Total duration of timedelta in seconds (to ns precision)
pandas.Timedelta.view
Timedelta.view()
Return an array view of the Timedelta (NumPy compatibility).
34.14.10 Properties
34.14.10.1 pandas.Timedelta.freq
Timedelta.freq
34.14.10.2 pandas.Timedelta.is_populated
Timedelta.is_populated
34.14.10.3 pandas.Timedelta.max
34.14.10.4 pandas.Timedelta.min
34.14.10.5 pandas.Timedelta.value
Timedelta.value
34.14.11 Methods
34.15 Frequencies
34.15.1 pandas.tseries.frequencies.to_offset
pandas.tseries.frequencies.to_offset(freq)
Return DateOffset object from string or tuple representation or datetime.timedelta object
Parameters
freq [str, tuple, datetime.timedelta, DateOffset or None]
Returns delta : DateOffset
None if freq is None
Raises ValueError
If freq is an invalid frequency
See also:
pandas.DateOffset
Examples
>>> to_offset('5min')
<5 * Minutes>
>>> to_offset('1D1H')
<25 * Hours>
>>> to_offset(datetime.timedelta(days=1))
<Day>
>>> to_offset(Hour())
<Hour>
34.16 Window
34.16.1.1 pandas.core.window.Rolling.count
Rolling.count()
The rolling count of any non-NaN observations inside the window.
Returns Series or DataFrame
Returned object type is determined by the caller of the rolling calculation.
See also:
Examples
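As a small illustration (values chosen for this sketch; the partial first window still produces a count):
>>> s = pd.Series([2, 3, np.nan, 10])
>>> s.rolling(2).count()
0    1.0
1    2.0
2    1.0
3    1.0
dtype: float64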
34.16.1.2 pandas.core.window.Rolling.sum
Rolling.sum(*args, **kwargs)
Calculate rolling sum of given DataFrame or Series.
Parameters *args, **kwargs
For compatibility with other rolling methods. Has no effect on the computed value.
Returns Series or DataFrame
Same type as the input, with the same index, containing the rolling sum.
See also:
Examples
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(3).sum()
0     NaN
1     NaN
2     6.0
3     9.0
4    12.0
dtype: float64
>>> s.expanding(3).sum()
0 NaN
1 NaN
2 6.0
3 10.0
4 15.0
dtype: float64
>>> df = pd.DataFrame({'A': s, 'B': s ** 2})
>>> df.rolling(3).sum()
A B
0 NaN NaN
1 NaN NaN
2 6.0 14.0
3 9.0 29.0
4 12.0 50.0
34.16.1.3 pandas.core.window.Rolling.mean
Rolling.mean(*args, **kwargs)
Calculate the rolling mean of the values.
Parameters *args
Under Review.
**kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the rolling calculation.
See also:
Examples
The examples below show rolling mean calculations with window sizes of two and three, respectively.
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(2).mean()
0    NaN
1    1.5
2    2.5
3    3.5
dtype: float64
>>> s.rolling(3).mean()
0    NaN
1    NaN
2    2.0
3    3.0
dtype: float64
34.16.1.4 pandas.core.window.Rolling.median
Rolling.median(**kwargs)
Calculate the rolling median.
Parameters **kwargs
For compatibility with other rolling methods. Has no effect on the computed median.
Returns Series or DataFrame
Returned type is the same as the original object.
See also:
Examples
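A minimal sketch of a rolling median over a window of size 3:
>>> s = pd.Series([0, 1, 2, 3, 4])
>>> s.rolling(3).median()
0    NaN
1    NaN
2    1.0
3    2.0
4    3.0
dtype: float64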
34.16.1.5 pandas.core.window.Rolling.var
Notes
The default ddof of 1 used in Series.var() is different than the default ddof of 0 in numpy.var().
A minimum of 1 period is required for the rolling calculation.
Examples
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.expanding(3).var()
0 NaN
1 NaN
2 0.333333
3 0.916667
4 0.800000
5 0.700000
6 0.619048
dtype: float64
34.16.1.6 pandas.core.window.Rolling.std
Notes
The default ddof of 1 used in Series.std is different than the default ddof of 0 in numpy.std.
A minimum of one period is required for the rolling calculation.
Examples
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.expanding(3).std()
0 NaN
1 NaN
2 0.577350
3 0.957427
4 0.894427
5 0.836660
6 0.786796
dtype: float64
34.16.1.7 pandas.core.window.Rolling.min
Rolling.min(*args, **kwargs)
Calculate the rolling minimum.
Parameters **kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the rolling calculation.
See also:
Examples
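A minimal sketch performing a rolling minimum with a window size of 3:
>>> s = pd.Series([4, 3, 5, 2, 6])
>>> s.rolling(3).min()
0    NaN
1    NaN
2    3.0
3    2.0
4    2.0
dtype: float64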
34.16.1.8 pandas.core.window.Rolling.max
Rolling.max(*args, **kwargs)
Calculate the rolling maximum.
Returns
same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.16.1.9 pandas.core.window.Rolling.corr
34.16.1.10 pandas.core.window.Rolling.cov
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.16.1.11 pandas.core.window.Rolling.skew
Rolling.skew(**kwargs)
Calculate the unbiased rolling skewness.
Returns
same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.16.1.12 pandas.core.window.Rolling.kurt
Rolling.kurt(**kwargs)
Calculate unbiased rolling kurtosis.
This function uses Fisher’s definition of kurtosis without bias.
Parameters **kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the rolling calculation
See also:
Notes
Examples
The example below shows a rolling kurtosis calculation with a window size of four, matching the equivalent
function call using scipy.stats.
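A minimal sketch (values chosen for illustration; equally spaced values have an unbiased excess kurtosis of -1.2 for any 4-point window, which scipy.stats.kurtosis(..., bias=False) reproduces):
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(4).kurt()
0    NaN
1    NaN
2    NaN
3   -1.2
4   -1.2
dtype: float64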
34.16.1.13 pandas.core.window.Rolling.apply
Returns
same type as input
See also:
pandas.Series.rolling, pandas.DataFrame.rolling
34.16.1.14 pandas.core.window.Rolling.aggregate
Notes
Examples
>>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
>>> df.rolling(3).sum()  # random data, so the values below are illustrative
A B C
0 NaN NaN NaN
1 NaN NaN NaN
2 -2.655105 0.637799 -2.135068
3 -0.971785 -0.600366 -3.280224
4 -0.214334 -1.294599 -3.227500
5 1.514216 2.028250 -2.989060
6 1.074618 5.709767 -2.322600
7 2.718061 3.850718 0.256446
8 -0.289082 2.454418 1.416871
9 0.212668 0.403198 -0.093924
34.16.1.15 pandas.core.window.Rolling.quantile
pandas.Series.quantile Computes value at the given quantile over all data in Series.
pandas.DataFrame.quantile Computes values at the given quantile over requested axis in DataFrame.
Examples
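A minimal sketch of a rolling median (the 0.5 quantile) over a window of size 2, using the default linear interpolation:
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(2).quantile(.5)
0    NaN
1    1.5
2    2.5
3    3.5
dtype: float64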
34.16.1.16 pandas.core.window.Window.mean
Window.mean(*args, **kwargs)
Calculate the window mean of the values.
Parameters *args
Under Review.
**kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the window calculation.
See also:
Examples
The examples below show rolling mean calculations with window sizes of two and three, respectively.
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(2).mean()
0 NaN
1 1.5
2 2.5
3 3.5
dtype: float64
>>> s.rolling(3).mean()
0 NaN
1 NaN
2 2.0
3 3.0
dtype: float64
34.16.1.17 pandas.core.window.Window.sum
Window.sum(*args, **kwargs)
Calculate window sum of given DataFrame or Series.
Parameters *args, **kwargs
For compatibility with other window methods. Has no effect on the computed value.
Returns Series or DataFrame
Same type as the input, with the same index, containing the window sum.
See also:
Examples
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.expanding(3).sum()
0 NaN
1 NaN
2 6.0
3 10.0
4 15.0
dtype: float64
>>> df.rolling(3).sum()
A B
0 NaN NaN
1 NaN NaN
2 6.0 14.0
3 9.0 29.0
4 12.0 50.0
34.16.2.1 pandas.core.window.Expanding.count
Expanding.count(**kwargs)
The expanding count of any non-NaN observations inside the window.
Returns Series or DataFrame
Returned object type is determined by the caller of the expanding calculation.
See also:
Examples
34.16.2.2 pandas.core.window.Expanding.sum
Expanding.sum(*args, **kwargs)
Calculate expanding sum of given DataFrame or Series.
Parameters *args, **kwargs
For compatibility with other expanding methods. Has no effect on the computed
value.
Returns Series or DataFrame
Same type as the input, with the same index, containing the expanding sum.
See also:
Examples
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(3).sum()
0     NaN
1     NaN
2     6.0
3     9.0
4    12.0
dtype: float64
>>> s.expanding(3).sum()
0 NaN
1 NaN
2 6.0
3 10.0
4 15.0
dtype: float64
>>> df.rolling(3).sum()
A B
0 NaN NaN
1 NaN NaN
2 6.0 14.0
3 9.0 29.0
4 12.0 50.0
34.16.2.3 pandas.core.window.Expanding.mean
Expanding.mean(*args, **kwargs)
Calculate the expanding mean of the values.
Parameters *args
Under Review.
**kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the expanding calculation.
See also:
Examples
The example below shows a rolling mean calculation with a window size of three.
>>> s = pd.Series([1, 2, 3, 4])
>>> s.rolling(3).mean()
0 NaN
1 NaN
2 2.0
3 3.0
dtype: float64
34.16.2.4 pandas.core.window.Expanding.median
Expanding.median(**kwargs)
Calculate the expanding median.
Parameters **kwargs
For compatibility with other expanding methods. Has no effect on the computed
median.
Returns Series or DataFrame
Returned type is the same as the original object.
See also:
Examples
34.16.2.5 pandas.core.window.Expanding.var
Notes
The default ddof of 1 used in Series.var() is different than the default ddof of 0 in numpy.var().
A minimum of 1 period is required for the rolling calculation.
Examples
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.expanding(3).var()
0 NaN
1 NaN
2 0.333333
3 0.916667
4 0.800000
5 0.700000
6 0.619048
dtype: float64
34.16.2.6 pandas.core.window.Expanding.std
Notes
The default ddof of 1 used in Series.std is different than the default ddof of 0 in numpy.std.
A minimum of one period is required for the rolling calculation.
Examples
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.expanding(3).std()
0 NaN
1 NaN
2 0.577350
3 0.957427
4 0.894427
5 0.836660
6 0.786796
dtype: float64
34.16.2.7 pandas.core.window.Expanding.min
Expanding.min(*args, **kwargs)
Calculate the expanding minimum.
Parameters **kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the expanding calculation.
See also:
Examples
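A minimal sketch of an expanding minimum with a minimum window size of 3:
>>> s = pd.Series([4, 3, 5, 2, 6])
>>> s.expanding(3).min()
0    NaN
1    NaN
2    3.0
3    2.0
4    2.0
dtype: float64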
34.16.2.8 pandas.core.window.Expanding.max
Expanding.max(*args, **kwargs)
Calculate the expanding maximum.
Returns
same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.16.2.9 pandas.core.window.Expanding.corr
34.16.2.10 pandas.core.window.Expanding.cov
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.16.2.11 pandas.core.window.Expanding.skew
Expanding.skew(**kwargs)
Calculate the unbiased expanding skewness.
Returns
same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.16.2.12 pandas.core.window.Expanding.kurt
Expanding.kurt(**kwargs)
Calculate unbiased expanding kurtosis.
This function uses Fisher’s definition of kurtosis without bias.
Parameters **kwargs
Under Review.
Returns Series or DataFrame
Returned object type is determined by the caller of the expanding calculation
See also:
Notes
Examples
The example below shows an expanding kurtosis calculation with a minimum window size of four, matching the
equivalent function call using scipy.stats.
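A minimal sketch (values chosen for illustration; the equally spaced values yield an unbiased excess kurtosis of -1.2 for both the 4- and 5-point windows):
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.expanding(4).kurt()
0    NaN
1    NaN
2    NaN
3   -1.2
4   -1.2
dtype: float64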
34.16.2.13 pandas.core.window.Expanding.apply
Returns
same type as input
See also:
pandas.Series.expanding, pandas.DataFrame.expanding
34.16.2.14 pandas.core.window.Expanding.aggregate
Notes
Examples
>>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
>>> df.ewm(alpha=0.5).mean()  # random data, so the values below are illustrative
A B C
0 -2.385977 -0.102758 0.438822
1 -1.464856 0.569633 -0.490089
2 -0.207700 0.149687 -1.135379
3 -0.471677 -0.645305 -0.906555
4 -0.355635 -0.203033 -0.904111
5 1.076417 1.503943 -1.146293
6 -0.041654 1.925562 -0.588728
7 0.680292 0.132049 0.548693
34.16.2.15 pandas.core.window.Expanding.quantile
pandas.Series.quantile Computes value at the given quantile over all data in Series.
pandas.DataFrame.quantile Computes values at the given quantile over requested axis in DataFrame.
Examples
34.16.3.1 pandas.core.window.EWM.mean
EWM.mean(*args, **kwargs)
Calculate the exponentially weighted moving average.
Returns
same type as input
See also:
pandas.Series.ewm, pandas.DataFrame.ewm
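A small illustration with alpha=0.5 and the default adjust=True, where the weights over the history are (1 - alpha)**i (values chosen for this sketch):
>>> s = pd.Series([1, 2, 3, 4])
>>> s.ewm(alpha=0.5).mean()
0    1.000000
1    1.666667
2    2.428571
3    3.266667
dtype: float64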
34.16.3.2 pandas.core.window.EWM.std
34.16.3.3 pandas.core.window.EWM.var
34.16.3.4 pandas.core.window.EWM.corr
34.16.3.5 pandas.core.window.EWM.cov
34.17 GroupBy
34.17.1.1 pandas.core.groupby.GroupBy.__iter__
GroupBy.__iter__()
Groupby iterator
Returns
Generator yielding sequence of (name, subsetted object)
for each group
34.17.1.2 pandas.core.groupby.GroupBy.groups
GroupBy.groups
dict {group name -> group labels}
34.17.1.3 pandas.core.groupby.GroupBy.indices
GroupBy.indices
dict {group name -> group indices}
34.17.1.4 pandas.core.groupby.GroupBy.get_group
GroupBy.get_group(name, obj=None)
Constructs NDFrame from group with provided name
Parameters name : object
the name of the group to get as a DataFrame
obj : NDFrame, default None
the NDFrame to take the DataFrame out of. If it is None, the object groupby was
called on will be used
Returns
group [type of obj]
Grouper([key, level, freq, axis, sort]) A Grouper allows the user to specify a groupby instruction for a target object
34.17.1.5 pandas.Grouper
This specification will select a column via the key parameter, or if the level and/or axis parameters are given, a
level of the index of the target object.
These are local specifications and will override ‘global’ settings, that is the parameters axis and level which are
passed to the groupby itself.
Parameters key : string, defaults to None
groupby key, which selects the grouping column of the target
level : name/number, defaults to None
the level for the target index
freq : string / frequency object, defaults to None
This will groupby the specified frequency if the target selection (via key or level) is a
datetime-like object. For full specification of available frequencies, please see here.
base, loffset
Returns
A specification for a groupby instruction
Examples
Syntactic sugar for df.groupby('A'):
>>> df.groupby(Grouper(key='A'))
Specify a resample operation on the level ‘date’ on the columns axis with a frequency of 60s
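A sketch of such a specification (assuming the columns axis carries a datetime level named 'date'):
>>> df.groupby(Grouper(level='date', freq='60s', axis=1))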
Attributes
ax
groups
GroupBy.apply(func, *args, **kwargs) Apply function func group-wise and combine the results together.
GroupBy.aggregate(func, *args, **kwargs)
GroupBy.transform(func, *args, **kwargs)
GroupBy.pipe(func, *args, **kwargs) Apply a function func with arguments to this GroupBy object and return the function’s result.
34.17.2.1 pandas.core.groupby.GroupBy.apply
pipe Apply function to the full GroupBy object instead of to each group.
aggregate, transform
Notes
In the current implementation apply calls func twice on the first group to decide whether it can take a fast or
slow code path. This can lead to unexpected behavior if func has side-effects, as they will take effect twice for
the first group.
Examples
From the df defined in the sketch below we can see that g has two groups, a and b. Calling apply in various
ways, we can get different grouping results:
Example 1: below the function passed to apply takes a dataframe as its argument and returns a dataframe.
apply combines the result for each group together into a new dataframe:
Example 2: The function passed to apply takes a dataframe as its argument and returns a series. apply
combines the result for each group together into a new dataframe:
Example 3: The function passed to apply takes a dataframe as its argument and returns a scalar. apply
combines the result for each group together into a series, including setting the index as appropriate:
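A sketch of Example 1, assuming a small two-group frame df and g = df.groupby('A'):
>>> df = pd.DataFrame({'A': 'a a b'.split(),
...                    'B': [1, 2, 3],
...                    'C': [4, 6, 5]})
>>> g = df.groupby('A')
>>> g.apply(lambda x: x / x.sum())
          B    C
0  0.333333  0.4
1  0.666667  0.6
2  1.000000  1.0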
34.17.2.2 pandas.core.groupby.GroupBy.aggregate
34.17.2.3 pandas.core.groupby.GroupBy.transform
34.17.2.4 pandas.core.groupby.GroupBy.pipe
>>> (df.groupby('group')
... .pipe(f)
... .pipe(g, arg1=a)
... .pipe(h, arg2=b, arg3=c))
Notes
Examples
To get the difference between each group's maximum and minimum value in one pass, you can do:
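A sketch with a small two-group frame (values chosen for illustration):
>>> df = pd.DataFrame({'A': 'a b a b'.split(), 'B': [1, 2, 3, 4]})
>>> df.groupby('A').pipe(lambda grp: grp.max() - grp.min())
   B
A
a  2
b  2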
GroupBy.all([skipna]) Returns True if all values in the group are truthful, else False
GroupBy.any([skipna]) Returns True if any value in the group is truthful, else False
GroupBy.bfill([limit]) Backward fill the values
GroupBy.count() Compute count of group, excluding missing values
GroupBy.cumcount([ascending]) Number each item in each group from 0 to the length of that group - 1.
GroupBy.ffill([limit]) Forward fill the values
GroupBy.first(**kwargs) Compute first of group values
GroupBy.head([n]) Returns first n rows of each group.
GroupBy.last(**kwargs) Compute last of group values
GroupBy.max(**kwargs) Compute max of group values
GroupBy.mean(*args, **kwargs) Compute mean of groups, excluding missing values
GroupBy.median(**kwargs) Compute median of groups, excluding missing values
GroupBy.min(**kwargs) Compute min of group values
GroupBy.ngroup([ascending]) Number each group from 0 to the number of groups - 1.
GroupBy.nth(n[, dropna]) Take the nth row from each group if n is an int, or a subset of rows if n is a list of ints.
GroupBy.ohlc() Compute open, high, low and close values of each group, excluding missing values; for multiple groupings, the result index will be a MultiIndex
GroupBy.prod(**kwargs) Compute prod of group values
GroupBy.rank([method, ascending, na_option, . . . ]) Provides the rank of values within each group.
GroupBy.pct_change([periods, fill_method, . . . ]) Calculate pct_change of each value to previous entry in group
GroupBy.size() Compute group sizes
GroupBy.sem([ddof]) Compute standard error of the mean of groups, excluding missing values
GroupBy.std([ddof]) Compute standard deviation of groups, excluding missing values
GroupBy.sum(**kwargs) Compute sum of group values
GroupBy.var([ddof]) Compute variance of groups, excluding missing values
GroupBy.tail([n]) Returns last n rows of each group
34.17.3.1 pandas.core.groupby.GroupBy.all
GroupBy.all(skipna=True)
Returns True if all values in the group are truthful, else False
Parameters skipna : bool, default True
Flag to ignore nan values during truth testing
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.2 pandas.core.groupby.GroupBy.any
GroupBy.any(skipna=True)
Returns True if any value in the group is truthful, else False
34.17.3.3 pandas.core.groupby.GroupBy.bfill
GroupBy.bfill(limit=None)
Backward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.backfill, DataFrame.backfill, Series.fillna, DataFrame.fillna
34.17.3.4 pandas.core.groupby.GroupBy.count
GroupBy.count()
Compute count of group, excluding missing values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.5 pandas.core.groupby.GroupBy.cumcount
GroupBy.cumcount(ascending=True)
Number each item in each group from 0 to the length of that group - 1.
Essentially this is equivalent to
>>> self.apply(lambda x: Series(np.arange(len(x)), x.index))
See also:
Examples
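A minimal sketch (values chosen for illustration):
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
...                   columns=['A'])
>>> df.groupby('A').cumcount()
0    0
1    1
2    2
3    0
4    1
5    3
dtype: int64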
34.17.3.6 pandas.core.groupby.GroupBy.ffill
GroupBy.ffill(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.pad, DataFrame.pad, Series.fillna, DataFrame.fillna
34.17.3.7 pandas.core.groupby.GroupBy.first
GroupBy.first(**kwargs)
Compute first of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.8 pandas.core.groupby.GroupBy.head
GroupBy.head(n=5)
Returns first n rows of each group.
Essentially equivalent to .apply(lambda x: x.head(n)), except ignores as_index flag.
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
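A minimal sketch (values chosen for illustration):
>>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])
>>> df.groupby('A').head(1)
   A  B
0  1  2
2  5  6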
34.17.3.9 pandas.core.groupby.GroupBy.last
GroupBy.last(**kwargs)
Compute last of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.10 pandas.core.groupby.GroupBy.max
GroupBy.max(**kwargs)
Compute max of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.11 pandas.core.groupby.GroupBy.mean
GroupBy.mean(*args, **kwargs)
Compute mean of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.12 pandas.core.groupby.GroupBy.median
GroupBy.median(**kwargs)
Compute median of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.13 pandas.core.groupby.GroupBy.min
GroupBy.min(**kwargs)
Compute min of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.14 pandas.core.groupby.GroupBy.ngroup
GroupBy.ngroup(ascending=True)
Number each group from 0 to the number of groups - 1.
This is the enumerative complement of cumcount. Note that the numbers given to the groups match the order in
which the groups would be seen when iterating over the groupby object, not the order they are first observed.
New in version 0.20.2.
Parameters ascending : bool, default True
If False, number in reverse, from number of group - 1 to 0.
See also:
Examples
34.17.3.15 pandas.core.groupby.GroupBy.nth
GroupBy.nth(n, dropna=None)
Take the nth row from each group if n is an int, or a subset of rows if n is a list of ints.
If dropna, will take the nth non-null row, dropna is either Truthy (if a Series) or ‘all’, ‘any’ (if a DataFrame);
this is equivalent to calling dropna(how=dropna) before the groupby.
Parameters n : int or list of ints
a single nth value for the row or a list of nth values
dropna : None or str, optional
apply the specified dropna operation before counting which row is the nth row. Needs
to be None, ‘any’ or ‘all’
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
34.17.3.16 pandas.core.groupby.GroupBy.ohlc
GroupBy.ohlc()
Compute open, high, low and close values of each group, excluding missing values. For multiple groupings, the result index will be a MultiIndex
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.17 pandas.core.groupby.GroupBy.prod
GroupBy.prod(**kwargs)
Compute prod of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.18 pandas.core.groupby.GroupBy.rank
Returns
DataFrame with ranking of values within each group
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.19 pandas.core.groupby.GroupBy.pct_change
34.17.3.20 pandas.core.groupby.GroupBy.size
GroupBy.size()
Compute group sizes
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.21 pandas.core.groupby.GroupBy.sem
GroupBy.sem(ddof=1)
Compute standard error of the mean of groups, excluding missing values
For multiple groupings, the result index will be a MultiIndex
Parameters ddof : integer, default 1
degrees of freedom
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.22 pandas.core.groupby.GroupBy.std
34.17.3.23 pandas.core.groupby.GroupBy.sum
GroupBy.sum(**kwargs)
Compute sum of group values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.24 pandas.core.groupby.GroupBy.var
34.17.3.25 pandas.core.groupby.GroupBy.tail
GroupBy.tail(n=5)
Returns last n rows of each group
Essentially equivalent to .apply(lambda x: x.tail(n)), except ignores as_index flag.
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
Examples
The following methods are available in both SeriesGroupBy and DataFrameGroupBy objects, but may differ
slightly; usually the DataFrameGroupBy version permits the specification of an axis argument, and often an
argument indicating whether to restrict application to columns of a specific data type.
DataFrameGroupBy.agg(arg, *args, **kwargs) Aggregate using one or more operations over the specified axis.
DataFrameGroupBy.all([skipna]) Returns True if all values in the group are truthful, else False
DataFrameGroupBy.any([skipna]) Returns True if any value in the group is truthful, else False
DataFrameGroupBy.bfill([limit]) Backward fill the values
DataFrameGroupBy.corr Compute pairwise correlation of columns, excluding NA/null values
DataFrameGroupBy.count() Compute count of group, excluding missing values
DataFrameGroupBy.cov Compute pairwise covariance of columns, excluding NA/null values.
DataFrameGroupBy.cummax([axis]) Cumulative max for each group
DataFrameGroupBy.cummin([axis]) Cumulative min for each group
DataFrameGroupBy.cumprod([axis]) Cumulative product for each group
DataFrameGroupBy.cumsum([axis]) Cumulative sum for each group
DataFrameGroupBy.describe(**kwargs) Generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.
DataFrameGroupBy.diff First discrete difference of element.
DataFrameGroupBy.ffill([limit]) Forward fill the values
DataFrameGroupBy.fillna Fill NA/NaN values using the specified method
DataFrameGroupBy.filter(func[, dropna]) Return a copy of a DataFrame excluding elements from groups that do not satisfy the boolean criterion specified by func.
DataFrameGroupBy.hist Make a histogram of the DataFrame’s columns.
DataFrameGroupBy.idxmax Return index of first occurrence of maximum over requested axis.
DataFrameGroupBy.idxmin Return index of first occurrence of minimum over requested axis.
DataFrameGroupBy.mad Return the mean absolute deviation of the values for the requested axis
DataFrameGroupBy.pct_change([periods, . . . ]) Calculate pct_change of each value to previous entry in group
DataFrameGroupBy.plot Class implementing the .plot attribute for groupby objects
DataFrameGroupBy.quantile Return values at the given quantile over requested axis, a la numpy.percentile.
DataFrameGroupBy.rank([method, ascending, . . . ]) Provides the rank of values within each group.
DataFrameGroupBy.resample(rule, *args, **kwargs) Provide resampling when using a TimeGrouper; returns a new grouper with our resampler appended
34.17.3.26 pandas.core.groupby.DataFrameGroupBy.agg
Notes
Examples
>>> df
A B C
0 1 1 0.362838
1 1 2 0.227877
2 2 3 1.267767
3 2 4 -0.562860
>>> df.groupby('A').agg('min')
B C
A
1 1 0.227877
2 3 -0.562860
Multiple aggregations
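A sketch using the same df as above (column layout approximate):
>>> df.groupby('A').agg(['min', 'max'])
    B             C
  min max       min       max
A
1   1   2  0.227877  0.362838
2   3   4 -0.562860  1.267767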
34.17.3.27 pandas.core.groupby.DataFrameGroupBy.all
DataFrameGroupBy.all(skipna=True)
Returns True if all values in the group are truthful, else False
Parameters skipna : bool, default True
Flag to ignore nan values during truth testing
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.28 pandas.core.groupby.DataFrameGroupBy.any
DataFrameGroupBy.any(skipna=True)
Returns True if any value in the group is truthful, else False
Parameters skipna : bool, default True
Flag to ignore nan values during truth testing
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.29 pandas.core.groupby.DataFrameGroupBy.bfill
DataFrameGroupBy.bfill(limit=None)
Backward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.backfill, DataFrame.backfill, Series.fillna, DataFrame.fillna
34.17.3.30 pandas.core.groupby.DataFrameGroupBy.corr
DataFrameGroupBy.corr
Compute pairwise correlation of columns, excluding NA/null values
Parameters method : {‘pearson’, ‘kendall’, ‘spearman’}
• pearson : standard correlation coefficient
• kendall : Kendall Tau correlation coefficient
• spearman : Spearman rank correlation
min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result.
Currently only available for pearson and spearman correlation
Returns
y [DataFrame]
34.17.3.31 pandas.core.groupby.DataFrameGroupBy.count
DataFrameGroupBy.count()
Compute count of group, excluding missing values
34.17.3.32 pandas.core.groupby.DataFrameGroupBy.cov
DataFrameGroupBy.cov
Compute pairwise covariance of columns, excluding NA/null values.
Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance
matrix of the columns of the DataFrame.
Both NA and null values are automatically excluded from the calculation. (See the note below about bias
from missing values.) A threshold can be set for the minimum number of observations for each value created.
Comparisons with observations below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to understand the relationship between different
measures across time.
Parameters min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result.
Returns DataFrame
The covariance matrix of the series of the DataFrame.
See also:
Notes
Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-1.
For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned
covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable because the estimate covariance matrix
is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values
which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices
for more details.
Examples
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
... columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
a b c d e
a 0.998438 -0.020161 0.059277 -0.008943 0.014144
b -0.020161 1.059352 -0.008543 -0.024738 0.009826
c 0.059277 -0.008543 1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486 0.921297 -0.013692
e 0.014144 0.009826 -0.000271 -0.013692 0.977795
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
... columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
a b c
a 0.316741 NaN -0.150812
b NaN 1.248003 0.191417
c -0.150812 0.191417 0.895202
34.17.3.33 pandas.core.groupby.DataFrameGroupBy.cummax
DataFrameGroupBy.cummax(axis=0, **kwargs)
Cumulative max for each group
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.34 pandas.core.groupby.DataFrameGroupBy.cummin
DataFrameGroupBy.cummin(axis=0, **kwargs)
Cumulative min for each group
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.35 pandas.core.groupby.DataFrameGroupBy.cumprod
34.17.3.36 pandas.core.groupby.DataFrameGroupBy.cumsum
34.17.3.37 pandas.core.groupby.DataFrameGroupBy.describe
DataFrameGroupBy.describe(**kwargs)
Generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distri-
bution, excluding NaN values.
Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output
will vary depending on what is provided. Refer to the notes below for more detail.
Parameters percentiles : list-like of numbers, optional
The percentiles to include in the output. All should fall between 0 and 1. The default
is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles.
include : ‘all’, list-like of dtypes or None (default), optional
A white list of data types to include in the result. Ignored for Series. Here are the
options:
• ‘all’ : All columns of the input will be included in the output.
• A list-like of dtypes : Limits the results to the provided data types. To limit the
result to numeric types submit numpy.number. To limit it instead to object
columns submit the numpy.object data type. Strings can also be used in
the style of select_dtypes (e.g. df.describe(include=['O'])).
To select pandas categorical columns, use 'category'
• None (default) : The result will include all numeric columns.
exclude : list-like of dtypes or None (default), optional,
A black list of data types to omit from the result. Ignored for Series. Here are the
options:
• A list-like of dtypes : Excludes the provided data types from the result. To
exclude numeric types submit numpy.number. To exclude object columns
submit the data type numpy.object. Strings can also be used in the style of
select_dtypes (e.g. df.describe(exclude=['O'])). To exclude
pandas categorical columns, use 'category'
• None (default) : The result will exclude nothing.
Returns
summary: Series/DataFrame of summary statistics
See also:
DataFrame.count, DataFrame.max, DataFrame.min, DataFrame.mean, DataFrame.std,
DataFrame.select_dtypes
Notes
For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper
percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same
as the median.
For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq.
The top is the most common value. The freq is the most common value’s frequency. Timestamps also include
the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from
among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns.
If the dataframe consists only of object and categorical data without any numeric columns, the default is to
return an analysis of both the object and categorical columns. If include='all' is provided as an option,
the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the
output. The parameters are ignored when analyzing a Series.
Examples
Describing a DataFrame with mixed column types:
>>> df = pd.DataFrame({'categorical': pd.Categorical(['d', 'e', 'f']),
...                    'numeric': [1, 2, 3],
...                    'object': ['a', 'b', 'c']})
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 NaN 3
top f NaN c
freq 1 NaN 1
mean NaN 2.0 NaN
std NaN 1.0 NaN
min NaN 1.0 NaN
25% NaN 1.5 NaN
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
>>> df.describe(include=[np.object])
object
count 3
unique 3
top c
freq 1
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top f
freq 1
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top f c
freq 1 1
>>> df.describe(exclude=[np.object])
categorical numeric
count 3 3.0
unique 3 NaN
top f NaN
freq 1 NaN
mean NaN 2.0
std NaN 1.0
min NaN 1.0
25% NaN 1.5
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
34.17.3.38 pandas.core.groupby.DataFrameGroupBy.diff
DataFrameGroupBy.diff
First discrete difference of element.
Calculates the difference of a DataFrame element compared with another element in the DataFrame (default is
the element in the same column of the previous row).
Parameters periods : int, default 1
Periods to shift for calculating difference, accepts negative values.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Take difference over rows (0) or columns (1).
New in version 0.16.1..
Returns
diffed [DataFrame]
See also:
Examples
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
...                    'b': [1, 1, 2, 3, 5, 8],
...                    'c': [1, 4, 9, 16, 25, 36]})
>>> df.diff()
a b c
0 NaN NaN NaN
1 1.0 0.0 3.0
2 1.0 1.0 5.0
3 1.0 1.0 7.0
4 1.0 2.0 9.0
5 1.0 3.0 11.0
>>> df.diff(axis=1)
a b c
0 NaN 0.0 0.0
1 NaN -1.0 3.0
2 NaN -1.0 7.0
3 NaN -1.0 13.0
4 NaN 0.0 20.0
5 NaN 2.0 28.0
>>> df.diff(periods=3)
a b c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 3.0 2.0 15.0
4 3.0 4.0 21.0
5 3.0 6.0 27.0
>>> df.diff(periods=-1)
a b c
0 -1.0 0.0 -3.0
1 -1.0 -1.0 -5.0
2 -1.0 -1.0 -7.0
3 -1.0 -2.0 -9.0
4 -1.0 -3.0 -11.0
5 NaN NaN NaN
34.17.3.39 pandas.core.groupby.DataFrameGroupBy.ffill
DataFrameGroupBy.ffill(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
See also:
Series.pad, DataFrame.pad, Series.fillna, DataFrame.fillna
34.17.3.40 pandas.core.groupby.DataFrameGroupBy.fillna
DataFrameGroupBy.fillna
Fill NA/NaN values using the specified method
Parameters value : scalar, dict, Series, or DataFrame
Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values spec-
ifying which value to use for each index (for a Series) or column (for a DataFrame).
(values not in the dict/Series/DataFrame will not be filled). This value cannot be a
list.
method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None
Method to use for filling holes in reindexed Series. pad / ffill: propagate last valid
observation forward to next valid. backfill / bfill: use NEXT valid observation to fill
gap.
downcast : dict, default None
a dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will
try to downcast to an appropriate equal type (e.g. float64 to int64 if possible)
Returns
filled [DataFrame]
See also:
reindex, asfreq
Examples
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
...                    [3, 4, np.nan, 1],
...                    [np.nan, np.nan, np.nan, 5],
...                    [np.nan, 3, np.nan, 4]],
...                   columns=list('ABCD'))
>>> df.fillna(0)
A B C D
0 0.0 2.0 0.0 0
1 3.0 4.0 0.0 1
2 0.0 0.0 0.0 5
3 0.0 3.0 0.0 4
>>> df.fillna(method='ffill')
A B C D
0 NaN 2.0 NaN 0
1 3.0 4.0 NaN 1
2 3.0 4.0 NaN 5
3 3.0 3.0 NaN 4
Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively.
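A sketch using a dict of column -> fill value, consistent with the df above:
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
>>> df.fillna(value=values)
     A    B    C  D
0  0.0  2.0  2.0  0
1  3.0  4.0  2.0  1
2  0.0  1.0  2.0  5
3  0.0  3.0  2.0  4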
34.17.3.41 pandas.core.groupby.DataFrameGroupBy.filter
Notes
Each subframe is endowed with the attribute ‘name’ in case you need to know which group you are working on.
Examples
34.17.3.42 pandas.core.groupby.DataFrameGroupBy.hist
DataFrameGroupBy.hist
Make a histogram of the DataFrame's columns.
A histogram is a representation of the distribution of data. This function calls matplotlib.pyplot.hist()
on each series in the DataFrame, resulting in one histogram per column.
Parameters data : DataFrame
The pandas object holding the data.
Examples
This example draws a histogram based on the length and width of some animals, displayed in three bins
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
... }, index= ['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)
34.17.3.43 pandas.core.groupby.DataFrameGroupBy.idxmax
DataFrameGroupBy.idxmax
Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
Parameters axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns
idxmax [Series]
Raises ValueError
• If the row/column is empty
See also:
Series.idxmax
Notes
34.17.3.44 pandas.core.groupby.DataFrameGroupBy.idxmin
DataFrameGroupBy.idxmin
Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
Parameters axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns
idxmin [Series]
Raises ValueError
Notes
34.17.3.45 pandas.core.groupby.DataFrameGroupBy.mad
DataFrameGroupBy.mad
Return the mean absolute deviation of the values for the requested axis
Parameters
axis [{index (0), columns (1)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns
mad [Series or DataFrame (if level specified)]
34.17.3.46 pandas.core.groupby.DataFrameGroupBy.pct_change
34.17.3.47 pandas.core.groupby.DataFrameGroupBy.plot
DataFrameGroupBy.plot
Class implementing the .plot attribute for groupby objects
34.17.3.48 pandas.core.groupby.DataFrameGroupBy.quantile
DataFrameGroupBy.quantile
Return values at the given quantile over requested axis, a la numpy.percentile.
Parameters q : float or array-like, default 0.5 (50% quantile)
Examples
Specifying numeric_only=False will also compute the quantile of datetime and timedelta data.
34.17.3.49 pandas.core.groupby.DataFrameGroupBy.rank
Returns
DataFrame with ranking of values within each group
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.50 pandas.core.groupby.DataFrameGroupBy.resample
34.17.3.51 pandas.core.groupby.DataFrameGroupBy.shift
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.52 pandas.core.groupby.DataFrameGroupBy.size
DataFrameGroupBy.size()
Compute group sizes
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.17.3.53 pandas.core.groupby.DataFrameGroupBy.skew
DataFrameGroupBy.skew
Return unbiased skew over requested axis, normalized by N-1.
Parameters
axis [{index (0), columns (1)}]
skipna : boolean, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing
into a Series
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use everything,
then use only numeric data. Not implemented for Series.
Returns
skew [Series or DataFrame (if level specified)]
34.17.3.54 pandas.core.groupby.DataFrameGroupBy.take
DataFrameGroupBy.take
Return the elements in the given positional indices along an axis.
This means that we are not indexing according to actual values in the index attribute of the object. We are
indexing according to the actual position of the element in the object.
Examples
We may take elements using negative integers for positive indices, starting from the end of the object, just like
with Python lists.
34.17.3.55 pandas.core.groupby.DataFrameGroupBy.tshift
DataFrameGroupBy.tshift
Shift the time index, using the index’s frequency if available.
Parameters periods : int
Number of periods to move, can be positive or negative
freq : DateOffset, timedelta, or time rule string, default None
Increment to use from the tseries module or time rule (e.g. ‘EOM’)
axis : int or basestring
Corresponds to the axis that contains the Index
Returns
shifted [NDFrame]
Notes
If freq is not specified, tshift tries to use the freq or inferred_freq attributes of the index. If neither of those
attributes exists, a ValueError is thrown.
The following methods are available only for SeriesGroupBy objects.
34.17.3.56 pandas.core.groupby.SeriesGroupBy.nlargest
SeriesGroupBy.nlargest
Return the largest n elements.
Parameters n : int
Return this many descending sorted values
keep : {‘first’, ‘last’}, default ‘first’
Where there are duplicate values:
• first : take the first occurrence.
• last : take the last occurrence.
Returns top_n : Series
The n largest values in the Series, in sorted order
See also:
Series.nsmallest
Notes
Examples
34.17.3.57 pandas.core.groupby.SeriesGroupBy.nsmallest
SeriesGroupBy.nsmallest
Return the smallest n elements.
Parameters n : int
Return this many ascending sorted values
keep : {‘first’, ‘last’}, default ‘first’
Where there are duplicate values:
• first : take the first occurrence.
• last : take the last occurrence.
Notes
Faster than .sort_values().head(n) for small n relative to the size of the Series object.
Examples
34.17.3.58 pandas.core.groupby.SeriesGroupBy.nunique
SeriesGroupBy.nunique(dropna=True)
Returns number of unique elements in the group
34.17.3.59 pandas.core.groupby.SeriesGroupBy.unique
SeriesGroupBy.unique
Return unique values of Series object.
Uniques are returned in order of appearance. Hash table-based unique, therefore does NOT sort.
Returns ndarray or Categorical
The unique values returned as a NumPy array. In case of categorical data type,
returned as a Categorical.
See also:
Examples
>>> pd.Series(pd.Categorical(list('baabc'))).unique()
[b, a, c]
Categories (3, object): [b, a, c]
34.17.3.60 pandas.core.groupby.SeriesGroupBy.value_counts
34.17.3.61 pandas.core.groupby.SeriesGroupBy.is_monotonic_increasing
SeriesGroupBy.is_monotonic_increasing
Return boolean if values in the object are monotonic_increasing
New in version 0.19.0.
Returns
is_monotonic [boolean]
34.17.3.62 pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing
SeriesGroupBy.is_monotonic_decreasing
Return boolean if values in the object are monotonic_decreasing
New in version 0.19.0.
Returns
is_monotonic_decreasing [boolean]
The following methods are available only for DataFrameGroupBy objects.
34.17.3.63 pandas.core.groupby.DataFrameGroupBy.corrwith
DataFrameGroupBy.corrwith
Compute pairwise correlation between rows or columns of two DataFrame objects.
Parameters
other [DataFrame, Series]
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ to compute column-wise, 1 or ‘columns’ for row-wise
drop : boolean, default False
Drop missing indices from result, default returns union of all
Returns
correls [Series]
34.17.3.64 pandas.core.groupby.DataFrameGroupBy.boxplot
Examples
34.18 Resampling
34.18.1.1 pandas.core.resample.Resampler.__iter__
Resampler.__iter__()
Groupby iterator
Returns
Generator yielding sequence of (name, subsetted object)
for each group
34.18.1.2 pandas.core.resample.Resampler.groups
Resampler.groups
dict {group name -> group labels}
34.18.1.3 pandas.core.resample.Resampler.indices
Resampler.indices
dict {group name -> group indices}
34.18.1.4 pandas.core.resample.Resampler.get_group
Resampler.get_group(name, obj=None)
Constructs NDFrame from group with provided name
Parameters name : object
the name of the group to get as a DataFrame
obj : NDFrame, default None
the NDFrame to take the DataFrame out of. If it is None, the object groupby was
called on will be used
Returns
group [type of obj]
Resampler.apply(arg, *args, **kwargs) Aggregate using one or more operations over the speci-
fied axis.
Resampler.aggregate(arg, *args, **kwargs) Aggregate using one or more operations over the speci-
fied axis.
Resampler.transform(arg, *args, **kwargs) Call function producing a like-indexed Series on each
group and return a Series with the transformed values
Resampler.pipe(func, *args, **kwargs) Apply a function func with arguments to this Resam-
pler object and return the function’s result.
34.18.2.1 pandas.core.resample.Resampler.apply
**kwargs
Keyword arguments to pass to func.
Returns
aggregated [DataFrame]
See also:
pandas.DataFrame.groupby.aggregate, pandas.DataFrame.resample.transform,
pandas.DataFrame.aggregate
Notes
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],
...               index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00    1
2013-01-01 00:00:01    2
2013-01-01 00:00:02    3
2013-01-01 00:00:03    4
2013-01-01 00:00:04    5
Freq: S, dtype: int64
>>> r = s.resample('2s')
>>> r
DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
label=left, convention=start, base=0]
>>> r.agg(np.sum)
2013-01-01 00:00:00 3
2013-01-01 00:00:02 7
2013-01-01 00:00:04 5
Freq: 2S, dtype: int64
>>> r.agg(['sum','mean','max'])
sum mean max
2013-01-01 00:00:00 3 1.5 2
2013-01-01 00:00:02 7 3.5 4
2013-01-01 00:00:04 5 5.0 5
34.18.2.2 pandas.core.resample.Resampler.aggregate
Notes
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],
...               index=pd.date_range('20130101', periods=5, freq='s'))
>>> s
2013-01-01 00:00:00    1
2013-01-01 00:00:01    2
2013-01-01 00:00:02    3
2013-01-01 00:00:03    4
2013-01-01 00:00:04    5
Freq: S, dtype: int64
>>> r = s.resample('2s')
>>> r
DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
label=left, convention=start, base=0]
>>> r.agg(np.sum)
2013-01-01 00:00:00 3
2013-01-01 00:00:02 7
2013-01-01 00:00:04 5
Freq: 2S, dtype: int64
>>> r.agg(['sum','mean','max'])
sum mean max
2013-01-01 00:00:00 3 1.5 2
2013-01-01 00:00:02 7 3.5 4
2013-01-01 00:00:04 5 5.0 5
34.18.2.3 pandas.core.resample.Resampler.transform
Examples
34.18.2.4 pandas.core.resample.Resampler.pipe
>>> (df.groupby('group')
... .pipe(f)
... .pipe(g, arg1=a)
... .pipe(h, arg2=b, arg3=c))
Notes
Examples
To get the difference between each 2-day period’s maximum and minimum value in one pass, you can do
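A sketch, assuming df has a DatetimeIndex (values chosen for illustration):
>>> df = pd.DataFrame({'A': [1, 2, 3, 4]},
...                   index=pd.date_range('2018-01-01', periods=4, freq='D'))
>>> df.resample('2D').pipe(lambda x: x.max() - x.min())
            A
2018-01-01  1
2018-01-03  1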
34.18.3 Upsampling
34.18.3.1 pandas.core.resample.Resampler.ffill
Resampler.ffill(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
Returns
an upsampled Series
See also:
Series.fillna, DataFrame.fillna
34.18.3.2 pandas.core.resample.Resampler.backfill
Resampler.backfill(limit=None)
Backward fill the new missing values in the resampled data.
In statistics, imputation is the process of replacing missing data with substituted values [R30]. When resampling
data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency).
The backward fill will replace NaN values that appeared in the resampled data with the next value in the original
sequence. Missing values that existed in the original data will not be modified.
Parameters limit : integer, optional
Limit of how many values to fill.
Returns Series, DataFrame
An upsampled Series or DataFrame with backward filled NaN values.
See also:
References
[R30]
Examples
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
...               index=pd.date_range('20180101', periods=3, freq='h'))
>>> s.resample('30min').backfill()
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('15min').backfill(limit=2)
2018-01-01 00:00:00 1.0
2018-01-01 00:15:00 NaN
2018-01-01 00:30:00 2.0
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
2018-01-01 01:15:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
Freq: 15T, dtype: float64
Resampling a DataFrame that has missing values:
>>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
...                   index=pd.date_range('20180101', periods=3, freq='h'))
>>> df.resample('30min').backfill()
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5
>>> df.resample('15min').backfill(limit=2)
a b
2018-01-01 00:00:00 2.0 1.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:30:00 NaN 3.0
2018-01-01 00:45:00 NaN 3.0
2018-01-01 01:00:00 NaN 3.0
2018-01-01 01:15:00 NaN NaN
2018-01-01 01:30:00 6.0 5.0
2018-01-01 01:45:00 6.0 5.0
2018-01-01 02:00:00 6.0 5.0
34.18.3.3 pandas.core.resample.Resampler.bfill
Resampler.bfill(limit=None)
Backward fill the new missing values in the resampled data.
In statistics, imputation is the process of replacing missing data with substituted values [R31]. When resampling
data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency).
The backward fill will replace NaN values that appeared in the resampled data with the next value in the original
sequence. Missing values that existed in the original data will not be modified.
Parameters limit : integer, optional
Limit of how many values to fill.
Returns Series, DataFrame
An upsampled Series or DataFrame with backward filled NaN values.
See also:
References
[R31]
Examples
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
...               index=pd.date_range('20180101', periods=3, freq='h'))
>>> s.resample('30min').backfill()
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('15min').backfill(limit=2)
2018-01-01 00:00:00 1.0
2018-01-01 00:15:00 NaN
2018-01-01 00:30:00 2.0
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
2018-01-01 01:15:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
Freq: 15T, dtype: float64
Resampling a DataFrame that has missing values:
>>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
...                   index=pd.date_range('20180101', periods=3, freq='h'))
>>> df.resample('30min').backfill()
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5
>>> df.resample('15min').backfill(limit=2)
a b
2018-01-01 00:00:00 2.0 1.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:30:00 NaN 3.0
2018-01-01 00:45:00 NaN 3.0
2018-01-01 01:00:00 NaN 3.0
2018-01-01 01:15:00 NaN NaN
2018-01-01 01:30:00 6.0 5.0
2018-01-01 01:45:00 6.0 5.0
2018-01-01 02:00:00 6.0 5.0
34.18.3.4 pandas.core.resample.Resampler.pad
Resampler.pad(limit=None)
Forward fill the values
Parameters limit : integer, optional
limit of how many values to fill
Returns
an upsampled Series
See also:
Series.fillna, DataFrame.fillna
34.18.3.5 pandas.core.resample.Resampler.nearest
Resampler.nearest(limit=None)
Fill values with nearest neighbor starting from center
Parameters limit : integer, optional
limit of how many values to fill
New in version 0.21.0.
Returns
an upsampled Series
See also:
Series.fillna, DataFrame.fillna
34.18.3.6 pandas.core.resample.Resampler.fillna
Resampler.fillna(method, limit=None)
Fill missing values introduced by upsampling.
In statistics, imputation is the process of replacing missing data with substituted values [R32]. When resampling
data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency).
Missing values that existed in the original data will not be modified.
Parameters method : {‘pad’, ‘backfill’, ‘ffill’, ‘bfill’, ‘nearest’}
Method to use for filling holes in resampled data
• ‘pad’ or ‘ffill’: use previous valid observation to fill gap (forward fill).
• ‘backfill’ or ‘bfill’: use next valid observation to fill gap.
• ‘nearest’: use nearest valid observation to fill gap.
limit : integer, optional
Limit of how many consecutive missing values to fill.
Returns Series or DataFrame
An upsampled Series or DataFrame with missing values filled.
See also:
References
[R32]
Examples
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
2018-01-01 02:00:00 3
Freq: H, dtype: int64
>>> s.resample('30min').fillna("backfill")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('30min').fillna("pad")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 1
2018-01-01 01:00:00 2
2018-01-01 01:30:00 2
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('30min').fillna("nearest")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
Missing values present before the upsampling are not affected. Here sm is a Series with a missing value:
>>> sm = pd.Series([1, None, 3],
...                index=pd.date_range('20180101', periods=3, freq='h'))
>>> sm.resample('30min').fillna('backfill')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> sm.resample('30min').fillna('pad')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 1.0
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
>>> sm.resample('30min').fillna('nearest')
2018-01-01 00:00:00 1.0
2018-01-01 00:30:00 NaN
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
Freq: 30T, dtype: float64
DataFrame resampling is done column-wise. All the same options are available.
>>> df.resample('30min').fillna("bfill")
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5
34.18.3.7 pandas.core.resample.Resampler.asfreq
Resampler.asfreq(fill_value=None)
Return the values at the new freq, essentially a reindex.
Parameters fill_value: scalar, optional
Value to use for missing values, applied during upsampling (note this does not fill
NaNs that already were present).
New in version 0.20.0.
See also:
Series.asfreq, DataFrame.asfreq
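A minimal sketch of asfreq on an upsample, assuming the same hourly Series s as in the examples above; new timestamps simply get NaN (or fill_value) rather than being filled from neighboring observations:
>>> s.resample('30min').asfreq()
2018-01-01 00:00:00    1.0
2018-01-01 00:30:00    NaN
2018-01-01 01:00:00    2.0
2018-01-01 01:30:00    NaN
2018-01-01 02:00:00    3.0
Freq: 30T, dtype: float64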
34.18.3.8 pandas.core.resample.Resampler.interpolate
• ‘linear’: ignore the index and treat the values as equally spaced. This is the only
method supported on MultiIndexes. default
• ‘time’: interpolation works on daily and higher resolution data to interpolate
given length of interval
• ‘index’, ‘values’: use the actual numerical values of the index
• ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘barycentric’, ‘polynomial’ is passed to scipy.interpolate.interp1d. Both ‘polynomial’ and ‘spline’ require that you also specify an order (int), e.g.
New in version 0.18.1: Added support for the ‘akima’ method. Added interpolate method ‘from_derivatives’ which replaces ‘piecewise_polynomial’ in scipy 0.18; backwards-compatible with scipy < 0.18.
axis : {0, 1}, default 0
• 0: fill column-by-column
• 1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill. Must be greater than 0.
Returns
Series or DataFrame of same shape interpolated at the NaNs
See also:
reindex, replace, fillna
Examples
Filling in NaNs
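A minimal sketch of what such an example could look like, assuming the hourly Series s used in the resampling examples above; linear interpolation fills the NaNs introduced by upsampling:
>>> s.resample('30min').interpolate(method='linear')
2018-01-01 00:00:00    1.0
2018-01-01 00:30:00    1.5
2018-01-01 01:00:00    2.0
2018-01-01 01:30:00    2.5
2018-01-01 02:00:00    3.0
Freq: 30T, dtype: float64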
34.18.4.1 pandas.core.resample.Resampler.count
Resampler.count(_method=’count’)
Compute count of group, excluding missing values
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.18.4.2 pandas.core.resample.Resampler.nunique
Resampler.nunique(_method=’nunique’)
Returns number of unique elements in the group
34.18.4.3 pandas.core.resample.Resampler.first
34.18.4.4 pandas.core.resample.Resampler.last
34.18.4.5 pandas.core.resample.Resampler.max
34.18.4.6 pandas.core.resample.Resampler.mean
34.18.4.7 pandas.core.resample.Resampler.median
34.18.4.8 pandas.core.resample.Resampler.min
34.18.4.9 pandas.core.resample.Resampler.ohlc
34.18.4.10 pandas.core.resample.Resampler.prod
34.18.4.11 pandas.core.resample.Resampler.size
Resampler.size()
Compute group sizes
See also:
pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby
34.18.4.12 pandas.core.resample.Resampler.sem
34.18.4.13 pandas.core.resample.Resampler.std
34.18.4.14 pandas.core.resample.Resampler.sum
34.18.4.15 pandas.core.resample.Resampler.var
34.19 Style
Styler(data[, precision, table_styles, . . . ])    Helps style a DataFrame or Series according to the data with HTML and CSS.
Styler.from_custom_template(searchpath, name)    Factory function for creating a subclass of Styler with a custom template and Jinja environment.
34.19.1.1 pandas.io.formats.style.Styler
Notes
Most styling will be done by passing style functions into Styler.apply or Styler.applymap. Style
functions should return values with strings containing CSS 'attr: value' that will be applied to the
indicated cells.
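For instance, a minimal element-wise style function might look like the sketch below (the name color_negative_red and the data are illustrative, not part of the API):
>>> import pandas as pd
>>> import numpy as np
>>> def color_negative_red(val):
...     # each call returns one CSS 'attr: value' string for one cell
...     return 'color: red' if val < 0 else 'color: black'
>>> df = pd.DataFrame(np.random.randn(5, 2), columns=['A', 'B'])
>>> df.style.applymap(color_negative_red)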
If using in the Jupyter notebook, Styler has defined a _repr_html_ to automatically render itself. Otherwise
call Styler.render to get the generated HTML.
CSS classes are attached to the generated HTML
• Index and Column names include index_name and level<k> where k is its level in a MultiIndex
• Index label cells include
– row_heading
– row<n> where n is the numeric position of the row
– level<k> where k is the level in a MultiIndex
• Column label cells include
– col_heading
– col<n> where n is the numeric position of the column
– level<k> where k is the level in a MultiIndex
• Blank cells include blank
• Data cells include data
Attributes
Methods
pandas.io.formats.style.Styler.apply
Notes
The output shape of func should match the input, i.e. if x is the input row, column, or table (depending on axis), then func(x).shape == x.shape should be true.
This is similar to DataFrame.apply, except that axis=None applies the function to the entire
DataFrame at once, rather than column-wise or row-wise.
Examples
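A minimal sketch (the helper highlight_max and the data are illustrative): a function applied column-wise or row-wise returns one CSS string per element:
>>> import pandas as pd
>>> import numpy as np
>>> def highlight_max(s):
...     # one CSS string per element of the row/column s
...     return ['background-color: yellow' if v == s.max() else ''
...             for v in s]
>>> df = pd.DataFrame(np.random.randn(5, 2))
>>> df.style.apply(highlight_max)          # column-wise (axis=0, the default)
>>> df.style.apply(highlight_max, axis=1)  # row-wise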
pandas.io.formats.style.Styler.applymap
pandas.io.formats.style.Styler.background_gradient
Notes
Tune low and high to keep the text legible by not using the entire range of the color map. These extend
the range of the data by low * (x.max() - x.min()) and high * (x.max() - x.min())
before normalizing.
pandas.io.formats.style.Styler.bar
axis: int
pandas.io.formats.style.Styler.clear
Styler.clear()
“Reset” the styler, removing any previously applied styles. Returns None.
pandas.io.formats.style.Styler.export
Styler.export()
Export the styles applied to the current Styler. Can be applied to a second Styler with Styler.use.
Returns
styles: list
See also:
Styler.use
pandas.io.formats.style.Styler.format
Styler.format(formatter, subset=None)
Format the text display value of cells.
New in version 0.18.0.
Parameters
formatter: str, callable, or dict
subset: IndexSlice
An argument to DataFrame.loc that restricts which elements formatter is
applied to.
Returns
self [Styler]
Notes
Examples
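A minimal sketch of the accepted formatter types (string, dict, or callable), using an illustrative frame:
>>> df = pd.DataFrame({'a': [0.12345, 0.6789], 'b': [1.0, 2.0]})
>>> df.style.format("{:.2%}")                 # one format string for all cells
>>> df.style.format({'a': '{:.3f}',           # per-column formatters
...                  'b': lambda x: "±{:.1f}".format(x)})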
pandas.io.formats.style.Styler.from_custom_template
pandas.io.formats.style.Styler.hide_columns
Styler.hide_columns(subset)
Hide columns from rendering.
New in version 0.23.0.
Parameters subset: IndexSlice
An argument to DataFrame.loc that identifies which columns are hidden.
Returns
self [Styler]
pandas.io.formats.style.Styler.hide_index
Styler.hide_index()
Hide any indices from rendering.
New in version 0.23.0.
Returns
self [Styler]
pandas.io.formats.style.Styler.highlight_max
pandas.io.formats.style.Styler.highlight_min
pandas.io.formats.style.Styler.highlight_null
Styler.highlight_null(null_color=’red’)
Shade the background null_color for missing values.
Parameters
null_color: str
Returns
self [Styler]
pandas.io.formats.style.Styler.render
Styler.render(**kwargs)
Render the built-up styles to HTML.
Parameters **kwargs :
Any additional keyword arguments are passed through to self.template.
render. This is useful when you need to provide additional variables for a
custom template.
New in version 0.20.
Returns rendered: str
the rendered HTML
Notes
Styler objects have defined the _repr_html_ method which automatically calls self.render()
when it’s the last item in a Notebook cell. When calling Styler.render() directly, wrap the result
in IPython.display.HTML to view the rendered HTML in the notebook.
Pandas uses the following keys in render. Arguments passed in **kwargs take precedence, so think
carefully if you want to override them:
• head
• cellstyle
• body
• uuid
• precision
• table_styles
• caption
• table_attributes
pandas.io.formats.style.Styler.set_caption
Styler.set_caption(caption)
Set the caption on a Styler
Parameters
caption: str
Returns
self [Styler]
pandas.io.formats.style.Styler.set_precision
Styler.set_precision(precision)
Set the precision used to render.
Parameters
precision: int
Returns
self [Styler]
pandas.io.formats.style.Styler.set_properties
Styler.set_properties(subset=None, **kwargs)
Convenience method for setting one or more non-data-dependent properties for each cell.
Parameters subset: IndexSlice
a valid slice for data to limit the style application to
kwargs: dict
property: value pairs to be set for each cell
Returns
self [Styler]
Examples
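A minimal sketch, assuming pandas and NumPy are imported as pd and np; every cell in the (sub)selection receives the given CSS property/value pairs:
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_properties(color="white", align="right")
>>> df.style.set_properties(**{'background-color': 'yellow'})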
pandas.io.formats.style.Styler.set_table_attributes
Styler.set_table_attributes(attributes)
Set the table attributes. These are the items that show up in the opening <table> tag in addition to the automatic (by default) id.
Parameters
attributes [string]
Returns
self [Styler]
Examples
pandas.io.formats.style.Styler.set_table_styles
Styler.set_table_styles(table_styles)
Set the table styles on a Styler. These are placed in a <style> tag before the generated HTML table.
Parameters table_styles: list
Each individual table_style should be a dictionary with selector and props
keys. selector should be a CSS selector that the style will be applied to (au-
tomatically prefixed by the table’s UUID) and props should be a list of tuples
with (attribute, value).
Returns
self [Styler]
Examples
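A minimal sketch, assuming pd and np are imported; the hover rule below is applied to the whole table via a <style> block:
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_table_styles(
...     [{'selector': 'tr:hover',
...       'props': [('background-color', 'yellow')]}]
... )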
pandas.io.formats.style.Styler.set_uuid
Styler.set_uuid(uuid)
Set the uuid for a Styler.
Parameters
uuid: str
Returns
self [Styler]
pandas.io.formats.style.Styler.to_excel
Notes
If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can
be used to save different DataFrames to one workbook:
>>> writer = pd.ExcelWriter('output.xlsx')
>>> df1.to_excel(writer,'Sheet1')
>>> df2.to_excel(writer,'Sheet2')
>>> writer.save()
For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.
pandas.io.formats.style.Styler.use
Styler.use(styles)
Set the styles on the current Styler, possibly using styles from Styler.export.
Parameters styles: list
list of style functions
Returns
self [Styler]
See also:
Styler.export
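A minimal sketch of exporting styles from one Styler and re-applying them to another (df1 and df2 are illustrative frames):
>>> df1 = pd.DataFrame({'a': [1, -2, 3]})
>>> df2 = pd.DataFrame({'a': [-1, 2, -3]})
>>> styler1 = df1.style.applymap(lambda v: 'color: red' if v < 0 else '')
>>> styles = styler1.export()        # list of style functions
>>> styler2 = df2.style.use(styles)  # re-apply them on a second Styler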
pandas.io.formats.style.Styler.where
kwargs : dict
pass along to cond
Returns
self [Styler]
See also:
Styler.applymap
Styler.env
Styler.template
Styler.loader
34.19.2.1 pandas.io.formats.style.Styler.env
34.19.2.2 pandas.io.formats.style.Styler.template
34.19.2.3 pandas.io.formats.style.Styler.loader
34.20 Plotting
andrews_curves(frame, class_column[, ax, . . . ])    Generates a matplotlib plot of Andrews curves, for visualising clusters of multivariate data.
bootstrap_plot(series[, fig, size, samples])    Bootstrap plot on mean, median and mid-range statistics.
deregister_matplotlib_converters()    Remove pandas’ formatters and converters
lag_plot(series[, lag, ax])    Lag plot for time series.
parallel_coordinates(frame, class_column[, . . . ])    Parallel coordinates plotting.
radviz(frame, class_column[, ax, color, . . . ])    Plot a multidimensional dataset in 2D.
register_matplotlib_converters([explicit])    Register Pandas Formatters and Converters with matplotlib
scatter_matrix(frame[, alpha, figsize, ax, . . . ])    Draw a matrix of scatter plots.
34.20.1 pandas.plotting.andrews_curves
34.20.2 pandas.plotting.bootstrap_plot
Examples
34.20.3 pandas.plotting.deregister_matplotlib_converters
pandas.plotting.deregister_matplotlib_converters()
Remove pandas’ formatters and converters
Removes the custom converters added by register(). This attempts to set the state of the registry back to
the state before pandas registered its own units. Converters for pandas’ own types like Timestamp and Period
are removed completely. Converters for types pandas overwrites, like datetime.datetime, are restored to
their original value.
See also:
register_matplotlib_converters
34.20.4 pandas.plotting.lag_plot
34.20.5 pandas.plotting.parallel_coordinates
Examples
34.20.6 pandas.plotting.radviz
RadViz allows projecting an N-dimensional data set into a 2D space, where the influence of each dimension can be interpreted as a balance between the influence of all dimensions.
More info available at the original article describing RadViz.
Parameters frame : DataFrame
Pandas object holding the data.
class_column : str
Column name containing the name of the data point category.
ax : matplotlib.axes.Axes, optional
A plot instance to which to add the information.
color : list[str] or tuple[str], optional
Assign a color to each category. Example: [‘blue’, ‘green’].
colormap : str or matplotlib.colors.Colormap, default None
Colormap to select colors from. If string, load colormap with that name from mat-
plotlib.
kwds : optional
Options to pass to matplotlib scatter plotting method.
Returns
axes [matplotlib.axes.Axes]
See also:
Examples
>>> df = pd.DataFrame({
... 'SepalLength': [6.5, 7.7, 5.1, 5.8, 7.6, 5.0, 5.4, 4.6,
... 6.7, 4.6],
... 'SepalWidth': [3.0, 3.8, 3.8, 2.7, 3.0, 2.3, 3.0, 3.2,
... 3.3, 3.6],
... 'PetalLength': [5.5, 6.7, 1.9, 5.1, 6.6, 3.3, 4.5, 1.4,
... 5.7, 1.0],
... 'PetalWidth': [1.8, 2.2, 0.4, 1.9, 2.1, 1.0, 1.5, 0.2,
... 2.1, 0.2],
... 'Category': ['virginica', 'virginica', 'setosa',
... 'virginica', 'virginica', 'versicolor',
... 'versicolor', 'setosa', 'virginica',
... 'setosa']
... })
>>> rad_viz = pd.plotting.radviz(df, 'Category')
34.20.7 pandas.plotting.register_matplotlib_converters
pandas.plotting.register_matplotlib_converters(explicit=True)
Register Pandas Formatters and Converters with matplotlib
This function modifies the global matplotlib.units.registry dictionary. Pandas adds custom converters for the following types (a usage sketch follows the list below):
• pd.Timestamp
• pd.Period
• np.datetime64
• datetime.datetime
• datetime.date
• datetime.time
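A minimal usage sketch, assuming matplotlib is installed; after registering, datetime-indexed data can be passed to matplotlib without manual conversion:
>>> import matplotlib.pyplot as plt
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
>>> s = pd.Series(range(10), index=pd.date_range('2018-01-01', periods=10))
>>> plt.plot(s.index, s.values)  # Timestamp values handled by the registered converters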
See also:
deregister_matplotlib_converters
34.20.8 pandas.plotting.scatter_matrix
Examples
describe_option(pat[, _print_desc]) Prints the description for one or more registered options.
reset_option(pat) Reset one or more options to their default value.
get_option(pat) Retrieves the value of the specified option.
set_option(pat, value) Sets the value of the specified option.
option_context(*args)    Context manager to temporarily set options in the with statement context.
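A minimal sketch of the typical pattern (the option name is just an example; current values depend on your configuration):
>>> import pandas as pd
>>> pd.set_option('display.max_rows', 100)   # change an option
>>> pd.get_option('display.max_rows')
100
>>> pd.reset_option('display.max_rows')      # restore the default
>>> pd.describe_option('display.max_rows')   # print its documentation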
34.21.1.1 pandas.describe_option
• io.hdf.[default_format, dropna_table]
• io.parquet.[engine]
• mode.[chained_assignment, sim_interactive, use_inf_as_na, use_inf_as_null]
• plotting.matplotlib.[register_converters]
Notes
display.html.table_schema [boolean] Whether to publish a Table Schema representation for frontends that
support it. (default: False) [default: False] [currently: False]
display.html.use_mathjax [boolean] When True, Jupyter notebook will process table contents using Math-
Jax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True]
[currently: True]
display.large_repr [‘truncate’/’info’] For DataFrames exceeding max_rows/max_cols, the repr (and HTML
repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour
in earlier versions of pandas). [default: truncate] [currently: truncate]
display.latex.escape [bool] This specifies if the to_latex method of a DataFrame escapes special characters. Valid values: False,True [default: True] [currently: True]
display.latex.longtable [bool] This specifies if the to_latex method of a DataFrame uses the longtable format. Valid values: False,True [default: False] [currently: False]
display.latex.multicolumn [bool] This specifies if the to_latex method of a Dataframe uses multicolumns to
pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]
display.latex.multicolumn_format [bool] This specifies if the to_latex method of a Dataframe uses multi-
columns to pretty-print MultiIndex columns. Valid values: False,True [default: l] [currently: l]
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 0] [currently: 0]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a “. . . ” placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “. . . ” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True]
display.multi_sparse [boolean] “sparsify” MultiIndex display (don’t display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or ‘truncate’] Whether to print out dimensions at the end of DataFrame
repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1] (Deprecated, use display.html.border instead.)
io.excel.xls.writer [string] The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt. [de-
fault: auto] [currently: auto]
io.excel.xlsm.writer [string] The default Excel writer engine for ‘xlsm’ files. Available options: auto, open-
pyxl. [default: auto] [currently: auto]
io.excel.xlsx.writer [string] The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl,
xlsxwriter. [default: auto] [currently: auto]
io.hdf.default_format [format] Default writing format; if None, then put will default to ‘fixed’ and append will default to ‘table’ [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
io.parquet.engine [string] The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fast-
parquet’, the default is ‘auto’ [default: auto] [currently: auto]
mode.chained_assignment [string] Raise an exception, warn, or no action if trying to use chained assignment,
The default is warn [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_na [boolean] True means treat None, NaN, INF, -INF as NA (old way), False means None
and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False]
mode.use_inf_as_null [boolean] use_inf_as_null had been deprecated and will be removed in a future ver-
sion. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na
instead.)
plotting.matplotlib.register_converters [bool] Whether to register converters with matplotlib’s units registry
for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any
converters that pandas overwrote. [default: True] [currently: True]
34.21.1.2 pandas.reset_option
Notes
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 0] [currently: 0]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a “. . . ” placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “. . . ” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True]
display.multi_sparse [boolean] “sparsify” MultiIndex display (don’t display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or ‘truncate’] Whether to print out dimensions at the end of DataFrame
repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1] (Deprecated, use display.html.border instead.)
io.excel.xls.writer [string] The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt. [de-
fault: auto] [currently: auto]
io.excel.xlsm.writer [string] The default Excel writer engine for ‘xlsm’ files. Available options: auto, open-
pyxl. [default: auto] [currently: auto]
io.excel.xlsx.writer [string] The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl,
xlsxwriter. [default: auto] [currently: auto]
io.hdf.default_format [format] Default writing format; if None, then put will default to ‘fixed’ and append will default to ‘table’ [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
io.parquet.engine [string] The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fast-
parquet’, the default is ‘auto’ [default: auto] [currently: auto]
mode.chained_assignment [string] Raise an exception, warn, or no action if trying to use chained assignment,
The default is warn [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_na [boolean] True means treat None, NaN, INF, -INF as NA (old way), False means None
and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False]
mode.use_inf_as_null [boolean] use_inf_as_null had been deprecated and will be removed in a future ver-
sion. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na
instead.)
plotting.matplotlib.register_converters [bool] Whether to register converters with matplotlib’s units registry
for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any
converters that pandas overwrote. [default: True] [currently: True]
34.21.1.3 pandas.get_option
Notes
display.encoding [str/unicode] Defaults to the detected encoding of the console. Specifies the encoding to be
used for strings returned by to_string, these are generally strings meant to be displayed on the console.
[default: UTF-8] [currently: UTF-8]
display.expand_frame_repr [boolean] Whether to print out the full DataFrame repr for wide DataFrames
across multiple lines, max_columns is still respected, but the output will wrap-around across multiple
“pages” if its width exceeds display.width. [default: True] [currently: True]
display.float_format [callable] The callable should accept a floating point number and return a string with
the desired format of the number. This is used in some places like SeriesFormatter. See for-
mats.format.EngFormatter for an example. [default: None] [currently: None]
display.html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame
HTML repr. [default: 1] [currently: 1]
display.html.table_schema [boolean] Whether to publish a Table Schema representation for frontends that
support it. (default: False) [default: False] [currently: False]
display.html.use_mathjax [boolean] When True, Jupyter notebook will process table contents using Math-
Jax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True]
[currently: True]
display.large_repr [‘truncate’/’info’] For DataFrames exceeding max_rows/max_cols, the repr (and HTML
repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour
in earlier versions of pandas). [default: truncate] [currently: truncate]
display.latex.escape [bool] This specifies if the to_latex method of a DataFrame escapes special characters. Valid values: False,True [default: True] [currently: True]
display.latex.longtable [bool] This specifies if the to_latex method of a DataFrame uses the longtable format. Valid values: False,True [default: False] [currently: False]
display.latex.multicolumn [bool] This specifies if the to_latex method of a Dataframe uses multicolumns to
pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]
display.latex.multicolumn_format [bool] This specifies if the to_latex method of a Dataframe uses multi-
columns to pretty-print MultiIndex columns. Valid values: False,True [default: l] [currently: l]
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 0] [currently: 0]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a “. . . ” placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “. . . ” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True]
display.multi_sparse [boolean] “sparsify” MultiIndex display (don’t display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or ‘truncate’] Whether to print out dimensions at the end of DataFrame
repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1] (Deprecated, use display.html.border instead.)
io.excel.xls.writer [string] The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt. [de-
fault: auto] [currently: auto]
io.excel.xlsm.writer [string] The default Excel writer engine for ‘xlsm’ files. Available options: auto, open-
pyxl. [default: auto] [currently: auto]
io.excel.xlsx.writer [string] The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl,
xlsxwriter. [default: auto] [currently: auto]
io.hdf.default_format [format] Default writing format; if None, then put will default to ‘fixed’ and append will default to ‘table’ [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
io.parquet.engine [string] The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fast-
parquet’, the default is ‘auto’ [default: auto] [currently: auto]
mode.chained_assignment [string] Raise an exception, warn, or no action if trying to use chained assignment,
The default is warn [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_na [boolean] True means treat None, NaN, INF, -INF as NA (old way), False means None
and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False]
mode.use_inf_as_null [boolean] use_inf_as_null had been deprecated and will be removed in a future ver-
sion. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na
instead.)
plotting.matplotlib.register_converters [bool] Whether to register converters with matplotlib’s units registry
for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any
converters that pandas overwrote. [default: True] [currently: True]
34.21.1.4 pandas.set_option
Notes
display.large_repr [‘truncate’/’info’] For DataFrames exceeding max_rows/max_cols, the repr (and HTML
repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour
in earlier versions of pandas). [default: truncate] [currently: truncate]
display.latex.escape [bool] This specifies if the to_latex method of a DataFrame escapes special characters. Valid values: False,True [default: True] [currently: True]
display.latex.longtable [bool] This specifies if the to_latex method of a DataFrame uses the longtable format. Valid values: False,True [default: False] [currently: False]
display.latex.multicolumn [bool] This specifies if the to_latex method of a Dataframe uses multicolumns to
pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]
display.latex.multicolumn_format [bool] This specifies if the to_latex method of a Dataframe uses multi-
columns to pretty-print MultiIndex columns. Valid values: False,True [default: l] [currently: l]
display.latex.multirow [bool] This specifies if the to_latex method of a Dataframe uses multirows to pretty-
print MultiIndex rows. Valid values: False,True [default: False] [currently: False]
display.latex.repr [boolean] Whether to produce a latex DataFrame representation for jupyter environments
that support it. (default: False) [default: False] [currently: False]
display.max_categories [int] This sets the maximum number of categories pandas should output when printing
out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8]
display.max_columns [int] If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 0] [currently: 0]
display.max_colwidth [int] The maximum width in characters of a column in the repr of a pandas data struc-
ture. When the column overflows, a “. . . ” placeholder is embedded in the output. [default: 50] [currently:
50]
display.max_info_columns [int] max_info_columns is used in DataFrame.info method to decide if per column
information will be printed. [default: 100] [currently: 100]
display.max_info_rows [int or None] df.info() will usually show null-counts for each column. For large frames
this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller
dimensions than specified. [default: 1690785] [currently: 1690785]
display.max_rows [int] If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects
are either centrally truncated or printed as a summary view. ‘None’ value means unlimited.
In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and
pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height.
The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible
to do correct auto-detection. [default: 60] [currently: 15]
display.max_seq_items [int or None] When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “. . . ” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]
display.memory_usage [bool, string or None] This specifies if the memory usage of a DataFrame should be
displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True]
display.multi_sparse [boolean] “sparsify” MultiIndex display (don’t display repeated elements in outer levels
within groups) [default: True] [currently: True]
display.notebook_repr_html [boolean] When True, IPython notebook will use html representation for pandas
objects (if it is available). [default: True] [currently: True]
display.pprint_nest_depth [int] Controls the number of nested levels to process when pretty-printing [default:
3] [currently: 3]
display.precision [int] Floating point output precision (number of significant digits). This is only a suggestion
[default: 6] [currently: 6]
display.show_dimensions [boolean or ‘truncate’] Whether to print out dimensions at the end of DataFrame
repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all
rows and/or columns) [default: truncate] [currently: truncate]
display.unicode.ambiguous_as_wide [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.unicode.east_asian_width [boolean] Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]
display.width [int] Width of the display in characters. In case python/IPython is running in a terminal this can
be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython
qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
[default: 80] [currently: 80]
html.border [int] A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr.
[default: 1] [currently: 1] (Deprecated, use display.html.border instead.)
io.excel.xls.writer [string] The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt. [de-
fault: auto] [currently: auto]
io.excel.xlsm.writer [string] The default Excel writer engine for ‘xlsm’ files. Available options: auto, open-
pyxl. [default: auto] [currently: auto]
io.excel.xlsx.writer [string] The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl,
xlsxwriter. [default: auto] [currently: auto]
io.hdf.default_format [format] Default writing format; if None, then put will default to ‘fixed’ and append will default to ‘table’ [default: None] [currently: None]
io.hdf.dropna_table [boolean] drop ALL nan rows when appending to a table [default: False] [currently:
False]
io.parquet.engine [string] The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fast-
parquet’, the default is ‘auto’ [default: auto] [currently: auto]
mode.chained_assignment [string] Raise an exception, warn, or no action if trying to use chained assignment,
The default is warn [default: warn] [currently: warn]
mode.sim_interactive [boolean] Whether to simulate interactive mode for purposes of testing [default: False]
[currently: False]
mode.use_inf_as_na [boolean] True means treat None, NaN, INF, -INF as NA (old way), False means None
and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False]
mode.use_inf_as_null [boolean] use_inf_as_null had been deprecated and will be removed in a future ver-
sion. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na
instead.)
plotting.matplotlib.register_converters [bool] Whether to register converters with matplotlib’s units registry
for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any
converters that pandas overwrote. [default: True] [currently: True]
34.21.1.5 pandas.option_context
class pandas.option_context(*args)
Context manager to temporarily set options in the with statement context.
You need to invoke as option_context(pat, val, [(pat, val), ...]).
Examples
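A minimal sketch; the options are restored to their previous values when the with block exits:
>>> import pandas as pd
>>> with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
...     print(pd.get_option('display.max_rows'))
10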
testing.assert_frame_equal(left, right[, . . . ])    Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, . . . ])    Check that left and right Series are equal.
testing.assert_index_equal(left, right[, . . . ])    Check that left and right Index are equal.
34.21.2.1 pandas.testing.assert_frame_equal
Specify comparison precision. Only used when check_exact is False. 5 digits (False)
or 3 digits (True) after decimal points are compared. If int, then specify the digits to
compare
check_names : bool, default True
Whether to check the Index names attribute.
by_blocks : bool, default False
Specify how to compare internal data. If False, compare by columns. If True, compare by blocks.
check_exact : bool, default False
Whether to compare numbers exactly.
check_datetimelike_compat : bool, default False
Compare datetime-like values that are comparable while ignoring dtype.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
check_like : bool, default False
If True, ignore the order of rows & columns.
obj : str, default ‘DataFrame’
Specify object name being compared, internally used to show appropriate assertion
message
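A minimal usage sketch (df1 and df2 are illustrative); the function returns None when the frames match and raises an AssertionError otherwise:
>>> from pandas.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
>>> assert_frame_equal(df1, df2)                               # passes silently
>>> assert_frame_equal(df1, df2[['b', 'a']], check_like=True)  # ignore column order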
34.21.2.2 pandas.testing.assert_series_equal
34.21.2.3 pandas.testing.assert_index_equal
34.21.3.1 pandas.errors.DtypeWarning
exception pandas.errors.DtypeWarning
Warning raised when reading different dtypes in a column from a file.
Raised for a dtype incompatibility. This can happen whenever read_csv or read_table encounter non-uniform
dtypes in a column(s) of a given CSV file.
See also:
Notes
This warning is issued when dealing with larger files because the dtype checking happens per chunk read.
Despite the warning, the CSV file is read with mixed types in a single column which will be an object type. See
the examples below to better understand this issue.
Examples
This example creates and reads a large CSV file with a column that contains int and str.
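A minimal sketch of a setup that can trigger the warning (the exact frame is illustrative): the column ‘a’ mixes string and integer-like values across parser chunks.
>>> df = pd.DataFrame({'a': (['1'] * 100000 + ['X'] * 100000 +
...                          ['1'] * 100000),
...                    'b': ['b'] * 300000})
>>> df.to_csv('test.csv', index=False)
>>> df2 = pd.read_csv('test.csv')   # may emit DtypeWarning for column 0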
Important to notice that df2 will contain both str and int for the same input, ‘1’.
>>> df2.iloc[262140, 0]
'1'
>>> type(df2.iloc[262140, 0])
<class 'str'>
>>> df2.iloc[262150, 0]
1
>>> type(df2.iloc[262150, 0])
<class 'int'>
One way to solve this issue is using the dtype parameter in the read_csv and read_table functions to make the conversion explicit.
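For instance, continuing the illustrative setup above, passing dtype keeps the column as strings throughout:
>>> df2 = pd.read_csv('test.csv', dtype={'a': str})
>>> type(df2.iloc[262150, 0])
<class 'str'>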
>>> import os
>>> os.remove('test.csv')
34.21.3.2 pandas.errors.EmptyDataError
exception pandas.errors.EmptyDataError
Exception that is thrown in pd.read_csv (by both the C and Python engines) when empty data or header is
encountered.
34.21.3.3 pandas.errors.OutOfBoundsDatetime
exception pandas.errors.OutOfBoundsDatetime
34.21.3.4 pandas.errors.ParserError
exception pandas.errors.ParserError
Exception that is raised by an error encountered in pd.read_csv.
34.21.3.5 pandas.errors.ParserWarning
exception pandas.errors.ParserWarning
Warning raised when reading a file that doesn’t use the default ‘c’ parser.
Raised by pd.read_csv and pd.read_table when it is necessary to change parsers, generally from the default ‘c’
parser to ‘python’.
It happens due to a lack of support or functionality for parsing a particular attribute of a CSV file with the
requested engine.
Currently, ‘c’ unsupported options include the following parameters:
1. sep other than a single character (e.g. regex separators)
2. skipfooter higher than 0
3. sep=None with delim_whitespace=False
The warning can be avoided by adding engine=’python’ as a parameter in the pd.read_csv and pd.read_table methods.
See also:
Examples
>>> import io
>>> csv = u'''a;b;c
... 1;1,8
... 1;2,1'''
>>> df = pd.read_csv(io.StringIO(csv), sep='[;,]')
... # ParserWarning: Falling back to the 'python' engine...
34.21.3.6 pandas.errors.PerformanceWarning
exception pandas.errors.PerformanceWarning
Warning raised when there is a possible performance impact.
34.21.3.7 pandas.errors.UnsortedIndexError
exception pandas.errors.UnsortedIndexError
Error raised when attempting to get a slice of a MultiIndex, and the index has not been lexsorted. Subclass of
KeyError.
New in version 0.20.0.
34.21.3.8 pandas.errors.UnsupportedFunctionCall
exception pandas.errors.UnsupportedFunctionCall
Exception raised when attempting to call a numpy function on a pandas object, but that function is not supported
by the object e.g. np.cumsum(groupby_object).
34.21.4.1 pandas.api.types.union_categoricals
Notes
Examples
If you want to combine categoricals that do not necessarily have the same categories, union_categoricals will
combine a list-like of categoricals. The new categories will be the union of the categories being combined.
By default, the resulting categories will be ordered as they appear in the categories of the data. If you want the categories to be lexsorted, use the sort_categories=True argument.
union_categoricals also works with the case of combining two categoricals of the same categories and order
information (e.g. what you could also append for).
Combining ordered categoricals whose categories are not identical raises a TypeError.
union_categoricals also works with a CategoricalIndex, or Series containing categorical data, but note that the
resulting array will always be a plain Categorical
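A minimal sketch of the behaviour described above (the categoricals are illustrative):
>>> from pandas.api.types import union_categoricals
>>> a = pd.Categorical(["b", "c"])
>>> b = pd.Categorical(["a", "b"])
>>> union_categoricals([a, b])
[b, c, a, b]
Categories (3, object): [b, c, a]
>>> union_categoricals([a, b], sort_categories=True)
[b, c, a, b]
Categories (3, object): [a, b, c]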
34.21.4.2 pandas.api.types.infer_dtype
pandas.api.types.infer_dtype()
Efficiently infer the type of a passed val, or list-like array of values. Return a string describing the type.
Parameters
value [scalar, list, ndarray, or pandas type]
skipna : bool, default False
Ignore NaN values when inferring the type. The default of False will be deprecated
in a later version of pandas.
New in version 0.21.0.
Returns
string describing the common type of the input data.
Results can include:
- string
- unicode
- bytes
- floating
- integer
- mixed-integer
- mixed-integer-float
- decimal
- complex
- categorical
- boolean
- datetime64
- datetime
- date
- timedelta64
- timedelta
- time
- period
- mixed
Raises
TypeError if ndarray-like but cannot infer the dtype
Notes
Examples
>>> infer_dtype([pd.Timestamp('20130101')])
'datetime'
>>> infer_dtype([np.datetime64('2013-01-01')])
'datetime64'
>>> infer_dtype(pd.Series(list('aabc')).astype('category'))
'categorical'
34.21.4.3 pandas.api.types.pandas_dtype
pandas.api.types.pandas_dtype(dtype)
Convert input into a pandas-only dtype object or a numpy dtype object.
Parameters
dtype [object to be converted]
Returns
np.dtype or a pandas dtype
Dtype introspection
34.21.4.4 pandas.api.types.is_bool_dtype
pandas.api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns
boolean [Whether or not the array or dtype is of a boolean dtype.]
Examples
>>> is_bool_dtype(str)
False
>>> is_bool_dtype(int)
False
>>> is_bool_dtype(bool)
True
>>> is_bool_dtype(np.bool)
True
>>> is_bool_dtype(np.array(['a', 'b']))
False
>>> is_bool_dtype(pd.Series([1, 2]))
False
>>> is_bool_dtype(np.array([True, False]))
True
34.21.4.5 pandas.api.types.is_categorical_dtype
pandas.api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is
of the Categorical dtype.
Examples
>>> is_categorical_dtype(object)
False
>>> is_categorical_dtype(CategoricalDtype())
True
34.21.4.6 pandas.api.types.is_complex_dtype
pandas.api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns
boolean [Whether or not the array or dtype is of a complex dtype.]
Examples
>>> is_complex_dtype(str)
False
>>> is_complex_dtype(int)
False
>>> is_complex_dtype(np.complex)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
>>> is_complex_dtype(pd.Series([1, 2]))
False
>>> is_complex_dtype(np.array([1 + 1j, 5]))
True
34.21.4.7 pandas.api.types.is_datetime64_any_dtype
pandas.api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns
boolean [Whether or not the array or dtype is of the datetime64 dtype.]
Examples
>>> is_datetime64_any_dtype(str)
False
>>> is_datetime64_any_dtype(int)
False
>>> is_datetime64_any_dtype(np.datetime64) # can be tz-naive
True
>>> is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_any_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_any_dtype(np.array([1, 2]))
False
>>> is_datetime64_any_dtype(np.array([], dtype=np.datetime64))
True
>>> is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3],
...                                           dtype=np.datetime64))
True
34.21.4.8 pandas.api.types.is_datetime64_dtype
pandas.api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of
the datetime64 dtype.
Examples
>>> is_datetime64_dtype(object)
False
>>> is_datetime64_dtype(np.datetime64)
True
>>> is_datetime64_dtype(np.array([], dtype=int))
False
>>> is_datetime64_dtype(np.array([], dtype=np.datetime64))
True
>>> is_datetime64_dtype([1, 2, 3])
False
34.21.4.9 pandas.api.types.is_datetime64_ns_dtype
pandas.api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns
boolean [Whether or not the array or dtype is of the datetime64[ns] dtype.]
Examples
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype=np.datetime64)) # no unit
False
>>> is_datetime64_ns_dtype(np.array([],
...                                  dtype="datetime64[ps]"))  # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3],
...                                          dtype=np.datetime64))  # has 'ns' unit
True
34.21.4.10 pandas.api.types.is_datetime64tz_dtype
pandas.api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of
a DatetimeTZDtype dtype.
Examples
>>> is_datetime64tz_dtype(object)
False
>>> is_datetime64tz_dtype([1, 2, 3])
False
>>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3])) # tz-naive
False
>>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
34.21.4.11 pandas.api.types.is_extension_type
pandas.api.types.is_extension_type(arr)
Check whether an array-like is of a pandas extension class instance.
Extension classes include categoricals, pandas sparse objects (i.e. classes represented within the pandas library
and not ones external to it like scipy sparse matrices), and datetime-like arrays.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is of a pandas
extension class instance.
Examples
34.21.4.12 pandas.api.types.is_float_dtype
pandas.api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns
Examples
>>> is_float_dtype(str)
False
>>> is_float_dtype(int)
False
>>> is_float_dtype(float)
True
>>> is_float_dtype(np.array(['a', 'b']))
False
>>> is_float_dtype(pd.Series([1, 2]))
False
>>> is_float_dtype(pd.Index([1, 2.]))
True
34.21.4.13 pandas.api.types.is_int64_dtype
pandas.api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the int64 dtype.
Notes
Depending on system architecture, the return value of is_int64_dtype(int) will be True if the OS uses 64-bit
integers and False if the OS uses 32-bit integers.
Examples
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
34.21.4.14 pandas.api.types.is_integer_dtype
pandas.api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
Unlike is_any_int_dtype, timedelta64 instances will return False.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of an integer dtype
and not an instance of timedelta64.
Examples
>>> is_integer_dtype(str)
False
>>> is_integer_dtype(int)
True
>>> is_integer_dtype(float)
False
>>> is_integer_dtype(np.uint64)
True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
False
>>> is_integer_dtype(np.array(['a', 'b']))
False
>>> is_integer_dtype(pd.Series([1, 2]))
True
>>> is_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_integer_dtype(pd.Index([1, 2.])) # float
False
34.21.4.15 pandas.api.types.is_interval_dtype
pandas.api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is
of the Interval dtype.
Examples
>>> is_interval_dtype(object)
False
>>> is_interval_dtype(IntervalDtype())
True
>>> is_interval_dtype([1, 2, 3])
False
>>>
>>> interval = pd.Interval(1, 2, closed="right")
>>> is_interval_dtype(interval)
False
>>> is_interval_dtype(pd.IntervalIndex([interval]))
True
34.21.4.16 pandas.api.types.is_numeric_dtype
pandas.api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a numeric dtype.
Examples
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
34.21.4.17 pandas.api.types.is_object_dtype
pandas.api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of the object dtype.
Examples
>>> is_object_dtype(object)
True
>>> is_object_dtype(int)
False
>>> is_object_dtype(np.array([], dtype=object))
True
>>> is_object_dtype(np.array([], dtype=int))
False
>>> is_object_dtype([1, 2, 3])
False
34.21.4.18 pandas.api.types.is_period_dtype
pandas.api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is of the Period dtype.
Examples
>>> is_period_dtype(object)
False
>>> is_period_dtype(PeriodDtype(freq="D"))
True
>>> is_period_dtype([1, 2, 3])
False
>>> is_period_dtype(pd.Period("2017-01-01"))
False
>>> is_period_dtype(pd.PeriodIndex([], freq="A"))
True
34.21.4.19 pandas.api.types.is_signed_integer_dtype
pandas.api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
Unlike is_any_int_dtype, timedelta64 instances will return False.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of a signed integer dtype
and not an instance of timedelta64.
Examples
>>> is_signed_integer_dtype(str)
False
>>> is_signed_integer_dtype(int)
True
>>> is_signed_integer_dtype(float)
False
>>> is_signed_integer_dtype(np.uint64) # unsigned
False
>>> is_signed_integer_dtype(np.datetime64)
False
>>> is_signed_integer_dtype(np.timedelta64)
False
>>> is_signed_integer_dtype(np.array(['a', 'b']))
False
>>> is_signed_integer_dtype(pd.Series([1, 2]))
True
>>> is_signed_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_signed_integer_dtype(pd.Index([1, 2.])) # float
False
>>> is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
34.21.4.20 pandas.api.types.is_string_dtype
pandas.api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the string dtype.
Examples
>>> is_string_dtype(str)
True
>>> is_string_dtype(object)
True
>>> is_string_dtype(int)
False
>>>
>>> is_string_dtype(np.array(['a', 'b']))
True
34.21.4.21 pandas.api.types.is_timedelta64_dtype
pandas.api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
Parameters arr_or_dtype : array-like
The array-like or dtype to check.
Returns boolean : Whether or not the array-like or dtype is
of the timedelta64 dtype.
Examples
>>> is_timedelta64_dtype(object)
False
>>> is_timedelta64_dtype(np.timedelta64)
True
>>> is_timedelta64_dtype([1, 2, 3])
False
>>> is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
True
>>> is_timedelta64_dtype('0 days')
False
34.21.4.22 pandas.api.types.is_timedelta64_ns_dtype
pandas.api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
This is a very specific dtype, so generic ones like np.timedelta64 will return False if passed into this function.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of the
timedelta64[ns] dtype.
Examples
>>> is_timedelta64_ns_dtype(np.dtype('m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.dtype('m8[ps]')) # Wrong frequency
False
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype='m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype=np.timedelta64))
False
34.21.4.23 pandas.api.types.is_unsigned_integer_dtype
pandas.api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
Parameters arr_or_dtype : array-like
The array or dtype to check.
Returns boolean : Whether or not the array or dtype is of an
unsigned integer dtype.
Examples
>>> is_unsigned_integer_dtype(str)
False
>>> is_unsigned_integer_dtype(int) # signed
False
>>> is_unsigned_integer_dtype(float)
False
>>> is_unsigned_integer_dtype(np.uint64)
True
>>> is_unsigned_integer_dtype(np.array(['a', 'b']))
False
>>> is_unsigned_integer_dtype(pd.Series([1, 2])) # signed
False
>>> is_unsigned_integer_dtype(pd.Index([1, 2.])) # float
False
>>> is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
True
34.21.4.24 pandas.api.types.is_sparse
pandas.api.types.is_sparse(arr)
Check whether an array-like is a pandas sparse array.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a pandas sparse array.
Examples
This function checks only for pandas sparse array instances, so sparse arrays from other libraries will return
False.
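For example (the last check assumes scipy is installed):
>>> is_sparse(pd.SparseArray([0, 0, 1, 0]))
True
>>> is_sparse(pd.SparseSeries([0, 0, 1, 0]))
True
>>> is_sparse(np.array([0, 0, 1, 0]))
False
>>> from scipy.sparse import bsr_matrix
>>> is_sparse(bsr_matrix([0, 1, 0, 0]))
False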
Iterable introspection
34.21.4.25 pandas.api.types.is_dict_like
pandas.api.types.is_dict_like(obj)
Check if the object is dict-like.
Parameters obj : The object to check.
Returns is_dict_like : bool
Whether obj has dict-like properties.
Examples
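For example, a dict returns True while a list does not:
>>> is_dict_like({1: 2})
True
>>> is_dict_like([1, 2, 3])
False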
34.21.4.26 pandas.api.types.is_file_like
pandas.api.types.is_file_like(obj)
Check if the object is a file-like object.
For objects to be considered file-like, they must be an iterator AND have either a read and/or write method as
an attribute.
Note: file-like objects must be iterable, but iterable objects need not be file-like.
New in version 0.20.0.
Parameters obj : The object to check.
Returns is_file_like : bool
Whether obj has file-like properties.
Examples
>>> from io import StringIO
>>> buffer = StringIO("data")
>>> is_file_like(buffer)
True
>>> is_file_like([1, 2, 3])
False
34.21.4.27 pandas.api.types.is_list_like
pandas.api.types.is_list_like(obj)
Check if the object is list-like.
Objects that are considered list-like are for example Python lists, tuples, sets, NumPy arrays, and Pandas Series.
Strings and datetime objects, however, are not considered list-like.
Parameters obj : The object to check.
Returns is_list_like : bool
Whether obj has list-like properties.
Examples
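For example, lists and sets return True, while strings and scalars return False:
>>> is_list_like([1, 2, 3])
True
>>> is_list_like({1, 2, 3})
True
>>> is_list_like("foo")
False
>>> is_list_like(1)
False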
34.21.4.28 pandas.api.types.is_named_tuple
pandas.api.types.is_named_tuple(obj)
Check if the object is a named tuple.
Parameters obj : The object to check.
Returns is_named_tuple : bool
Whether obj is a named tuple.
Examples
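For example, a namedtuple instance qualifies while a plain tuple does not:
>>> from collections import namedtuple
>>> Point = namedtuple("Point", ["x", "y"])
>>> is_named_tuple(Point(1, 2))
True
>>> is_named_tuple((1, 2))
False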
34.21.4.29 pandas.api.types.is_iterator
pandas.api.types.is_iterator(obj)
Check if the object is an iterator.
For example, lists are considered iterators but not strings or datetime objects.
Parameters obj : The object to check.
Returns is_iter : bool
Whether obj is an iterator.
Examples
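For example, an iterator object returns True while a string does not:
>>> is_iterator(iter([1, 2, 3]))
True
>>> is_iterator("foo")
False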
Scalar introspection
api.types.is_bool
api.types.is_categorical(arr) Check whether an array-like is a Categorical instance.
api.types.is_complex
api.types.is_datetimetz(arr) Check whether an array-like is a datetime array-like with a timezone component in its dtype.
api.types.is_float
api.types.is_hashable(obj) Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
api.types.is_interval
api.types.is_number(obj) Check if the object is a number.
api.types.is_period(arr) Check whether an array-like is a periodical index.
api.types.is_re(obj) Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj) Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar Return True if given value is scalar.
34.21.4.30 pandas.api.types.is_bool
pandas.api.types.is_bool()
34.21.4.31 pandas.api.types.is_categorical
pandas.api.types.is_categorical(arr)
Check whether an array-like is a Categorical instance.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a Categorical instance.
Examples
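For example, a Categorical, a Series of categorical dtype, or a CategoricalIndex all qualify:
>>> is_categorical([1, 2, 3])
False
>>> cat = pd.Categorical([1, 2, 3])
>>> is_categorical(cat)
True
>>> is_categorical(pd.Series(cat))
True
>>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
True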
34.21.4.32 pandas.api.types.is_complex
pandas.api.types.is_complex()
34.21.4.33 pandas.api.types.is_datetimetz
pandas.api.types.is_datetimetz(arr)
Check whether an array-like is a datetime array-like with a timezone component in its dtype.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a datetime array-like with
a timezone component in its dtype.
Examples
Although the following examples are both DatetimeIndex objects, the first one returns False because it has no
timezone component unlike the second one, which returns True.
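>>> is_datetimetz(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_datetimetz(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True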
The object need not be a DatetimeIndex object. It just needs to have a dtype which has a timezone component.
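>>> dtype = DatetimeTZDtype("ns", "US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_datetimetz(s)
True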
34.21.4.34 pandas.api.types.is_float
pandas.api.types.is_float()
34.21.4.35 pandas.api.types.is_hashable
pandas.api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
Some types will pass a test against collections.Hashable but fail when they are actually hashed with hash().
Distinguish between these and other types by trying the call to hash() and seeing if they raise TypeError.
Examples
>>> a = ([],)
>>> isinstance(a, collections.Hashable)
True
>>> is_hashable(a)
False
34.21.4.36 pandas.api.types.is_integer
pandas.api.types.is_integer()
34.21.4.37 pandas.api.types.is_interval
pandas.api.types.is_interval()
34.21.4.38 pandas.api.types.is_number
pandas.api.types.is_number(obj)
Check if the object is a number.
Returns True when the object is a number, and False if it is not.
Parameters obj : any type
Examples
>>> pd.api.types.is_number(1)
True
>>> pd.api.types.is_number(7.15)
True
>>> pd.api.types.is_number(False)
True
>>> pd.api.types.is_number("foo")
False
>>> pd.api.types.is_number("5")
False
34.21.4.39 pandas.api.types.is_period
pandas.api.types.is_period(arr)
Check whether an array-like is a periodical index.
Parameters arr : array-like
The array-like to check.
Returns boolean : Whether or not the array-like is a periodical index.
Examples
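For example, a PeriodIndex qualifies, while a plain list does not:
>>> is_period([1, 2, 3])
False
>>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D"))
True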
34.21.4.40 pandas.api.types.is_re
pandas.api.types.is_re(obj)
Check if the object is a regex pattern instance.
Parameters obj : The object to check.
Returns is_regex : bool
Whether obj is a regex pattern.
Examples
>>> is_re(re.compile(".*"))
True
>>> is_re("foo")
False
34.21.4.41 pandas.api.types.is_re_compilable
pandas.api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
Parameters obj : The object to check.
Returns is_regex_compilable : bool
Whether obj can be compiled as a regex pattern.
Examples
>>> is_re_compilable(".*")
True
>>> is_re_compilable(1)
False
34.21.4.42 pandas.api.types.is_scalar
pandas.api.types.is_scalar()
Return True if given value is scalar.
This includes:
• numpy array scalars (e.g. np.int64)
• Python builtin numerics
• Python builtin byte arrays and strings
• None
• instances of datetime.datetime
• instances of datetime.timedelta
• Period
• instances of decimal.Decimal
• Interval
• DateOffset
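For instance, numbers, strings, and None count as scalars, while a list does not:
>>> pd.api.types.is_scalar(1.5)
True
>>> pd.api.types.is_scalar("foo")
True
>>> pd.api.types.is_scalar(None)
True
>>> pd.api.types.is_scalar([1, 2])
False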
34.22 Extensions
These are primarily intended for library authors looking to extend pandas objects.
34.22.1 pandas.api.extensions.register_dataframe_accessor
pandas.api.extensions.register_dataframe_accessor(name)
Register a custom accessor on DataFrame objects.
Parameters name : str
Name under which the accessor should be registered. A warning is issued if this
name conflicts with a preexisting attribute.
See also:
register_series_accessor, register_index_accessor
Notes
When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the
signature must be:
def __init__(self, pandas_object):
For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor
has an incorrect dtype.
Examples
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor(object):
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def center(self):
        # return the geographic center point of this DataFrame
        lat = self._obj.latitude
        lon = self._obj.longitude
        return (float(lon.mean()), float(lat.mean()))

    def plot(self):
        # plot this array's data on a map
        pass
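Once registered, any DataFrame with latitude and longitude columns gains a geo namespace; a minimal interactive sketch (the data values here are illustrative only):
>>> ds = pd.DataFrame({'longitude': np.linspace(0, 10),
...                    'latitude': np.linspace(0, 20)})
>>> ds.geo.center
(5.0, 10.0)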
34.22.2 pandas.api.extensions.register_series_accessor
pandas.api.extensions.register_series_accessor(name)
Register a custom accessor on Series objects.
Parameters name : str
Name under which the accessor should be registered. A warning is issued if this
name conflicts with a preexisting attribute.
See also:
register_dataframe_accessor, register_index_accessor
Notes
When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the
signature must be
def __init__(self, pandas_object):
For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor
has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor(object):
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def center(self):
        # return the geographic center point of this DataFrame
        lat = self._obj.latitude
        lon = self._obj.longitude
        return (float(lon.mean()), float(lat.mean()))