
UECM 1534 Programming Techniques for Data Processing Jan 18/19

Chapter 04: Introduction to Python
In this chapter, students will learn how to set up Python and its IDE. Following the installation
is an introduction to the pandas data structures in Python. Students will discover how to
extract data by row or column, sort the data, apply functions or methods to these data
structures, and compute basic statistical summaries on the data.

1. Setting Up Python
1.1 Installation of Python
Windows
To get started on Windows, download and run the Anaconda installer. After installation,
you can start the Python IDE, Spyder.

You may also open the Command Prompt application (also known as cmd.exe): right-click
the Start menu and select Command Prompt. Try starting the Python interpreter by typing
python. You should see a message that reports the version of Anaconda you installed:

C:\Users\wesm>python
Python 3.5.2 |Anaconda 4.1.1 (64-bit)| (default, Jul 5 2016, 11:41:13)
[MSC v.1900 64 bit (AMD64)] on win32
>>>

To exit the shell, press Ctrl-D (on Linux or macOS) or Ctrl-Z then Enter (on Windows), or
type the command exit() and press Enter.

Apple (OS X, macOS)


Download the OS X Anaconda installer, which should be named something like
Anaconda3-4.1.0-MacOSX-x86_64.pkg. Double-click the .pkg file to run the installer. When
the installer runs, it automatically appends the Anaconda executable path to your
.bash_profile file. This is located at /Users/$USER/.bash_profile.

To verify everything is working, try launching IPython in the system shell (open the Terminal
application to get a command prompt):

$ ipython

To exit the shell, press Ctrl-D or type exit() and press Enter.

1.2 Comments in Python


Any text preceded by the hash mark (pound sign) # is ignored by the Python interpreter.
This is often used to add comments to code. At times you may also want to exclude certain
blocks of code without deleting them. An easy solution is to comment out the code:

results = []
for line in file_handle:
    # keep the empty lines for now
    # if len(line) == 0:
    #     continue
    results.append(line)

Comments can also occur after a line of executed code. While some programmers prefer
comments to be placed on the line preceding a particular line of code, an inline comment
can be useful at times:

print("Reached this line") # Simple status report

1.3 Getting Started with pandas and Importing library


pandas contains data structures and data manipulation tools designed to make data
cleaning and analysis fast and easy in Python.

You may import (load) the pandas library into your console using the following code:

In [1]: import pandas as pd

Here pd is an abbreviated name for pandas that makes code short and simple. You can
choose another short name, such as pds; it is optional and depends on user preference.
Conventionally, pandas is imported as pd, so whenever you see pd. in code, it's referring to pandas.
You may also find it easier to import Series and DataFrame into the local namespace
since they are so frequently used:

In [2]: from pandas import Series, DataFrame

2. Introduction to pandas Data Structures


To get started with pandas, you will need to get comfortable with its two workhorse data
structures: Series and DataFrame. While they are not a universal solution for every problem,
they provide a solid, easy-to-use basis for most applications.

2.1 Series
A Series is a one-dimensional array-like object containing a sequence of values (of types
similar to NumPy types) and an associated array of data labels, called its index. The
simplest Series is formed from only an array of data:

In [3]: obj = pd.Series([4, 7, -5, 3])

In [4]: obj
Out[4]:
0 4
1 7
2 -5
3 3
dtype: int64

The output of Series shows the index on the left and the values on the right. Since we
did not specify an index for the data, a default one consisting of the integers 0 through N
- 1 (where N is the length of the data) is created. You can get the array representation and
index object of the Series via its values and index attributes (using period (.)).

In [5]: obj.values
Out[5]: array([ 4, 7, -5, 3])

In [6]: obj.index # like range(4)


Out[6]: RangeIndex(start=0, stop=4, step=1)

Often it will be desirable to create a Series with an index identifying each data point with
a label:

In [7]: obj2 = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a','c'])

In [8]: obj2
Out[8]:
d 4
b 7
a -5
c 3
dtype: int64

In [9]: obj2.index
Out[9]: Index(['d', 'b', 'a', 'c'], dtype='object')

You can use labels in the index when selecting single values or a set of values from the
Series object using square brackets ([ ]):

In [10]: obj2['a']
Out[10]: -5

In [11]: obj2['d'] = 6

In [12]: obj2[['c', 'a', 'd']]


Out[12]:
c 3
a -5
d 6

There are other operations, such as filtering and scalar multiplication, as shown in the
following:

In [13]: obj2[obj2 > 0]


Out[13]:
d 6
b 7
c 3
dtype: int64

In [14]: obj2 * 2
Out[14]:
d 12
b 14
a -10
c 6
dtype: int64

In [15]: np.exp(obj2) # import numpy as np


Out[15]:
d 403.428793
b 1096.633158
a 0.006738
c 20.085537
dtype: float64

You can use an “in” operator to see if a particular element is inside the Series.

In [16]: 'b' in obj2


Out[16]: True

In [17]: 'e' in obj2


Out[17]: False
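Note that the in operator tests the index labels, not the data values. A minimal self-contained sketch (the variable names here are illustrative) showing how to test for a value instead:

```python
import pandas as pd

obj = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'])

print('b' in obj)        # True: 'b' is an index label
print(4 in obj)          # False: `in` checks labels, not values
print((obj == 4).any())  # True: this tests membership among the values
```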

Should you have data contained in a Python dict, you can create a Series from it by
passing the dict:

In [18]: sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000,
....: 'Utah': 5000}
In [19]: obj3 = pd.Series(sdata)

In [20]: obj3
Out[20]:
Ohio 35000
Oregon 16000
Texas 71000
Utah 5000
dtype: int64

When you are only passing a dict, the index in the resulting Series will have the dict’s
keys in sorted order. You can override this by passing the dict keys in the order you want
them to appear in the resulting Series:

In [21]: states = ['California', 'Ohio', 'Oregon', 'Texas']

In [22]: obj4 = pd.Series(sdata, index=states)

In [23]: obj4
Out[23]:
California NaN
Ohio 35000.0
Oregon 16000.0
Texas 71000.0
dtype: float64

Here, three values found in sdata were placed in the appropriate locations, but since no
value for 'California' was found, it appears as NaN (not a number), which is considered in
pandas to mark missing or NA values. Since 'Utah' was not included in states, it is excluded
from the resulting object.

The terms "missing" and "NA" are used interchangeably to refer to missing data. The
isnull and notnull functions in pandas can be used to detect missing data:

In [24]: pd.isnull(obj4)
Out[24]:
California True
Ohio False
Oregon False
Texas False
dtype: bool

In [25]: pd.notnull(obj4)
Out[25]:
California False
Ohio True
Oregon True
Texas True
dtype: bool

Series also has these as instance methods:

In [26]: obj4.isnull()
Out[26]:
California True
Ohio False
Oregon False
Texas False
dtype: bool
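The boolean Series returned by these methods can be used directly to count or filter missing entries. A minimal sketch, reusing the sdata and states objects from above:

```python
import pandas as pd

sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
states = ['California', 'Ohio', 'Oregon', 'Texas']
obj4 = pd.Series(sdata, index=states)

n_missing = obj4.isnull().sum()   # count the NaN entries
present = obj4[obj4.notnull()]    # keep only the non-missing rows
print(n_missing)                  # 1 (only 'California' is missing)
print(list(present.index))        # ['Ohio', 'Oregon', 'Texas']
```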

2.2 DataFrame
A DataFrame represents a rectangular table of data and contains an ordered collection of
columns, each of which can be a different value type (numeric, string, boolean, etc.). The
DataFrame has both a row and column index; it can be thought of as a dict of Series all
sharing the same index. Under the hood, the data is stored as one or more two-
dimensional blocks rather than a list, dict, or some other collection of one-dimensional
arrays.

There are many ways to construct a DataFrame, though one of the most common is
from a dict of equal-length lists or NumPy arrays:

In [27]: data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada',
....: 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001,
....: 2002, 2003], 'pop': [1.5, 1.7, 3.6, 2.4, 2.9, 3.2]}
In [28]: frame = pd.DataFrame(data)

The resulting DataFrame will have its index assigned automatically as with Series, and
the columns are placed in sorted order:

In [29]: frame
Out[29]:
pop state year
0 1.5 Ohio 2000
1 1.7 Ohio 2001
2 3.6 Ohio 2002
3 2.4 Nevada 2001
4 2.9 Nevada 2002
5 3.2 Nevada 2003

For large DataFrames, the .head() method selects only the first five rows:

In [30]: frame.head()
Out[30]:
pop state year
0 1.5 Ohio 2000
1 1.7 Ohio 2001
2 3.6 Ohio 2002
3 2.4 Nevada 2001
4 2.9 Nevada 2002

If you specify a sequence of columns, the DataFrame’s columns will be arranged in that
order:

In [31]: pd.DataFrame(data, columns=['year', 'state', 'pop'])


Out[31]:
year state pop
0 2000 Ohio 1.5
1 2001 Ohio 1.7
2 2002 Ohio 3.6
3 2001 Nevada 2.4
4 2002 Nevada 2.9
5 2003 Nevada 3.2

If you pass a column that isn’t contained in the dict, it will appear with missing values
in the result:
In [32]: frame2 = pd.DataFrame(data, columns=['year', 'state',
....: 'pop', 'debt'], index=['one', 'two', 'three', 'four',
....: 'five', 'six'])

In [33]: frame2
Out[33]:
year state pop debt
one 2000 Ohio 1.5 NaN
two 2001 Ohio 1.7 NaN
three 2002 Ohio 3.6 NaN
four 2001 Nevada 2.4 NaN
five 2002 Nevada 2.9 NaN
six 2003 Nevada 3.2 NaN

In [34]: frame2.columns
Out[34]: Index(['year', 'state', 'pop', 'debt'], dtype='object')

A column in a DataFrame can be retrieved as a Series either by dict-like notation or
by attribute:

In [35]: frame2['state']
Out[35]:
one Ohio
two Ohio
three Ohio
four Nevada
five Nevada
six Nevada
Name: state, dtype: object

In [36]: frame2.year
Out[36]:
one 2000
two 2001
three 2002
four 2001
five 2002
six 2003
Name: year, dtype: int64

Attribute-like access (e.g., frame2.year) and tab completion of column names in IPython
is provided as a convenience. frame2[column] works for any column name.

Rows can also be retrieved by name with the special loc attribute:

In [37]: frame2.loc['three']
Out[37]:
year 2002
state Ohio
pop 3.6
debt NaN
Name: three, dtype: object

Columns can be modified by assignment. For example, the empty 'debt' column could be
assigned a scalar value or an array of values:

In [38]: frame2['debt'] = 16.5

In [39]: frame2
Out[39]:
year state pop debt
one 2000 Ohio 1.5 16.5
two 2001 Ohio 1.7 16.5
three 2002 Ohio 3.6 16.5
four 2001 Nevada 2.4 16.5
five 2002 Nevada 2.9 16.5
six 2003 Nevada 3.2 16.5

When you are assigning lists or arrays to a column, the value’s length must match the
length of the DataFrame.

In [40]: frame2['debt'] = np.arange(6.)

In [41]: frame2
Out[41]:
year state pop debt
one 2000 Ohio 1.5 0.0
two 2001 Ohio 1.7 1.0
three 2002 Ohio 3.6 2.0
four 2001 Nevada 2.4 3.0
five 2002 Nevada 2.9 4.0
six 2003 Nevada 3.2 5.0

Assigning a column that doesn’t exist will create a new column. The del keyword will
delete columns as with a dict. As an example of del, a new column of boolean values where
the state column equals 'Ohio' is added:

In [42]: frame2['eastern'] = frame2.state == 'Ohio'

In [43]: frame2
Out[43]:
year state pop debt eastern
one 2000 Ohio 1.5 0.0 True
two 2001 Ohio 1.7 1.0 True
three 2002 Ohio 3.6 2.0 True
four 2001 Nevada 2.4 3.0 False
five 2002 Nevada 2.9 4.0 False
six 2003 Nevada 3.2 5.0 False

New columns CANNOT be created with the frame2.eastern attribute syntax. The del keyword
can then be used to remove this column:

In [44]: del frame2['eastern']

In [45]: frame2.columns
Out[45]: Index(['year', 'state', 'pop', 'debt'], dtype='object')
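The add/delete round trip above can be sketched in a self-contained form (the example data here is a shortened, illustrative version of frame2):

```python
import pandas as pd
import numpy as np

data = {'state': ['Ohio', 'Ohio', 'Nevada'],
        'year': [2000, 2001, 2001],
        'pop': [1.5, 1.7, 2.4]}
frame2 = pd.DataFrame(data)

frame2['debt'] = np.arange(3.)               # assignment creates a new column
frame2['eastern'] = frame2.state == 'Ohio'   # boolean column from a comparison
print(list(frame2.columns))                  # ['state', 'year', 'pop', 'debt', 'eastern']
del frame2['eastern']                        # del removes a column, dict-style
print(list(frame2.columns))                  # ['state', 'year', 'pop', 'debt']
```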

3. Essential Functionality
This section will walk you through the fundamental mechanics of interacting with the data
contained in a Series or DataFrame.

3.1 Reindexing
An important method on pandas objects is reindex, which means to create a new object
with the data conformed to a new index. Consider an example:

In [46]: obj = pd.Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b',
....: 'a', 'c'])

In [47]: obj
Out[47]:
d 4.5
b 7.2
a -5.3
c 3.6
dtype: float64

Calling .reindex() on this Series rearranges the data according to the new index,
introducing missing values if any index values were not already present:

In [48]: obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'])

In [49]: obj2
Out[49]:
a -5.3
b 7.2
c 3.6
d 4.5
e NaN
dtype: float64

With DataFrame, reindex can alter either the (row) index, columns, or both. When passed
only a sequence, it reindexes the rows in the result:

In [50]: frame = pd.DataFrame(np.arange(9).reshape((3, 3)),
....: index=['a', 'c', 'd'],
....: columns=['Ohio', 'Texas', 'California'])

In [51]: frame
Out[51]:
Ohio Texas California
a 0 1 2
c 3 4 5
d 6 7 8

In [52]: frame2 = frame.reindex(['a', 'b', 'c', 'd'])

In [53]: frame2
Out[53]:
Ohio Texas California
a 0.0 1.0 2.0
b NaN NaN NaN
c 3.0 4.0 5.0
d 6.0 7.0 8.0

The columns can be reindexed with the columns keyword:

In [54]: states = ['Texas', 'Utah', 'California']

In [55]: frame.reindex(columns=states)
Out[55]:
Texas Utah California
a 1 NaN 2
c 4 NaN 5
d 7 NaN 8
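reindex also accepts a fill_value argument to use in place of NaN for labels that were not already present. A minimal sketch:

```python
import pandas as pd
import numpy as np

frame = pd.DataFrame(np.arange(9).reshape((3, 3)),
                     index=['a', 'c', 'd'],
                     columns=['Ohio', 'Texas', 'California'])

# The new row 'b' is filled with 0 instead of NaN
frame2 = frame.reindex(['a', 'b', 'c', 'd'], fill_value=0)
print(frame2.loc['b'].tolist())   # [0, 0, 0]
```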

3.2 Dropping Entries from an Axis


Dropping one or more entries from an axis is easy if you already have an index array or list
without those entries. As that can require a bit of munging and set logic, the .drop()
method will return a new object with the indicated value or values deleted from an axis:

In [56]: obj = pd.Series(np.arange(5.), index=['a', 'b', 'c',
....: 'd', 'e'])

In [57]: obj
Out[57]:
a 0.0
b 1.0
c 2.0
d 3.0
e 4.0
dtype: float64

In [58]: new_obj = obj.drop('c')

In [59]: new_obj
Out[59]:
a 0.0
b 1.0
d 3.0
e 4.0
dtype: float64

In [60]: obj.drop(['d', 'c'])


Out[60]:
a 0.0
b 1.0
e 4.0
dtype: float64

With DataFrame, index values can be deleted from either axis. To illustrate this, we
first create an example DataFrame:

In [61]: data = pd.DataFrame(np.arange(16).reshape((4, 4)),
.....: index=['Ohio', 'Colorado', 'Utah', 'New York'],
.....: columns=['one', 'two', 'three', 'four'])

In [62]: data
Out[62]:
one two three four
Ohio 0 1 2 3
Colorado 4 5 6 7
Utah 8 9 10 11
New York 12 13 14 15

Calling drop with a sequence of labels will drop values from the row labels (axis 0):

In [63]: data.drop(['Colorado', 'Ohio'])


Out[63]:
one two three four
Utah 8 9 10 11
New York 12 13 14 15

You can drop values from the columns by passing axis=1 or axis='columns':

In [64]: data.drop('two', axis=1)


Out[64]:
one three four
Ohio 0 2 3
Colorado 4 6 7
Utah 8 10 11
New York 12 14 15

In [65]: data.drop(['two', 'four'], axis='columns')


Out[65]:
one three
Ohio 0 2
Colorado 4 6
Utah 8 10
New York 12 14

3.3 Indexing, Selection, and Filtering


Here are some examples of indexing, selection and filtering:

In [66]: obj = pd.Series(np.arange(4.), index=['a','b','c','d'])

In [67]: obj
Out[67]:
a 0.0
b 1.0
c 2.0
d 3.0
dtype: float64

In [68]: obj['b']
Out[68]: 1.0

In [69]: obj[1]
Out[69]: 1.0

In [70]: obj[2:4]
Out[70]:
c 2.0
d 3.0
dtype: float64

In [71]: obj[['b', 'a', 'd']]


Out[71]:
b 1.0
a 0.0
d 3.0
dtype: float64

In [72]: obj[[1, 3]]


Out[72]:
b 1.0
d 3.0
dtype: float64

In [73]: obj[obj < 2]


Out[73]:
a 0.0
b 1.0
dtype: float64

Slicing with labels behaves differently than normal Python slicing in that the endpoint is
inclusive:

In [74]: obj['b':'c']
Out[74]:
b 1.0
c 2.0
dtype: float64

Setting using these methods modifies the corresponding section of the Series:

In [75]: obj['b':'c'] = 5

In [76]: obj
Out[76]:
a 0.0
b 5.0
c 5.0
d 3.0
dtype: float64
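The contrast between inclusive label slicing and exclusive integer slicing can be sketched side by side (using .iloc for the integer case):

```python
import pandas as pd
import numpy as np

obj = pd.Series(np.arange(4.), index=['a', 'b', 'c', 'd'])

# Label slicing includes the endpoint: 'b', 'c', and 'd'
print(list(obj['b':'d'].index))    # ['b', 'c', 'd']
# Integer slicing excludes the endpoint: positions 1 and 2 only
print(list(obj.iloc[1:3].index))   # ['b', 'c']
```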

Indexing into a DataFrame retrieves one or more columns either with a single value or a
sequence:

In [77]: data = pd.DataFrame(np.arange(16).reshape((4, 4)),
.....: index=['Ohio', 'Colorado', 'Utah', 'New York'],
.....: columns=['one', 'two', 'three', 'four'])

In [78]: data
Out[78]:
one two three four
Ohio 0 1 2 3
Colorado 4 5 6 7
Utah 8 9 10 11
New York 12 13 14 15

In [79]: data['two']
Out[79]:
Ohio 1
Colorado 5
Utah 9
New York 13
Name: two, dtype: int64

In [80]: data[['three', 'one']]


Out[80]:
three one
Ohio 2 0
Colorado 6 4
Utah 10 8
New York 14 12

Indexing like this has a few special cases. First, slicing or selecting data with a boolean
array:

In [81]: data[:2]
Out[81]:
one two three four
Ohio 0 1 2 3
Colorado 4 5 6 7

In [82]: data[data['three'] > 5]


Out[82]:
one two three four
Colorado 4 5 6 7
Utah 8 9 10 11
New York 12 13 14 15

The row selection syntax data[:2] is provided as a convenience. Passing a single
element or a list to the [] operator selects columns.

Another use case is in indexing with a boolean DataFrame, such as one produced by a
scalar comparison:

In [83]: data < 5


Out[83]:
one two three four
Ohio True True True True
Colorado True False False False
Utah False False False False
New York False False False False

In [84]: data[data < 5] = 0

In [85]: data
Out[85]:
one two three four
Ohio 0 0 0 0
Colorado 0 5 6 7
Utah 8 9 10 11
New York 12 13 14 15

3.4 Selection with loc and iloc


For DataFrame label-indexing on the rows, I introduce the special indexing operators .loc
and .iloc. They enable you to select a subset of the rows and columns from a
DataFrame using either axis labels (.loc) or integers (.iloc).

As a preliminary example, let’s select a single row and multiple columns by label:

In [86]: data.loc['Colorado', ['two', 'three']]


Out[86]:
two 5
three 6
Name: Colorado, dtype: int64

We’ll then perform some similar selections with integers using .iloc:

In [87]: data.iloc[2, [3, 0, 1]]


Out[87]:
four 11
one 8
two 9
Name: Utah, dtype: int64

In [88]: data.iloc[2]
Out[88]:
one 8
two 9
three 10
four 11
Name: Utah, dtype: int64

In [89]: data.iloc[[1, 2], [3, 0, 1]]


Out[89]:
four one two
Colorado 7 0 5
Utah 11 8 9

Both indexing functions work with slices in addition to single labels or lists of labels:

In [90]: data.loc[:'Utah', 'two']


Out[90]:
Ohio 0
Colorado 5
Utah 9
Name: two, dtype: int64

In [91]: data.iloc[:, :3][data.three > 5]


Out[91]:
one two three
Colorado 0 5 6
Utah 8 9 10
New York 12 13 14

So there are many ways to select and rearrange the data contained in a pandas object. For
DataFrame, Table below provides a short summary of many of them.
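The label- and integer-based selection above can be sketched together in a self-contained form (this rebuilds the original data DataFrame, before the data[data < 5] = 0 assignment):

```python
import pandas as pd
import numpy as np

data = pd.DataFrame(np.arange(16).reshape((4, 4)),
                    index=['Ohio', 'Colorado', 'Utah', 'New York'],
                    columns=['one', 'two', 'three', 'four'])

# .loc selects by label, .iloc by integer position; both accept a row, column pair
print(data.loc['Utah', 'three'])   # 10
print(data.iloc[2, 2])             # 10, the same cell by position

# A label slice on .loc includes its endpoint
sub = data.loc['Colorado':'Utah', ['one', 'four']]
print(sub.shape)                   # (2, 2)
```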

3.5 Function Application and Mapping


In [92]: frame = pd.DataFrame(np.random.randn(4, 3),
.....: columns=list('bde') , index=['Utah', 'Ohio', 'Texas',
.....: 'Oregon'])

In [93]: frame
Out[93]:
b d e
Utah -0.204708 0.478943 -0.519439
Ohio -0.555730 1.965781 1.393406
Texas 0.092908 0.281746 0.769023
Oregon 1.246435 1.007189 -1.296221

In [94]: np.abs(frame)
Out[94]:
b d e
Utah 0.204708 0.478943 0.519439
Ohio 0.555730 1.965781 1.393406
Texas 0.092908 0.281746 0.769023
Oregon 1.246435 1.007189 1.296221

Another frequent operation is applying a function on one-dimensional arrays to each
column or row. DataFrame’s .apply method does exactly this:

In [95]: f = lambda x: x.max() - x.min()

In [96]: frame.apply(f)
Out[96]:
b 1.802165
d 1.684034
e 2.689627
dtype: float64

Here the function f, which computes the difference between the maximum and minimum
of a Series, is invoked once on each column in frame. The result is a Series having the
columns of frame as its index.

If you pass axis='columns' to .apply, the function will be invoked once per row
instead:

In [97]: frame.apply(f, axis='columns')


Out[97]:
Utah 0.998382
Ohio 2.521511
Texas 0.676115
Oregon 2.542656
dtype: float64

The function passed to apply need not return a scalar value; it can also return a Series
with multiple values:

In [98]: def f(x):
.....: return pd.Series([x.min(), x.max()], index=['min',
.....: 'max'])

In [99]: frame.apply(f)
Out[99]:
b d e
min -0.555730 0.281746 -1.296221
max 1.246435 1.965781 1.393406

Element-wise Python functions can be used, too. Suppose you wanted to compute a
formatted string from each floating-point value in frame. You can do this with .applymap:

In [100]: format = lambda x: '%.2f' % x

In [101]: frame.applymap(format)
Out[101]:
b d e
Utah -0.20 0.48 -0.52
Ohio -0.56 1.97 1.39
Texas 0.09 0.28 0.77
Oregon 1.25 1.01 -1.30

The reason for the name .applymap is that Series has a .map method for applying an
element-wise function:

In [102]: frame['e'].map(format)
Out[102]:
Utah -0.52
Ohio 1.39
Texas 0.77
Oregon -1.30
Name: e, dtype: object
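A minimal self-contained sketch of .map (the Series values here are illustrative, taken from the 'e' column above):

```python
import pandas as pd

s = pd.Series([-0.52, 1.39, 0.77], index=['Utah', 'Ohio', 'Texas'])

# .map applies an element-wise function to every value of a Series
formatted = s.map(lambda x: '%.1f' % x)
print(formatted.tolist())   # ['-0.5', '1.4', '0.8']
```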

3.6 Sorting and Ranking


Sorting a dataset by some criterion is another important built-in operation. To sort
lexicographically by row or column index, use the .sort_index method, which returns a
new, sorted object:

In [103]: obj = pd.Series(range(4), index=['d', 'a', 'b', 'c'])

In [104]: obj.sort_index()
Out[104]:
a 1
b 2
c 3
d 0
dtype: int64

With a DataFrame, you can sort by index on either axis:

In [105]: frame = pd.DataFrame(np.arange(8).reshape((2, 4)),
.....: index=['three', 'one'],
.....: columns=['d', 'a', 'b', 'c'])
In [106]: frame.sort_index()
Out[106]:
d a b c
one 4 5 6 7
three 0 1 2 3

In [107]: frame.sort_index(axis=1)
Out[107]:
a b c d
three 1 2 3 0
one 5 6 7 4

The data is sorted in ascending order by default, but can be sorted in descending order,
too:

In [108]: frame.sort_index(axis=1, ascending=False)


Out[108]:
d c b a
three 0 3 2 1
one 4 7 6 5

To sort a Series by its values, use its .sort_values method:

In [109]: obj = pd.Series([4, 7, -3, 2])

In [110]: obj.sort_values()
Out[110]:
2 -3
3 2
0 4
1 7
dtype: int64

Any missing values are sorted to the end of the Series by default:

In [111]: obj = pd.Series([4, np.nan, 7, np.nan, -3, 2])

In [112]: obj.sort_values()
Out[112]:
4 -3.0
5 2.0
0 4.0
2 7.0
1 NaN
3 NaN
dtype: float64

When sorting a DataFrame, you can use the data in one or more columns as the sort
keys. To do so, pass one or more column names to the by option of .sort_values:

In [113]: frame = pd.DataFrame({'b': [4, 7, -3, 2],
.....: 'a': [0, 1, 0, 1]})

In [114]: frame
Out[114]:
a b
0 0 4
1 1 7
2 0 -3
3 1 2

In [115]: frame.sort_values(by='b')
Out[115]:
a b
2 0 -3
3 1 2
0 0 4
1 1 7

To sort by multiple columns, pass a list of names:

In [116]: frame.sort_values(by=['a', 'b'])


Out[116]:
a b
2 0 -3
0 0 4
3 1 2
1 1 7
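The ascending option also accepts a list, so each sort key can have its own direction. A minimal sketch:

```python
import pandas as pd

frame = pd.DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})

# Sort 'a' ascending, but 'b' descending within each 'a' group
result = frame.sort_values(by=['a', 'b'], ascending=[True, False])
print(result.index.tolist())   # [0, 2, 1, 3]
```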

Ranking assigns ranks from one through the number of valid data points in an array. The
rank methods for Series and DataFrame are the place to look; by default rank breaks
ties by assigning each group the mean rank:

In [117]: obj = pd.Series([7, -5, 7, 4, 2, 0, 4])

In [118]: obj.rank()
Out[118]:
0 6.5
1 1.0
2 6.5
3 4.5
4 3.0
5 2.0
6 4.5
dtype: float64

Ranks can also be assigned according to the order in which they’re observed in the data:

In [119]: obj.rank(method='first')
Out[119]:
0 6.0
1 1.0
2 7.0
3 4.0
4 3.0
5 2.0
6 5.0
dtype: float64

Here, instead of using the average rank 6.5 for the entries 0 and 2, they instead have been
set to 6 and 7 because label 0 precedes label 2 in the data.

You can rank in descending order, too:

# Assign tie values the maximum rank in the group
In [120]: obj.rank(ascending=False, method='max')
Out[120]:
0 2.0
1 7.0
2 2.0
3 4.0
4 5.0
5 6.0
6 4.0
dtype: float64
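Other tie-breaking methods exist besides 'average', 'first', and 'max'. A minimal sketch of two of them on the same Series:

```python
import pandas as pd

obj = pd.Series([7, -5, 7, 4, 2, 0, 4])

# method='min' gives tied values the smallest rank in their group;
# method='dense' is like 'min' but ranks increase by 1 between groups
print(obj.rank(method='min').tolist())    # [6.0, 1.0, 6.0, 4.0, 3.0, 2.0, 4.0]
print(obj.rank(method='dense').tolist())  # [5.0, 1.0, 5.0, 4.0, 3.0, 2.0, 4.0]
```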

DataFrame can compute ranks over the rows or the columns:

In [121]: frame = pd.DataFrame({'b': [4.3, 7, -3, 2],
.....: 'a': [0, 1, 0, 1],
.....: 'c': [-2, 5, 8, -2.5]})

In [122]: frame
Out[122]:
a b c
0 0 4.3 -2.0
1 1 7.0 5.0
2 0 -3.0 8.0
3 1 2.0 -2.5

In [123]: frame.rank(axis='columns')
Out[123]:
a b c
0 2.0 3.0 1.0
1 1.0 3.0 2.0
2 2.0 1.0 3.0
3 2.0 3.0 1.0

4. Summarizing and Computing Descriptive Statistics
pandas objects are equipped with a set of common mathematical and statistical methods. Most
of these fall into the category of reductions or summary statistics, methods that extract a single
value (like the sum or mean) from a Series or a Series of values from the rows or columns of
a DataFrame. Consider a small DataFrame:

In [124]: df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5],
.....: [np.nan, np.nan], [0.75, -1.3]],
.....: index=['a', 'b', 'c', 'd'],
.....: columns=['one', 'two'])

In [125]: df
Out[125]:
one two
a 1.40 NaN
b 7.10 -4.5
c NaN NaN
d 0.75 -1.3

Calling DataFrame’s .sum method returns a Series containing column sums:

In [126]: df.sum()
Out[126]:
one 9.25
two -5.80
dtype: float64

Passing axis='columns' or axis=1 sums across the columns instead:

In [127]: df.sum(axis='columns')
Out[127]:
a 1.40
b 2.60
c NaN
d -0.55
dtype: float64

NA values are excluded unless the entire slice (row or column in this case) is NA. This can be
disabled with the skipna option:

In [128]: df.mean(axis='columns', skipna=False)


Out[128]:
a NaN
b 1.300
c NaN
d -0.275
dtype: float64
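The NA-handling behavior above can be sketched in a self-contained form:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5],
                   [np.nan, np.nan], [0.75, -1.3]],
                  index=['a', 'b', 'c', 'd'], columns=['one', 'two'])

sums = df.sum()                 # NaN is skipped by default: one -> 9.25, two -> -5.8
print(df['one'].mean())         # mean over the three non-NA values
# With skipna=False, any row containing a NaN produces NaN
row_means = df.mean(axis='columns', skipna=False)
print(int(row_means.isnull().sum()))   # 2 (rows 'a' and 'c')
```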

Some methods, like .idxmin and .idxmax, return indirect statistics like the index value
where the minimum or maximum values are attained:

In [129]: df.idxmax()
Out[129]:
one b
two d
dtype: object

Other methods are accumulations:

In [130]: df.cumsum()
Out[130]:
one two
a 1.40 NaN
b 8.50 -4.5
c NaN NaN
d 9.25 -5.8

Another type of method is neither a reduction nor an accumulation. .describe is one


such example, producing multiple summary statistics in one shot:

In [131]: df.describe()
Out[131]:
one two
count 3.000000 2.000000
mean 3.083333 -2.900000
std 3.493685 2.262742
min 0.750000 -4.500000
25% 1.075000 -3.700000
50% 1.400000 -2.900000
75% 4.250000 -2.100000
max 7.100000 -1.300000

On non-numeric data, .describe produces alternative summary statistics:

In [132]: obj = pd.Series(['a', 'a', 'b', 'c'] * 4)

In [133]: obj.describe()
Out[133]:
count 16
unique 3
top a
freq 8
dtype: object

See table below for a full list of summary statistics and related methods.

4.1 Correlation and Covariance


Some summary statistics, like correlation and covariance, are computed from pairs of
arguments. Let’s consider some DataFrames of stock prices and volumes obtained from
Yahoo! Finance using the add-on pandas-datareader package. If you don’t have it installed
already, it can be obtained via conda or pip:

1. Open cmd in Windows.
2. Type pip install pandas-datareader

pandas-datareader is useful for reading financial data from Yahoo! Finance. To read data
for a particular stock, you can use the following code:

In [134]: import pandas_datareader.data as web

In [135]: all_data = {ticker: web.get_data_yahoo(ticker) for ticker
.....: in ['AAPL', 'IBM', 'MSFT', 'GOOG']}

In [136]: price = pd.DataFrame({ticker: data['Adj Close']
.....: for ticker, data in all_data.items()})

In [137]: volume = pd.DataFrame({ticker: data['Volume']
.....: for ticker, data in all_data.items()})

Note that the Yahoo! Finance API may have changed or become unavailable since Yahoo! was
acquired by Verizon in 2017, so this code may fail. Refer to the pandas-datareader
documentation online for the latest functionality, or visit this link for a possible fix:
https://www.youtube.com/watch?v=eSpH6fPd5Yw

Now, let’s compute percent changes of the prices, using method .pct_change:

In [138]: returns = price.pct_change()

In [139]: returns.tail()
Out[139]:
AAPL GOOG IBM MSFT
Date
2016-10-17 -0.000680 0.001837 0.002072 -0.003483
2016-10-18 -0.000681 0.019616 -0.026168 0.007690
2016-10-19 -0.002979 0.007846 0.003583 -0.002255
2016-10-20 -0.000512 -0.005652 0.001719 -0.004867
2016-10-21 -0.003930 0.003011 -0.012474 0.042096

The .corr method of Series computes the correlation of the overlapping, non-NA,
aligned-by-index values in two Series. Relatedly, .cov computes the covariance:

In [140]: returns['MSFT'].corr(returns['IBM'])
Out[140]: 0.49976361144151144

In [141]: returns['MSFT'].cov(returns['IBM'])
Out[141]: 8.8706554797035462e-05

DataFrame’s .corr and .cov methods, on the other hand, return a full correlation or
covariance matrix as a DataFrame, respectively:

In [142]: returns.corr()
Out[142]:
AAPL GOOG IBM MSFT
AAPL 1.000000 0.407919 0.386817 0.389695
GOOG 0.407919 1.000000 0.405099 0.465919
IBM 0.386817 0.405099 1.000000 0.499764
MSFT 0.389695 0.465919 0.499764 1.000000

In [143]: returns.cov()
Out[143]:
AAPL GOOG IBM MSFT
AAPL 0.000277 0.000107 0.000078 0.000095
GOOG 0.000107 0.000251 0.000078 0.000108
IBM 0.000078 0.000078 0.000146 0.000089
MSFT 0.000095 0.000108 0.000089 0.000215
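Since the Yahoo! data may not be retrievable, the behavior of .corr and .cov can also be sketched on two small hand-made Series (the values here are purely illustrative):

```python
import pandas as pd

x = pd.Series([1.0, 2.0, 3.0, 4.0])
y = pd.Series([2.0, 4.0, 6.0, 8.0])   # y is perfectly linear in x

print(x.corr(y))   # 1.0 -- perfect positive correlation
print(x.cov(y))    # sample covariance, here 10/3
```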

Using DataFrame’s .corrwith method, you can compute pairwise correlations between a
DataFrame’s columns or rows and another Series or DataFrame. Passing a DataFrame
computes the correlations of matching column names. Let’s correlate the returns with the
volume of each stock:

In [144]: returns.corrwith(volume)
Out[144]:
AAPL -0.075565
GOOG -0.007067
IBM -0.204849
MSFT -0.092950
dtype: float64

4.2 Plotting in Pandas


In pandas, one can plot a DataFrame using the .plot method. For example, we can plot the
stock price DataFrame simply using the following code:

In [145]: price.plot()
Out[145]:

(A line plot of each stock's price series is displayed.)

4.3 Unique Values, Value Counts, and Membership


Another class of related methods extracts information about the values contained in a one-
dimensional Series. To illustrate these, consider this example:

In [146]: obj = pd.Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c',
.....: 'c'])

The first function is .unique, which gives you an array of the unique values in a Series:

In [147]: uniques = obj.unique()

In [148]: uniques
Out[148]: array(['c', 'a', 'd', 'b'], dtype=object)

Relatedly, .value_counts computes a Series containing value frequencies:

In [149]: obj.value_counts()
Out[149]:
c 3
a 3
b 2
d 1
dtype: int64

The Series is sorted by value in descending order as a convenience. .value_counts is
also available as a top-level pandas method that can be used with any array or sequence:

In [150]: pd.value_counts(obj.values, sort=False)


Out[150]:
a 3
b 2
c 3
d 1
dtype: int64

A related method is Index.get_indexer, which gives you an index array mapping an array of
possibly non-distinct values into another array of distinct values:

In [151]: to_match = pd.Series(['c', 'a', 'b', 'b', 'c', 'a'])

In [152]: unique_vals = pd.Series(['c', 'b', 'a'])

In [153]: pd.Index(unique_vals).get_indexer(to_match)
Out[153]: array([0, 2, 1, 1, 0, 2])

In some cases, you may want to compute a histogram on multiple related columns in a
DataFrame. Here’s an example:

In [154]: data = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
.....: 'Qu2': [2, 3, 1, 2, 3],
.....: 'Qu3': [1, 5, 2, 4, 4]})

In [155]: data
Out[155]:
Qu1 Qu2 Qu3
0 1 2 1
1 3 3 5
2 4 1 2
3 3 2 4
4 4 3 4

Passing pandas.value_counts to this DataFrame’s .apply method gives:
In [156]: result = data.apply(pd.value_counts).fillna(0)

In [157]: result
Out[157]:
Qu1 Qu2 Qu3
1 1.0 1.0 1.0
2 0.0 2.0 1.0
3 2.0 2.0 0.0
4 2.0 0.0 2.0
5 0.0 0.0 1.0

Here, the row labels in the result are the distinct values occurring in all of the columns.
The values are the respective counts of these values in each column.
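The histogram computation above can be sketched in a self-contained form. Note that the top-level pd.value_counts function may be deprecated in newer pandas versions, so this sketch calls Series.value_counts inside a lambda instead:

```python
import pandas as pd

data = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
                     'Qu2': [2, 3, 1, 2, 3],
                     'Qu3': [1, 5, 2, 4, 4]})

# Count each distinct value per column; values absent from a column become 0
result = data.apply(lambda col: col.value_counts()).fillna(0).sort_index()
print(result['Qu1'].tolist())   # [1.0, 0.0, 2.0, 2.0, 0.0] for values 1..5
```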
