
Python for Data Analysis

• Overview of Python Libraries for Data Scientists
• Datasets
• Reading Datasets
• Selecting and Filtering the Datasets
• Data manipulation:
  • sorting
  • grouping
  • rearranging

By
Dr. Ziad Al-Sharif
Python Libraries for Data Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn

Visualization libraries:
• matplotlib
• Seaborn

and many more …

Python Libraries for Data Science
NumPy:
 introduces objects for multidimensional arrays and matrices, as well as functions
that allow you to easily perform advanced mathematical and statistical operations on
those objects

 provides vectorization of mathematical operations on arrays and matrices, which
significantly improves performance

 many other Python libraries are built on NumPy

Link: http://www.numpy.org/
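A minimal sketch of NumPy's vectorized operations (the array values are just illustrative):

In [ ]: import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 array
b = np.arange(4).reshape(2, 2)           # another 2x2 array

a + b           # element-wise addition, no explicit Python loop
a * 10          # broadcasting a scalar across the array
a.mean(axis=0)  # column means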

Python Libraries for Data Science
SciPy:
 pronounced “Sigh Pie”

 collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics, and more

 part of the SciPy Stack

 built on NumPy

Link: https://www.scipy.org/scipylib/
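A small sketch using SciPy's numerical integration routine scipy.integrate.quad (the integrand is just an example):

In [ ]: import numpy as np
from scipy import integrate

# integrate sin(x) from 0 to pi; the exact answer is 2
value, abs_error = integrate.quad(np.sin, 0, np.pi)
print(value, abs_error)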

Python Libraries for Data Science
Pandas:
 adds data structures and tools designed to work with table-like data (similar to
Series and Data Frames in R); good for datasets

 provides tools for data manipulation: reshaping, merging, sorting, slicing,
aggregation, etc.

 allows handling missing data

Link: http://pandas.pydata.org/
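A minimal pandas sketch (the column names and values are illustrative):

In [ ]: import pandas as pd

people = pd.DataFrame({
    'name':   ['Alice', 'Bob', 'Carol'],
    'salary': [90000, 85000, None],   # None is stored as NaN (missing)
})

people.sort_values(by='salary')   # sorting
people['salary'].fillna(0)        # one way to handle missing data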

Python Libraries for Data Science
matplotlib:
 Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats

 a set of functionalities similar to those of MATLAB

 line plots, scatter plots, bar charts, histograms, pie charts, etc.

 relatively low-level; some effort is needed to create advanced visualizations


Link: https://matplotlib.org/
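A short matplotlib sketch producing a simple line plot (the data is illustrative):

In [ ]: import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x), label='sin(x)')   # a basic line plot
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.legend()
plt.show()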

Python Libraries for Data Science
Seaborn:
 based on matplotlib

 provides a high-level interface for drawing attractive statistical graphics

 similar (in style) to the popular ggplot2 library in R

Link: https://seaborn.pydata.org/
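A minimal seaborn sketch, assuming a recent seaborn version; "tips" is one of the small example datasets bundled with seaborn:

In [ ]: import seaborn as sns

tips = sns.load_dataset('tips')
sns.scatterplot(data=tips, x='total_bill', y='tip', hue='sex')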

Python Libraries for Data Science
SciKit-Learn:
 provides Machine Learning (ML) algorithms: classification, regression,
clustering, model validation, etc.

 built on NumPy, SciPy, and matplotlib

Link: http://scikit-learn.org/
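A compact scikit-learn sketch: fitting a classifier on the iris dataset that ships with the library (load_iris, train_test_split, and LogisticRegression are standard scikit-learn APIs):

In [ ]: from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model.score(X_test, y_test)   # accuracy on the held-out data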

Python Libraries for Data Science
statsmodels:
 provides classes and functions for estimating many different statistical models,
as well as for conducting statistical tests and statistical data exploration

Link: https://www.statsmodels.org
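A brief statsmodels sketch: an ordinary least squares (OLS) regression fitted with an R-style formula (the column names and values are illustrative):

In [ ]: import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    'salary':        [90, 95, 120, 135, 150],
    'yrs_since_phd': [1, 3, 10, 15, 20],
})

result = smf.ols('salary ~ yrs_since_phd', data=data).fit()
print(result.summary())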
Summary

Reading and Loading Datasets
A Case Study
https://www.just.edu.jo/~zasharif/Web/SE412/Examples/ML/CaseStudy/Salaries.csv

https://www.just.edu.jo/~zasharif/Web/SE412/Examples/ML/CaseStudy/flights.csv

https://archive.ics.uci.edu/ml/machine-learning-databases/autos/
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
Start Jupyter Notebook
Anaconda Users:
From your shell or cmd terminal, type this command:

$ jupyter notebook

Loading Python Libraries

In [ ]: #Import Python Libraries

import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt   # note: import pyplot, not the top-level matplotlib package
import seaborn as sns

Press Shift+Enter to execute the Jupyter cell

Reading data using pandas from a public website
In [ ]: #Read csv file
url = "https://www.just.edu.jo/~zasharif/Web/SE412/Examples/ML/CaseStudy/Salaries.csv"
df = pd.read_csv(url)

Note: The above command has many optional arguments to fine-tune the data import process.

There are a number of pandas commands to read other data formats:

pd.read_excel('myfile.xlsx', sheet_name='Sheet1', index_col=None, na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5', 'df')
Using Colab from Google
• Colaboratory, or “Colab” for short, is a Jupyter Notebook environment hosted by Google that
allows you to write and execute Python code through your browser.
• it began as an internal tool for data analysis at Google.
• Colab is easy to use and links to your Google account.
• provides free access to GPUs and TPUs (on the cloud),
• requires zero configuration,
• makes it easy to share your code with the community.

• Many people use this tool for their machine learning tasks.
• Most users choose Colab for the free resources to run their Data Science experiments.

Uploading files from your local file system in Colab
• files.upload returns a dictionary of the files which were uploaded.
• The dictionary is keyed by the file name; the values are the data that were
uploaded.

from google.colab import files

uploaded = files.upload()

for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))

Downloading files from Colab to your local file system
• files.download will invoke a browser download of the file to your local computer.
from google.colab import files

with open('example.txt', 'w') as f:
  f.write('some content')

files.download('example.txt')

More Information can be found about External data: Local Files, Drive, Sheets, and Cloud Storage at:
https://colab.research.google.com/notebooks/io.ipynb#scrollTo=p2E4EKhCWEC5
Reading data using pandas in Google’s Colab
In [ ]: # To upload from your local drive, start with the following code
from google.colab import files
uploaded = files.upload()

# It will prompt you to select a file. Click on “Choose Files”, then select
# and upload the file. Wait for the file to be 100% uploaded.
# You should see the name of the file once Colab has uploaded it.
#
# Finally, type in the following code to import it into a dataframe
# (make sure the filename matches the name of the uploaded file).
# For example, to read the uploaded file Salaries.csv:

import io
df = pd.read_csv(io.BytesIO(uploaded['Salaries.csv']))
# The dataset is now stored in a pandas DataFrame
df.head(2)

Resource: https://towardsdatascience.com/3-ways-to-load-csv-files-into-colab-7c14fcbdcb92
Introduction to pandas and its Data Frame
part #1

By
Dr. Ziad Al-Sharif

Exploring data frames
In [3]: #List first 5 records
df.head()

Out[3]:

Hands-on exercises

 Try to read the first 10, 20, and 50 records;

 Can you guess how to view the last few records? Hint: use the tail() method
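A possible solution sketch (assuming the Salaries data is loaded in df):

In [ ]: df.head(10)   # first 10 records; likewise df.head(20), df.head(50)

In [ ]: df.tail(5)    # the last 5 records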

Data Frame data types
Pandas Type                Native Python Type           Description
object                     string                       The most general dtype. Will be assigned to your
                                                        column if the column has mixed types (numbers
                                                        and strings).
int64                      int                          Numeric values. 64 refers to the memory
                                                        allocated to hold each value.
float64                    float                        Numeric values with decimals. If a column
                                                        contains numbers and NaNs (see below), pandas
                                                        will default to float64, in case your missing
                                                        value has a decimal.
datetime64, timedelta[ns]  N/A (but see the datetime    Values meant to hold time data. Look into these
                           module in Python’s           for time-series experiments.
                           standard library)

Data Frame data types
In [4]: #Check a particular column type
df['salary'].dtype

Out[4]: dtype('int64')

In [5]: #Check types for all the columns
df.dtypes

Out[5]:

Data Frames attributes
Pandas’ Python objects have attributes and methods.

df.attribute   description
dtypes         list the types of the columns
columns        list the column names
axes           list the row labels and column names
ndim           number of dimensions
size           number of elements
shape          return a tuple representing the dimensionality
values         NumPy representation of the data
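A quick sketch exercising these attributes (assuming the Salaries data is loaded in df):

In [ ]: df.shape     # (rows, columns) tuple
df.ndim      # 2 for a DataFrame
df.columns   # the column names
df.dtypes    # the type of each column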

Hands-on exercises

 Find how many records this data frame has;

 How many elements are there?

 What are the column names?

 What types of columns do we have in this data frame?
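A possible solution sketch using the attributes above:

In [ ]: df.shape[0]   # number of records (rows)
df.size       # number of elements
df.columns    # column names
df.dtypes     # column types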

Data Frames methods
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)

df.method() description
head( [n] ), tail( [n] ) first/last n rows

describe() generate descriptive statistics (for numeric columns only)

max(), min() return max/min values for all numeric columns

mean(), median() return mean/median values for all numeric columns

std() standard deviation

sample([n]) returns a random sample of the data frame

dropna() drop all the records with missing values
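A short sketch of a few of these methods (assuming df holds the Salaries data):

In [ ]: df.describe()   # descriptive statistics for the numeric columns
df.sample(3)    # a random sample of 3 rows
df.dropna()     # drop the rows that contain missing values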


Hands-on exercises

 Give the summary for the numeric columns in the dataset

 Calculate standard deviation for all numeric columns;

 What are the mean values of the first 50 records in the dataset? Hint: use the
head() method to subset the first 50 records and then calculate the mean
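A possible solution sketch (newer pandas versions may need numeric_only=True to skip the text columns):

In [ ]: df.describe()                         # summary of the numeric columns
df.std(numeric_only=True)             # standard deviation per numeric column
df.head(50).mean(numeric_only=True)   # mean values of the first 50 records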

Selecting a column in a Data Frame
• How do I select a subset of a DataFrame?

• Method 1: Subset the data frame using column name:


df['salary']

• Method 2: Use the column name as an attribute:


df.salary

Note: there is an attribute rank for pandas data frames, so to select a column with the name "rank" we
should use method 1.

Hands-on exercises

 Calculate the basic statistics for the salary column;

 Find how many values are in the salary column (use the count method);

 Calculate the average salary;

Data Frames groupby method
Using "group by" method we can:
• Split the data into groups based on some criteria (Split a table into groups)
• Calculate/Apply statistics (or apply a function) to each group (Apply some operations to each of those smaller tables )
• Combine the results
• E.g. if we want to find the mean for each group based on the rank of the professors, we may use
groupby as follows:
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])

In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()
Out[ ]:

Data Frames groupby method

Once the groupby object is created, we can calculate various statistics for each group:

In [ ]: #Calculate mean salary for each professor rank:


df.groupby('rank')[['salary']].mean()

Out[ ]:

Note: If single brackets are used to specify the column (e.g. salary), then the output is a Pandas Series object.
When double brackets are used, the output is a Data Frame.
Data Frames groupby method
groupby performance notes:
- No grouping/splitting occurs until it is needed;
  creating the groupby object only verifies that you have passed a valid mapping.
- By default, the group keys are sorted during the groupby operation;
  you may want to pass sort=False for a potential speedup:

In [ ]: #Calculate mean salary for each professor rank:


df.groupby(['rank'], sort=False)[['salary']].mean()
Out[ ]:

Data Frame: filtering
• How do I filter specific rows from a DataFrame?
• To subset the data we can apply Boolean indexing.
This indexing is commonly known as a filter.
• E.g. to subset the rows in which the salary value is
greater than $120K:

In [ ]: #Select the rows in which the salary value is greater than $120K:


df_sub = df[ df['salary'] > 120000 ]

Any Boolean operator can be used to subset the data:


> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;

In [ ]: #Select only those rows that contain female professors:


df_f = df[ df['sex'] == 'Female' ]
Data Frames: Slicing

There are a number of ways to subset the Data Frame:


• one or more columns
• one or more rows
• a subset of rows and columns

Rows and columns can be selected by their position or label

Data Frames: Slicing
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):

In [ ]: #Select column salary:


df['salary']

When we need to select more than one column and/or make the output a
DataFrame, we should use double brackets:

In [ ]: #Select the columns rank and salary:


df[['rank','salary']]

Data Frames: Selecting rows
If we need to select a range of rows, we can specify the range using ":"

In [ ]: #Select rows by their position:


df[10:20]

Out[ ]:

• Notice that the first row has position 0, and the last value in the range is omitted:
• So for the 0:10 range, the first 10 rows are returned,
with positions starting at 0 and ending with 9

Data Frames: method loc
If we need to select a range of rows using their labels, we can use the method loc:

In [ ]: #Select rows by their labels:


df.loc[10:20,['rank','sex','salary']]

Out[ ]:

Data Frames: method iloc
If we need to select a range of rows and/or columns using their positions, we can
use the method iloc:
In [ ]: #Select rows by their index:
df.iloc[10:20,[0, 3, 4, 5]]

Out[ ]:

Data Frames: method iloc (summary)
df.iloc[0]                # First row of a data frame
df.iloc[i]                # (i+1)th row
df.iloc[-1]               # Last row

df.iloc[:, 0]             # First column
df.iloc[:, -1]            # Last column

df.iloc[0:7]              # First 7 rows
df.iloc[:, 0:2]           # First 2 columns
df.iloc[1:3, 0:2]         # Second through third rows and first 2 columns
df.iloc[[0, 5], [1, 3]]   # 1st and 6th rows, 2nd and 4th columns

Data Frames: Sorting
We can sort the data by a value in a column. By default the sorting will occur in
ascending order and a new data frame is returned.

In [ ]: # Create a new data frame from the original, sorted by the column salary
df_sorted = df.sort_values( by ='salary')
df_sorted.head()

Out[ ]:

Data Frames: Sorting
We can sort the data using 2 or more columns:

In [ ]: df_sorted = df.sort_values( by=['yrs.since.phd','salary'], ascending = [True, False])


df_sorted.head(10)

Out[ ]:

Extra Reading
Handling Missing Values

Missing Values
Missing values are marked as NaN

In [ ]: # Read a dataset with missing values


url = "https://www.just.edu.jo/~zasharif/Web/SE412/Examples/ML/CaseStudy/flights.csv"
df_flights = pd.read_csv(url)

In [ ]: # Select the rows that have at least one missing value


df_flights[df_flights.isnull().any(axis=1)].head()

Out[ ]:

Missing Values
There are a number of methods to deal with missing values in the data frame:

df.method                   description
dropna()                    Drop missing observations
dropna(how='all')           Drop observations where all cells are NA
dropna(axis=1, how='all')   Drop a column if all of its values are missing
dropna(thresh=5)            Drop rows that contain fewer than 5 non-missing values
fillna(0)                   Replace missing values with zeros
isnull()                    Returns True if the value is missing
notnull()                   Returns True for non-missing values
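A short sketch of these methods (assuming df_flights holds the flights data loaded earlier):

In [ ]: df_flights.dropna().head()             # keep only the complete rows
df_flights.dropna(axis=1, how='all')   # drop columns that are entirely missing
df_flights.fillna(0).head()            # replace NaNs with zeros
df_flights.isnull().sum()              # count the missing values per column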


Missing Values
• When summing the data, missing values will be treated as zero
• If all values are missing, the sum will be equal to NaN
• cumsum() and cumprod() methods ignore missing values but preserve
them in the resulting arrays
• Missing values in GroupBy method are excluded
• Many descriptive statistics methods have a skipna option to control whether missing
data should be excluded. This value is set to True by default
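A small sketch of this behavior (the Series values are illustrative):

In [ ]: import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
s.sum()               # 4.0: the missing value is treated as zero
s.sum(skipna=False)   # nan: missing values are no longer skipped
s.cumsum()            # [1.0, NaN, 4.0]: NaN is preserved in the result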

Aggregation Functions in Pandas
Aggregation - computing a summary statistic about each group, e.g.
• compute group sums or means
• compute group sizes/counts

Common aggregation functions:

min, max
count, sum, prod
mean, median, mode, mad
std, var

Aggregation Functions in Pandas
The agg() method is useful when multiple statistics are computed per column:

In [ ]: df_flights[['DepDelay','ArrDelay']].agg(['min','mean','max'])

Out[ ]:

Basic Descriptive Statistics
df.method description
describe Basic statistics (count, mean, std, min, quantiles, max)

min, max Minimum and maximum values

mean, median, mode Arithmetic average, median and mode

var, std Variance and standard deviation

sem Standard error of mean

skew Sample skewness

kurt Kurtosis

Graphics to explore the data
• The Seaborn package is built on matplotlib but provides a high-level interface for drawing
attractive statistical graphics. It specifically targets statistical data visualization
• You can also create plots directly in pandas

To show graphs within a Jupyter notebook, you need to include the inline directive:

In [ ]: %matplotlib inline

Graphics
plot        description
distplot    histogram
barplot     estimate of central tendency for a numeric variable
violinplot  similar to boxplot, but also shows the probability density of the data
jointplot   scatterplot with marginal distributions
regplot     regression plot
pairplot    pairwise relationships across the columns of a dataset
boxplot     boxplot
swarmplot   categorical scatterplot
factorplot  general categorical plot
Basic statistical Analysis
statsmodels and scikit-learn - both have a number of functions for statistical analysis

The first one is mostly used for regular analysis using R-style formulas, while scikit-learn is more
tailored for Machine Learning.

statsmodels:
• linear regressions
• ANOVA tests
• hypothesis testing
• many more ...

scikit-learn:
• kmeans
• support vector machines
• random forests
• many more ...

See examples in the Tutorial Notebook


References
• pandas: Getting started tutorials
• https://pandas.pydata.org/pandas-docs/stable/getting_started/intro_tutorials/index.html
• Pandas: How to Read and Write Files
• https://realpython.com/pandas-read-write-files/
• Kaggle:
• Tutorial: Accessing Data with Pandas
• https://www.kaggle.com/sohier/tutorial-accessing-data-with-pandas
• Datasets
• https://www.kaggle.com/datasets
• Courses
• https://www.kaggle.com/learn

