
Python for Data Analysis

Lecture Content
• Overview of Python Libraries for Data Scientists
• Reading Data; Selecting and Filtering the Data; Data manipulation, sorting, grouping, rearranging
• Plotting the data
• Descriptive statistics
• Inferential statistics

2
Python Libraries for Data Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn
• TensorFlow
• Keras
• PyTorch

Visualization libraries
• matplotlib
• Seaborn

and many more …

3
Python Libraries for Data Science
NumPy:
 introduces objects for multidimensional arrays and matrices, as well as
functions that make it easy to perform advanced mathematical and statistical
operations on those objects

 provides vectorization of mathematical operations on arrays and matrices,
which significantly improves performance

 many other Python libraries are built on NumPy

Link: http://www.numpy.org/
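For example, adding two arrays is a single vectorized expression instead of an explicit Python loop (a minimal sketch; the array values are made up for illustration):

In [ ]: #Vectorized arithmetic on small example arrays
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 20.0, 30.0, 40.0])
z = 2 * x + y # elementwise, runs in compiled code: [12. 24. 36. 48.]
z.mean() # aggregate statistics are vectorized too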

4
Python Libraries for Data Science
SciPy:
 collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics and more

 part of SciPy Stack

 built on NumPy

Link: https://www.scipy.org/scipylib/

5
Python Libraries for Data Science
Pandas:
 adds data structures and tools designed to work with table-like data (similar
to Series and Data Frames in R)

 provides tools for data manipulation: reshaping, merging, sorting, slicing,
aggregation etc.

 allows handling missing data

Link: http://pandas.pydata.org/

6
Python Libraries for Data Science
SciKit-Learn:
 provides machine learning algorithms: classification, regression, clustering,
model validation etc.

 built on NumPy, SciPy and matplotlib

Link: http://scikit-learn.org/

7
TensorFlow
 
Features:
• Better computational graph visualizations
• Reduces error by 50 to 60 percent in neural machine learning
• Parallel computing to execute complex models
• Seamless library management backed by Google
• Quicker updates and frequent new releases to provide you with the latest features

TensorFlow is particularly useful for the following applications:
• Speech and image recognition
• Text-based applications
• Time-series analysis
• Video detection

Link: https://www.tensorflow.org

8
Keras
• Features:
• Keras provides vast prelabeled datasets which can be directly imported and loaded.
• It contains various implemented layers and parameters that can be used for
construction, configuration, training, and evaluation of neural networks.
• Applications:
• One of the most significant applications of Keras is the deep learning models that are
available with their pretrained weights. You can use these models directly to make
predictions or extract features without creating or training your own new model.
• Link: https://keras.io

9
Python Libraries for Data Science
matplotlib:
 Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats

 a set of functionalities similar to those of MATLAB

 line plots, scatter plots, bar charts, histograms, pie charts etc.

 relatively low-level; some effort needed to create advanced visualizations

Link: https://matplotlib.org/

10
Python Libraries for Data Science
Seaborn:
 based on matplotlib

 provides a high-level interface for drawing attractive statistical graphics

 similar (in style) to the popular ggplot2 library in R

Link: https://seaborn.pydata.org/

11
Start Jupyter notebook
# On the Shared Computing Cluster
[scc1 ~] jupyter notebook

12
Loading Python Libraries
In [ ]: #Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns

Press Shift+Enter to execute the Jupyter cell

13
Reading data using pandas
In [ ]: #Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")

Note: The above command has many optional arguments to fine-tune the data import process.

There are a number of pandas commands to read other data formats:

pd.read_excel('myfile.xlsx',sheet_name='Sheet1', index_col=None,
na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5','df')
14
Exploring data frames
In [3]: #List first 5 records
df.head()

Out[3]:

15
Hands-on exercises

 Try to read the first 10, 20, 30 records;

 Can you guess how to view the last few records? Hint: try the tail() method.

16
Data Frame data types
Pandas Type (Native Python Type): Description

object (string): The most general dtype. Will be assigned to your column if the
column has mixed types (numbers and strings).

int64 (int): Numeric values. 64 refers to the memory allocated to hold the value.

float64 (float): Numeric values with decimals. If a column contains numbers and
NaNs (see below), pandas will default to float64, in case your missing value has
a decimal.

datetime64, timedelta[ns] (no native equivalent, but see the datetime module in
Python's standard library): Values meant to hold time data. Look into these for
time series experiments.

17
Data Frame data types
In [4]: #Check a particular column type
df['salary'].dtype

Out[4]: dtype('int64')

In [5]: #Check types for all the columns
df.dtypes

Out[5]: rank object
discipline object
phd int64
service int64
sex object
salary int64
dtype: object
18
Data Frames attributes
Python objects have attributes and methods.

df.attribute description
dtypes list the types of the columns
columns list the column names
axes list the row labels and column names
ndim number of dimensions
size number of elements
shape return a tuple representing the dimensionality
values numpy representation of the data

19
Hands-on exercises

 Find how many records this data frame has; shape

 How many elements are there? size

 What are the column names? columns

 What types of columns do we have in this data frame? dtypes

20
Data Frames methods
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)

df.method() description
head( [n] ), tail( [n] ) first/last n rows

describe() generate descriptive statistics (for numeric columns only)

max(), min() return max/min values for all numeric columns

mean(), median() return mean/median values for all numeric columns

std() standard deviation

sample([n]) returns a random sample of the data frame

dropna() drop all the records with missing values


21
Hands-on exercises

 Give the summary for the numeric columns in the dataset

 Calculate standard deviation for all numeric columns;

 What are the mean values of the first 50 records in the dataset? Hint: use the
head() method to subset the first 50 records and then calculate the mean.

22
Selecting a column in a Data Frame
Method 1: Subset the data frame using column name:
df['sex']

Method 2: Use the column name as an attribute:


df.sex

Note: there is an attribute rank for pandas data frames, so to select a column with a name
"rank" we should use method 1.

23
Hands-on exercises

 Calculate the basic statistics for the salary column;

 Find how many values there are in the salary column (use the count method);

sal2 = df['salary'].count() # count() returns the number of non-missing values

sal2

 Calculate the average salary; mean()

24
Data Frames groupby method
Using "group by" method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to dplyr() function in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])

In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()

25
Data Frames groupby method

Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()

Note: If single brackets are used to specify the column (e.g. salary), then the output is a pandas Series object.
When double brackets are used, the output is a DataFrame.
26
Data Frames groupby method

groupby performance notes:


- no grouping/splitting occurs until it's needed. Creating the groupby object
only verifies that you have passed a valid mapping
- by default the group keys are sorted during the groupby operation. You may
want to pass sort=False for potential speedup:

In [ ]: #Calculate mean salary for each professor rank:


df.groupby(['rank'], sort=False)[['salary']].mean()

27
Data Frame: filtering

To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example if we want to subset the rows in which the salary
value is greater than $120K:
In [ ]: #Subset rows where salary is greater than 120K:
df_sub = df[ df['salary'] > 120000 ]

Any Boolean operator can be used to subset the data:


> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;
In [ ]: #Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
28
Data Frames: Slicing

There are a number of ways to subset the Data Frame:


• one or more columns
• one or more rows
• a subset of rows and columns

Rows and columns can be selected by their position or label

29
Data Frames: Slicing

When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):
In [ ]: #Select column salary:
df['salary']

When we need to select more than one column and/or want the output to be a
DataFrame, we should use double brackets:
In [ ]: #Select columns rank and salary:
df[['rank','salary']]

30
Data Frames: Selecting rows

If we need to select a range of rows, we can specify the range using ":"

In [ ]: #Select rows by their position:


df[10:20]

Notice that the first row has a position 0, and the last value in the range is omitted:
So for 0:10 range the first 10 rows are returned with the positions starting with 0
and ending with 9

31
Data Frames: method loc

If we need to select a range of rows using their labels, we can use the method loc:

In [ ]: #Select rows by their labels:


df_sub.loc[10:20,['rank','sex','salary']]

Out[ ]:

32
Data Frames: method iloc

If we need to select a range of rows and/or columns using their positions, we can
use the method iloc:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20,[0, 3, 4, 5]]

Out[ ]:

33
Data Frames: method iloc (summary)
df.iloc[0] # first row of a data frame
df.iloc[i] # (i+1)th row
df.iloc[-1] # last row

df.iloc[:, 0] # first column
df.iloc[:, -1] # last column

df.iloc[0:7] # first 7 rows
df.iloc[:, 0:2] # first 2 columns
df.iloc[1:3, 0:2] # second through third rows and first 2 columns
df.iloc[[0,5], [1,3]] # 1st and 6th rows and 2nd and 4th columns

34
Data Frames: Sorting

We can sort the data by a value in the column. By default the sorting will occur in
ascending order and a new data frame is returned.

In [ ]: # Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values( by ='service')
df_sorted.head()

Out[ ]:

35
Data Frames: Sorting

We can sort the data using 2 or more columns:


In [ ]: df_sorted = df.sort_values( by =['service', 'salary'], ascending = [True, False])
df_sorted.head(10)

Out[ ]:

36
Missing Values
Missing values are marked as NaN
In [ ]: # Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")

In [ ]: # Select the rows that have at least one missing value


flights[flights.isnull().any(axis=1)].head()

Out[ ]:

37
Missing Values
There are a number of methods to deal with missing values in the data frame:
df.method() description
dropna() Drop missing observations

dropna(how='all') Drop observations where all cells are NA

dropna(axis=1, how='all') Drop a column if all of its values are missing

dropna(thresh = 5) Drop rows that contain fewer than 5 non-missing values

fillna(0) Replace missing values with zeros

isnull() Returns True if the value is missing

notnull() Returns True for non-missing values


38
Missing Values
• When summing the data, missing values will be treated as zero
• If all values are missing, the sum will be equal to NaN
• cumsum() and cumprod() methods ignore missing values but preserve them in
the resulting arrays
• Missing values in the GroupBy method are excluded (just like in R)
• Many descriptive statistics methods have a skipna option to control whether missing
data should be excluded. It is set to True by default (unlike R)

39
Aggregation Functions in Pandas
Aggregation - computing a summary statistic about each group, e.g.
• compute group sums or means
• compute group sizes/counts

Common aggregation functions:

min, max
count, sum, prod
mean, median, mode, mad
std, var

40
Aggregation Functions in Pandas
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])

Out[ ]:

41
Basic Descriptive Statistics
df.method() description
describe Basic statistics (count, mean, std, min, quantiles, max)

min, max Minimum and maximum values

mean, median, mode Arithmetic average, median and mode

var, std Variance and standard deviation

sem Standard error of mean

skew Sample skewness

kurt kurtosis

42
Graphics to explore the data
The Seaborn package is built on matplotlib but provides a high-level
interface for drawing attractive statistical graphics, similar to the ggplot2
library in R. It specifically targets statistical data visualization.

To show graphs within a Jupyter notebook, include the inline directive:

In [ ]: %matplotlib inline

43
Graphics
seaborn function description
distplot histogram
barplot estimate of central tendency for a numeric variable
violinplot similar to boxplot, also shows the probability density of the data
jointplot scatterplot with marginal distributions
regplot regression plot
pairplot pairwise plots of the variables in a dataset
boxplot boxplot
swarmplot categorical scatterplot
factorplot general categorical plot
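
For instance (a minimal sketch, assuming the Salaries data frame df from earlier; note that newer seaborn versions rename distplot to histplot):

In [ ]: #Sketch: histogram of salaries
sns.distplot(df['salary'])

In [ ]: #Sketch: salary distribution per rank
sns.boxplot(x='rank', y='salary', data=df)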

44
Basic statistical Analysis
statsmodels and scikit-learn - both have a number of functions for statistical analysis

The first one is mostly used for regular analysis using R-style formulas, while scikit-learn is
more tailored for Machine Learning.

statsmodels:
• linear regressions
• ANOVA tests
• hypothesis testing
• many more ...

scikit-learn:
• kmeans
• support vector machines
• random forests
• many more ...

See examples in the Tutorial Notebook
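
For a flavor of the statsmodels style, a minimal sketch (assuming the Salaries data frame df from earlier):

In [ ]: #Sketch: R-style formula regression with statsmodels
import statsmodels.formula.api as smf
model = smf.ols('salary ~ service', data=df).fit() # linear regression
model.summary() # coefficients, p-values, R-squared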


45
Descriptive vs. Inferential Statistics
• Descriptive: e.g., Median; describes data you have but can't be
generalized beyond that
• We’ll talk about Exploratory Data Analysis
• Inferential: e.g., t-test, that enable inferences about the population
beyond our data
• These are the techniques we’ll leverage for Machine Learning and Prediction
EDA Tools
• Python and R are the two data science tools most commonly used for EDA
• Perform k-means clustering: an unsupervised learning algorithm where the
data points are assigned to clusters, also known as k-groups. K-means
clustering is commonly used in market segmentation, image compression,
and pattern recognition (see the sketch after this list).
• EDA can be used in predictive models such as linear regression, where it is
used to predict outcomes.
• It is also used in univariate, bivariate, and multivariate visualization for
summary statistics, establishing relationships between variables, and for
understanding how different fields in the data interact with each other.
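
A minimal sketch of the k-means step (assuming the df loaded earlier; k=3 is an arbitrary choice for illustration):

In [ ]: #Sketch: k-means clustering on two numeric columns
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3, n_init=10).fit(df[['salary', 'service']])
km.labels_[:10] # cluster assignment for the first 10 rows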
47
Outline
• Exploratory Data Analysis
• Chart types
• Some important distributions
• Hypothesis Testing
Examples of Business Questions
• Simple (descriptive) Stats
• “Who are the most profitable customers?”
• Hypothesis Testing
• “Is there a difference in value to the company of these customers?”
• Segmentation/Classification
• What are the common characteristics of these customers?
• Prediction
• Will this new customer become a profitable customer? If so, how profitable?

adapted from Provost and Fawcett, “Data Science for Business”


Applying techniques
• Most business questions are causal: what would happen if? (e.g. I show
this ad)
• But it's easier to ask correlational questions (what happened in the past
when I showed this ad).
• Supervised Learning:
• Classification and Regression
• Unsupervised Learning:
• Clustering and Dimension reduction
• Note: Unsupervised Learning is often used inside a larger Supervised
learning problem.
• E.g. auto-encoders for image recognition neural nets.
Applying techniques
• Supervised Learning:
• kNN (k Nearest Neighbors)
• Naïve Bayes
• Logistic Regression
• Support Vector Machines
• Random Forests
• Unsupervised Learning:
• Clustering
• Factor analysis
• Latent Dirichlet Allocation
Exploratory Data Analysis 1977
• Based on insights developed at Bell Labs in the 60’s
• Techniques for visualizing and summarizing data
• What can the data tell us? (in contrast to “confirmatory”
data analysis)
• Introduced many basic techniques:
• 5-number summary, box plots, stem and leaf
diagrams,…
• 5 Number summary:
• extremes (min and max)
• median & quartiles
• More robust to skewed & long-tailed distributions
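
The 5-number summary is easy to compute with NumPy percentiles (a minimal sketch on made-up, long-tailed data):

In [ ]: #Sketch: 5-number summary via percentiles
data = np.random.exponential(scale=2.0, size=1000) # illustrative skewed sample
np.percentile(data, [0, 25, 50, 75, 100]) # min, Q1, median, Q3, max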
The Trouble with Summary Stats
Looking at Data
Data Presentation
• Data Art

55
The “R” Language
• An evolution of the "S" language developed at Bell Labs for EDA.
• Idea was to allow interactive exploration and visualization of data.
• The preferred language for statisticians, used by many other data
scientists.
• Features:
• Probably the most comprehensive collection of statistical models and
distributions.
• CRAN: a very large resource of open source statistical models.

Chart examples from Jeff Hammerbacher’s 2012 CS194 class 56


Chart types
• Single variable
• Dot plot
• Jitter plot
• Error bar plot
• Box-and-whisker plot
• Histogram
• Kernel density estimate
• Cumulative distribution function

(note: examples using qplot library from R)

Chart examples from Jeff Hammerbacher’s 2012 CS194 class 57


Chart types

• Dot plot

58
Chart types
• Jitter plot
• Noise added to the y-axis to spread the points

59
Chart types
• Error bars: usually based on confidence intervals (CI). A 95% CI means 95%
of points are in the range, so 2.5% of points are above and 2.5% below the bar.
• Not necessarily symmetric:

60
Chart types
• Box-and-whisker plot : a graphical form of 5-number summary (Tukey)

61
Chart types

• Histogram

62
Chart types
• Kernel density estimate

63
Chart types
• Histogram and Kernel Density Estimates
• Histogram
• Proper selection of bin width is important
• Outliers should be discarded
• KDE (like a smooth histogram)
• Kernel function
• Box, Epanechnikov, Gaussian
• Kernel bandwidth

64
Chart types
• Cumulative distribution function
• Integral of the histogram – simpler to build than KDE (don’t need
smoothing)

65
Chart types
• Two variables
• Bar chart
• Scatter plot
• Line plot
• Log-log plot

66
Chart types
• Bar plot: one variable is discrete

67
Chart types
• Scatter plot

68
Chart types
• Line plot

69
Chart types
• Log-log plot: very useful for power-law data

[Figure: frequency of words in tweets vs. rank of words, most frequent to least
(I, the, you, …); the slope is approximately −1]
70
Chart types
• More than two variables
• Stacked plots
• Parallel coordinate plot

71
Chart types
• Stacked plot: stack variable is discrete:

72
Chart types
• Parallel coordinate plot: one discrete variable, an arbitrary number of
other variables:

73
Normal Distributions, Mean, Variance
The mean of a set of values is just the average of the values.
Variance is a measure of the width of a distribution. Specifically, the variance is the mean squared
deviation of samples from the sample mean:

\mathrm{Var}(X) = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2

The standard deviation is the square root of the variance.
The normal distribution is completely characterized by its mean and variance.

[Figure: normal density annotated with the mean and standard deviation]
Central Limit Theorem
The distribution of the sum (or mean) of a set of n identically-distributed random variables X_i
approaches a normal distribution as n → ∞.
The common parametric statistical tests, like the t-test and ANOVA, assume normally-distributed data,
but depend on sample mean and variance measures of the data.
They typically work reasonably well for data that are not normally distributed, as long as the
samples are not too small.
Correcting distributions
Many statistical tools, including the mean and variance, t-test, ANOVA etc., assume data are
normally distributed.
Very often this is not true, and the box-and-whisker plot is a good clue:

whenever it's asymmetric, the data cannot be normal. The histogram gives even more
information.
Correcting distributions
In many cases these distributions can be corrected before any other processing.
Examples:
• X satisfies a log-normal distribution; Y = log(X) has a normal distribution.

• X is Poisson with mean k and standard deviation sqrt(k). Then sqrt(X) is approximately normally
distributed with standard deviation 1/2.
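
A minimal sketch of these corrections on made-up samples:

In [ ]: #Sketch: variance-stabilizing transforms (illustrative data)
x_lognormal = np.random.lognormal(mean=0.0, sigma=1.0, size=1000)
y = np.log(x_lognormal) # approximately normal

x_poisson = np.random.poisson(lam=25, size=1000)
z = np.sqrt(x_poisson) # approximately normal, sdev about 1/2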
Distributions
Some other important distributions:
• Poisson: the distribution of counts that occur at a certain “rate”.
• Observed frequency of a given term in a corpus.
• Number of visits to a web site in a fixed time interval.
• Number of web site clicks in an hour.
• Exponential: the interval between two such events.
• Zipf/Pareto/Yule distributions: govern the frequencies of different terms in a
document, or web site visits.
• Binomial/Multinomial: The number of counts of events (e.g. die tosses = 6) out of n
trials.

• You should understand the distribution of your data before applying any model.
Hypothesis Testing
• We want to prove a hypothesis HA, but it's hard, so we try to disprove a null
hypothesis H0.
• A test statistic is some measurement we can make on the data which is likely to be
big under HA but small under H0.
• We choose a test statistic whose distribution we know if H0 is true, e.g.:
• Two samples a and b, normally distributed, from A and B.
• H0 is the hypothesis that mean(A) = mean(B); the test statistic is
s = mean(a) – mean(b).
• s has mean zero and is normally distributed under H0.
• But it's "large" if the two means are different.
Hypothesis Testing – contd.
• s = mean(a) – mean(b) is our test statistic,
H0 the hypothesis that mean(A)=mean(B)
• We reject if Pr(x > s | H0 ) < p
• p is a suitable “small” probability, say 0.05.

• This threshold probability is called a p-value.


• p directly controls the false positive rate (the rate at which we expect to observe large s
even if H0 is true).
• As we make p smaller, the false negative rate increases – situations where mean(A) and
mean(B) differ but the test fails.
• Common values 0.05, 0.02, 0.01, 0.005, 0.001
From G.J. Primavera, “Statistics for the Behavioral Sciences”
Two-tailed Significance

From G.J. Primavera, “Statistics for the Behavioral Sciences”

When the p value is less than 5% (p < .05), we reject the null
hypothesis
Hypothesis Testing

From G.J. Primavera, “Statistics for the Behavioral Sciences”


Three important tests
• T-test: compare two groups, or two interventions on one group.

• Chi-squared and Fisher's exact test: compare the counts in a "contingency table".

• ANOVA: compare outcomes under several discrete interventions.


T-test
Single-sample: compute the test statistic

t = \frac{\bar{X} - \mu}{s / \sqrt{n}}

where \bar{X} is the sample mean, \mu is the mean under the null hypothesis, and
s is the sample standard deviation, which is the square root of the sample
variance Var(X).

If X is normally distributed, t is almost normally distributed, but not quite,
because of the presence of the estimated s.

You use the single-sample test for one group of individuals in two
conditions. Just subtract the two measurements for each person,
and use the differences for the single-sample t-test.
This is called a within-subjects design.
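
A minimal sketch with scipy on made-up before/after measurements (the paired test implements exactly this subtract-and-test logic):

In [ ]: #Sketch: within-subjects (paired) t-test on made-up data
from scipy import stats
before = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
after = np.array([5.6, 5.0, 6.3, 5.4, 5.3, 6.1])
stats.ttest_rel(before, after) # same as ttest_1samp(after - before, 0)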
T-statistic and T-distribution
• We use the t-statistic from the last slide to test whether the mean of our sample
could be zero.
• If the underlying population has mean zero, the t-distribution should be distributed
like this:

• The area of the tail beyond


our measurement tells us how
likely it is under the null
hypothesis.

• If that probability is low


(say < 0.05) we reject the null
hypothesis.
Two sample T-test
In this test, there are two samples x and y. A t statistic is
constructed from their sample means and sample standard
deviations:

t = \frac{\bar{x} - \bar{y}}{s_p \sqrt{1/n_x + 1/n_y}}

where the pooled standard deviation s_p is given by:

s_p^2 = \frac{(n_x - 1) s_x^2 + (n_y - 1) s_y^2}{n_x + n_y - 2}

You should try to understand the formula, but you shouldn't
need to use it: most stat software exposes a function that takes
the samples x and y as inputs directly.

This design is called a between-subjects test.
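
In scipy, for example, ttest_ind takes the two samples directly (a minimal sketch on made-up samples):

In [ ]: #Sketch: between-subjects (two-sample) t-test on made-up data
from scipy import stats
a = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
b = np.array([4.2, 4.6, 4.9, 5.0, 4.4, 4.8])
stats.ttest_ind(a, b) # returns the t statistic and the p-value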


Chi-squared test
Often you will be faced with discrete (count) data. Given a table like this:

Prob(X) Count(X)
X=0 0.3 10
X=1 0.7 50
where Prob(X) is part of a null hypothesis about the data (e.g. that a coin is fair).
The chi-squared statistic lets you test whether an observation is consistent with the data:

\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}

where O_i is an observed count, and E_i is the expected value of that count. It has a chi-squared
distribution, whose p-values you compute to do the test.
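
With scipy, the expected counts come from the null-hypothesis probabilities (a sketch for the table above, where n = 60):

In [ ]: #Sketch: chi-squared test for the counts above
from scipy import stats
observed = [10, 50]
expected = [0.3 * 60, 0.7 * 60] # Prob(X) times the total count
stats.chisquare(f_obs=observed, f_exp=expected) # statistic and p-value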
Fisher’s exact test
In case we only have counts under different conditions

Count1(X) Count2(X)
X=0 a b
X=1 c d

We can use Fisher's exact test (n = a+b+c+d):

p = \frac{(a+b)!\,(c+d)!\,(a+c)!\,(b+d)!}{a!\, b!\, c!\, d!\, n!}

which gives the probability directly (it's not a statistic).
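
scipy implements it directly (a sketch on a made-up 2x2 table of counts):

In [ ]: #Sketch: Fisher's exact test on a made-up 2x2 table
from scipy import stats
table = [[8, 2], # a, b
         [1, 5]] # c, d
stats.fisher_exact(table) # returns the odds ratio and the exact p-value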


One-Way ANOVA
ANOVA (ANalysis Of VAriance) allows testing of multiple differences in a single test.
Suppose our experiment design has an independent variable Y with four levels:

Y
Primary School High School College Grad degree

4.1 4.5 4.2 3.8

The table shows the mean values of a response variable (e.g. avg number of
Facebook posts per day) in each group.
We would like to know in a single test whether the response variable depends on
Y, at some particular significance such as 0.05.
ANOVA
In ANOVA we compute a single statistic (an F-statistic) that compares the variance between
groups with the variance within each group:

F = \frac{\mathrm{Var}_{\text{between}}}{\mathrm{Var}_{\text{within}}}

The higher the F-value is, the less probable is the null hypothesis that the samples all
come from the same population.
We can look up the F-statistic value in a cumulative F-distribution (similar to the other
statistics) to get the p-value.
ANOVA tests can be much more complicated, with multiple dependent variables,
hierarchies of variables, correlated measurements etc.
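
A minimal sketch of a one-way ANOVA with scipy (made-up response values for three groups):

In [ ]: #Sketch: one-way ANOVA on made-up groups
from scipy import stats
g1 = [4.0, 4.2, 3.9, 4.1]
g2 = [4.6, 4.4, 4.5, 4.7]
g3 = [4.1, 4.3, 4.0, 4.2]
stats.f_oneway(g1, g2, g3) # F statistic and p-value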
Closing Words
All the tests so far are parametric tests that assume the data are normally distributed,
and that the samples are independent of each other and all have the same
distribution (IID).

They may be arbitrarily inaccurate if those assumptions are not met. Always make sure
your data satisfies the assumptions of the test you're using, e.g. watch out for:
• Outliers – will corrupt many tests that use variance estimates.
• Correlated values as samples, e.g. if you repeated measurements on the same subject.
• Skewed distributions – give invalid results.
Non-parametric tests
These tests make no assumption about the distribution of the input data,
and can be used on very general datasets:

• K-S test

• Permutation tests

• Bootstrap confidence intervals


K-S test
The K-S (Kolmogorov-Smirnov) test is a very useful test for checking whether two
(continuous or discrete) distributions are the same.
In the one-sided test, an observed distribution (e.g. some observed values or a
histogram) is compared against a reference distribution.
In the two-sided test, two observed distributions are compared.
The K-S statistic is just the maximum distance between the CDFs of the two distributions.
While the statistic is simple, its distribution is not!
But it is available in most stat packages.
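
A minimal sketch of both variants with scipy (made-up samples):

In [ ]: #Sketch: K-S tests on made-up samples
from scipy import stats
x = np.random.normal(size=200)
y = np.random.exponential(size=200)
print(stats.kstest(x, 'norm')) # one-sided test: compare x to a normal reference
print(stats.ks_2samp(x, y)) # two-sided test: compare two observed samples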
K-S test
The K-S test can be used to test whether a data sample has a normal distribution or not.
Thus it can be used as a sanity check for any common parametric test (which assumes
normally-distributed data).

It can also be used to compare distributions of data values in a large data pipeline: Most
errors will distort the distribution of a data parameter and a K-S test can detect this.
Non-parametric tests
Permutation tests
Bootstrap confidence intervals

• We won't discuss these in detail, but it's important to know that non-parametric tests
using one of the above methods exist for many forms of hypothesis.

• They make no assumptions about the distribution of the data, but in many cases are
just as sensitive as parametric tests.

• They use computational cycles to simulate sample data, to derive p-value estimates
approximately, and accuracy improves with the amount of computational work done.
