
FACULTY OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF APPLIED SCIENCE AND HUMANITIES


4th SEMESTER B.TECH PROGRAMME
PROBABILITY, STATISTICS AND NUMERICAL METHODS
(203191251)
ACADEMIC YEAR 2019-2020

UNIT 1: CORRELATION AND REGRESSION

Correlation Analysis: So far we have studied problems relating to one variable only. In practice we come across a large number of problems involving two or more variables. If two quantities vary in such a way that a change in one variable effects a change in the value of the other, these quantities are said to be correlated.

Types of correlation: There are three types of correlation.

(i) Positive or negative correlation: If two variables change in the same
direction, the correlation is said to be positive or direct. If two variables
change in opposite directions, the correlation is said to be negative or
inverse.
For example: the correlation between the heights and weights of a group of
people is positive, while the correlation between the pressure and volume of a gas is negative.
(ii) Simple, partial or multiple correlation: The distinction between simple, partial and
multiple correlation is based on the number of variables studied. When only two
variables are studied, the correlation is simple. When three or more variables are
involved, the problem is one of either partial or multiple correlation.

(iii) Linear or non-linear correlation: If the amount of change in one variable tends to
bear a constant ratio to the amount of change in the other variable, the correlation
is said to be linear.
For example, consider two variables X and Y:

X: 5 10 15 20 25 30
Y: 50 100 150 200 250 300

Clearly the ratio of change in the two variables is the same.

If the amount of change in one variable does not tend to bear a constant ratio to the
amount of change in the other variable, the correlation is said to be non-linear or
curvilinear.

Methods of studying correlation: There are mainly three methods.

(i) Scatter diagram
(ii) Karl Pearson’s method
(iii) Spearman’s method of rank correlation

(i) Scatter diagram: This is a very simple method of studying the relationship between
two variables. One variable is taken on the X-axis and the other on the Y-axis, and
for each pair of values a point is plotted, as follows:

[Scatter diagrams: a perfectly positive scatter diagram (points rising along a straight line) and a perfectly negative scatter diagram (points falling along a straight line), each with X on the X-axis and Y on the Y-axis.]

(ii) Karl Pearson’s coefficient of correlation: Of the several mathematical methods of
measuring correlation, Karl Pearson’s method, popularly known as Pearson’s
coefficient of correlation, is the most widely used. It is denoted by r. The formula
for computing the coefficient of correlation is as follows:

r = Σ(x − x̄)(y − ȳ) / [√Σ(x − x̄)² · √Σ(y − ȳ)²] = Σ dx dy / [√Σdx² · √Σdy²]

where dx = x − x̄ and dy = y − ȳ.
This formula can also be written as follows:

r = [n Σxy − (Σx)(Σy)] / [√(n Σx² − (Σx)²) · √(n Σy² − (Σy)²)]

For grouped data the correlation coefficient can be written as follows:

r = [n Σfuv − (Σu fᵤ)(Σv fᵥ)] / [√(n Σu²fᵤ − (Σu fᵤ)²) · √(n Σv²fᵥ − (Σv fᵥ)²)]

or

r = [n Σf dx dy − (Σf dx)(Σf dy)] / [√(n Σf dx² − (Σf dx)²) · √(n Σf dy² − (Σf dy)²)]

Properties of the coefficient of correlation:


(1) The coefficient of correlation always lies between -1 and 1 including -1 and 1.
i.e. −1 ≤ 𝑟 ≤ 1
(2) The correlation coefficient is independent of change of origin and scale.
(3) The correlation coefficient is a pure number; it is independent of the units of
measurement.

Example: Find Pearson’s correlation coefficient for the following data:

x: 100 101 102 102 100 99 97 98 96 95
y: 98 99 99 97 95 92 95 94 90 91

Solution:

x	y	x − x̄	y − ȳ	(x − x̄)²	(y − ȳ)²	(x − x̄)(y − ȳ)
100	98	1	3	1	9	3
101	99	2	4	4	16	8
102	99	3	4	9	16	12
102	97	3	2	9	4	6
100	95	1	0	1	0	0
99	92	0	−3	0	9	0
97	95	−2	0	4	0	0
98	94	−1	−1	1	1	1
96	90	−3	−5	9	25	15
95	91	−4	−4	16	16	16
Σx = 990	Σy = 950	0	0	54	96	61

x̄ = Σx/n = 990/10 = 99
ȳ = Σy/n = 950/10 = 95

Correlation coefficient, r = Σ(x − x̄)(y − ȳ) / [√Σ(x − x̄)² · √Σ(y − ȳ)²] = 61/(√54 × √96) ≈ 0.85

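The computation in the worked example above can be checked with a short program. The following is a minimal Python sketch (the function name `pearson_r` is mine, not from the text) of the deviation formula r = Σdxdy / (√Σdx² · √Σdy²):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's coefficient of correlation: r = sum(dx*dy) / (sqrt(sum dx^2) * sqrt(sum dy^2))."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    dx = [xi - x_bar for xi in x]
    dy = [yi - y_bar for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = sqrt(sum(a * a for a in dx)) * sqrt(sum(b * b for b in dy))
    return num / den

# data from the worked example above
x = [100, 101, 102, 102, 100, 99, 97, 98, 96, 95]
y = [98, 99, 99, 97, 95, 92, 95, 94, 90, 91]
print(round(pearson_r(x, y), 2))  # 61 / (sqrt(54) * sqrt(96)) ≈ 0.85
```

The intermediate sums (Σdx² = 54, Σdy² = 96, Σdxdy = 61) match the table above.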
(iii) Spearman’s method of rank correlation: The rank correlation coefficient is calculated by the following formula:

r = 1 − 6Σd² / [n(n² − 1)]

where n = number of pairs and d = difference between the ranks in each pair.

When finding the rank correlation coefficient with tied observations, the above formula is modified to:

r = 1 − 6{Σd² + m(m² − 1)/12 + m(m² − 1)/12 + ⋯} / [n(n² − 1)]

A term m(m² − 1)/12 is added to Σd² for each tie, where m is the number of times an item is repeated.
The value of the correlation coefficient by Spearman’s method also lies between −1 and +1. If the
ranks are the same for each pair of the two series, then every d = 0, so Σd² = 0 and
r = +1, which shows perfect positive correlation between the two variables. If the
ranks are exactly in reverse order for each pair of the two series, then r = −1, which
shows perfect negative correlation between the variables.

Example: Two judges have given ranks to 10 students for their honesty. Find the rank
correlation coefficient for the following data:

1st judge: 3 5 8 4 7 10 2 1 6 9
2nd judge: 6 4 9 8 1 2 3 10 5 7

Solution:
Rank by 1st judge	Rank by 2nd judge	Difference in ranks, d	d²
3	6	−3	9
5	4	1	1
8	9	−1	1
4	8	−4	16
7	1	6	36
10	2	8	64
2	3	−1	1
1	10	−9	81
6	5	1	1
9	7	2	4
Σd² = 214

Rank correlation, r = 1 − 6Σd²/[n(n² − 1)] = 1 − (6 × 214)/[10(100 − 1)] = 1 − 1284/990 = 1 − 1.30 = −0.30

Example: Find the Coefficient of rank correlation of the following data:

x 35 40 42 43 40 53 54 49 41 55
y 102 101 97 98 38 101 97 92 95 95

Solution:
x	y	Rank in x	Rank in y	Difference d	d²
35	102	10	1	9	81
40	101	8.5	2.5	6	36
42	97	6	5.5	0.5	0.25
43	98	5	4	1	1
40	38	8.5	10	−1.5	2.25
53	101	3	2.5	0.5	0.25
54	97	2	5.5	−3.5	12.25
49	92	4	9	−5	25
41	95	7	7.5	−0.5	0.25
55	95	1	7.5	−6.5	42.25
Σd² = 200.50


In x the value 40 is repeated twice, and in y the values 101, 97 and 95 are each repeated twice, so m = 2 for each of the four ties and each correction term is m(m² − 1)/12 = 2(4 − 1)/12 = 0.5.

Rank correlation, r = 1 − 6{Σd² + 0.5 + 0.5 + 0.5 + 0.5} / [n(n² − 1)]
= 1 − 6(200.50 + 2)/990
= 1 − 1215/990
≈ −0.227

Regression Analysis: By studying correlation we can know the existence, degree and
direction of the relationship between two variables, but we cannot answer questions of the
type: if there is a certain amount of change in one variable, what will be the corresponding
change in the other? Such questions can be answered once we establish a quantitative
relationship between the two related variables.
The statistical tool by which it is possible to predict or estimate the unknown values of one
variable from known values of another variable is called regression. A line of regression is a
straight line.
The regression line of Y on X is

(y − ȳ) = byx (x − x̄)

where byx, the regression coefficient of y on x, is computed as

byx = [n Σxy − (Σx)(Σy)] / [n Σx² − (Σx)²]
This formula can be used to compute the value of y for given value of x.
Similarly, the regression line of X on Y is

(x − x̄) = bxy (y − ȳ)

where bxy, the regression coefficient of x on y, is

bxy = [n Σxy − (Σx)(Σy)] / [n Σy² − (Σy)²]
This formula can be used to compute the value of x for the given value of y.

NOTE:
(1) bxy and byx can also be computed using the following formulas:
bxy = r σx/σy and byx = r σy/σx
(2) The angle θ between the two regression lines is given by:

θ = tan⁻¹ | [(r² − 1)/r] · σxσy/(σx² + σy²) |

When r = 0, θ = π/2, and the two regression lines are perpendicular to each
other. When r = ±1, θ = 0, and the two regression lines coincide, since the
point (x̄, ȳ) is common to both.

Properties of regression coefficient:


(1) r = ±√(bxy byx); the sign to be taken before the square root is that of the
regression coefficients.
(2) Since (bxy)(byx) = r² ≤ 1, both regression coefficients cannot be greater than
unity (1).
(3) The arithmetic mean of the regression coefficients is greater than or equal to the
coefficient of correlation,
i.e. (bxy + byx)/2 ≥ r
(4) Regression coefficients are independent of origin but not of scale.
Example: The following data regarding the heights (y) and weights (x) of 100 college students
are given: Σx = 15000, Σx² = 2272500, Σxy = 1022250, Σy = 6800, Σy² = 463025. Find
the coefficient of correlation between height and weight, and also the equations of regression of
height on weight and of weight on height.
Solution:
Here n = 100.

byx = [n Σxy − (Σx)(Σy)] / [n Σx² − (Σx)²] = [100(1022250) − (15000)(6800)] / [100(2272500) − (15000)²] = 225000/2250000 = 0.1

bxy = [n Σxy − (Σx)(Σy)] / [n Σy² − (Σy)²] = 225000 / [100(463025) − (6800)²] = 225000/62500 = 3.6

r = √(bxy × byx) = √(3.6 × 0.1) = 0.6

x̄ = Σx/n = 15000/100 = 150
ȳ = Σy/n = 6800/100 = 68

The equation of the line of regression of y on x is:
y − ȳ = byx(x − x̄)
y − 68 = 0.1(x − 150)
y = 0.1x + 53

The equation of the line of regression of x on y is:
x − x̄ = bxy(y − ȳ)
x − 150 = 3.6(y − 68)
x = 3.6y − 94.8
Example: Find the equation of regression line from the following data and also estimate y for
x  1 and x for y  4 .

Curve Fitting

Q: Where does the given function come from in the first place?
• Analytical models of phenomena (e.g. equations from physics)
• An equation created from observed data

(1) Interpolation (connect the data dots): If the data are reliable, we can plot them and
connect the dots. This is piecewise linear interpolation.

This has limited use as a general function. Since it is really a group of small functions connecting
one point to the next, it does not work very well for data that has built-in random error (scatter).

(2) Curve fitting: capturing the trend in the data by assigning a single function across the entire
range. The example below uses a straight-line function.

A straight line is described by f(x) = ax + b.

The goal is to identify the coefficients a and b such that f(x) fits the data well.

Other examples of data sets that we can fit a function to:

Is a straight line suitable for each of these cases? No. But we are not stuck with just
straight-line fits. We will start with straight lines, then expand the concept.

Linear curve fitting (linear regression)


Given the general form of a straight line

f(x) = ax + b

how can we pick the coefficients that best fit the line to the data?
First question: what makes a particular straight line a
‘good’ fit?
Why does the blue line appear to us to fit the trend
better?
• Consider the distance between the data and points
on the line
• Add up the lengths of all the red and blue vertical
lines
• This is an expression of the ‘error’ between the data and
the fitted line
• The one line that gives the minimum error is then
the ‘best’ straight line
Quantifying errors in a curve fit

We want the error measure to:
(1) treat positive and negative errors the same (whether a data point is above or below the line), and
(2) weight greater errors more heavily.

We can do both of these things by squaring the distance. Denote the data values as (x, y)
and the points on the fitted line as (x, f(x)), and sum the error over the n data points:

err = Σᵢ₌₁ⁿ dᵢ² = [y₁ − f(x₁)]² + [y₂ − f(x₂)]² + ⋯ + [yₙ − f(xₙ)]²
= [y₁ − (ax₁ + b)]² + [y₂ − (ax₂ + b)]² + ⋯ + [yₙ − (axₙ + b)]²
= Σᵢ₌₁ⁿ (yᵢ − axᵢ − b)²

The error is minimum when the first-order partial derivatives are zero:

∂err/∂a = Σᵢ₌₁ⁿ −2xᵢ(yᵢ − axᵢ − b) = 0  and  ∂err/∂b = Σᵢ₌₁ⁿ −2(yᵢ − axᵢ − b) = 0

Expanding and rearranging gives the normal equations:

Σyᵢ = a Σxᵢ + n b    (1)
Σxᵢyᵢ = a Σxᵢ² + b Σxᵢ    (2)

Example: Fit a straight line using the least squares method.

xᵢ: 0 0.5 1 1.5 2 2.5
yᵢ: 0 1.5 3 4.5 6 7.5

Solution:

xᵢ	yᵢ	xᵢ²	xᵢyᵢ
0	0	0	0
0.5	1.5	0.25	0.75
1	3	1	3
1.5	4.5	2.25	6.75
2	6	4	12
2.5	7.5	6.25	18.75
Σxᵢ = 7.5	Σyᵢ = 22.5	Σxᵢ² = 13.75	Σxᵢyᵢ = 41.25

Now solve the normal equations

Σyᵢ = a Σxᵢ + n b    (1)
Σxᵢyᵢ = a Σxᵢ² + b Σxᵢ    (2)

Substituting the values from the table, with n = 6:

22.5 = 7.5a + 6b
41.25 = 13.75a + 7.5b

Solving, a = 3 and b = 0.

Hence, the best-fit line is y = 3x.
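The two normal equations can be solved in a few lines of code. A minimal Python sketch (the function name `fit_line` is my own) applied to the same data:

```python
def fit_line(xs, ys):
    """Least-squares straight line f(x) = a*x + b via the two normal equations:
       sum(y)  = a*sum(x)   + n*b
       sum(xy) = a*sum(x^2) + b*sum(x)
    """
    n = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)  # eliminate b between the two equations
    b = (Sy - a * Sx) / n                          # back-substitute into equation (1)
    return a, b

xs = [0, 0.5, 1, 1.5, 2, 2.5]
ys = [0, 1.5, 3, 4.5, 6, 7.5]
a, b = fit_line(xs, ys)
print(a, b)  # 3.0 0.0 — the data lie exactly on y = 3x
```

Since the data here are exactly linear, the fit recovers the line y = 3x with zero error.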

So what do we do if a straight line is not suitable for the data? A straight line will not
predict the diminishing returns that some data show.

Curve fitting – higher-order polynomials

We started the linear curve fit by choosing a generic form of the straight line, f(x) = ax + b.
This is just one kind of function; there are an infinite number of generic forms we could choose
from, for almost any shape we want. Recall the examples of sampled data above: is a straight
line suitable for each of those cases? The top-left and bottom-right sets do not look linear in
trend, so there is no reason to fit a straight line; let us consider other options. There are lots of
functions with lots of different shapes that depend on coefficients, and we can choose a form
based on experience and trial and error. We will start with a simple extension to linear
regression: higher-order polynomials.

Curve fitting – quadratic polynomial

Let the general form of a second-order polynomial be f(x) = a + bx + cx².
Just as in linear regression, we ask: how can we pick the coefficients that best fit the
curve to the data? We can use the same idea: the curve that gives the minimum error
between the data y and the fit f(x) is ‘best’.
To quantify the error for two candidate second-order curves:
• Add up the lengths of all the red and blue vertical lines
• Pick the curve with the minimum total error

Error – least squares approach

err = Σᵢ₌₁ⁿ dᵢ² = [y₁ − f(x₁)]² + [y₂ − f(x₂)]² + ⋯ + [yₙ − f(xₙ)]²
= [y₁ − (a + bx₁ + cx₁²)]² + [y₂ − (a + bx₂ + cx₂²)]² + ⋯ + [yₙ − (a + bxₙ + cxₙ²)]²
= Σᵢ₌₁ⁿ (yᵢ − a − bxᵢ − cxᵢ²)²

To minimize the error, set the partial derivatives with respect to a, b and c equal to 0:

∂err/∂a = Σᵢ₌₁ⁿ −2(yᵢ − a − bxᵢ − cxᵢ²) = 0
∂err/∂b = Σᵢ₌₁ⁿ −2xᵢ(yᵢ − a − bxᵢ − cxᵢ²) = 0
∂err/∂c = Σᵢ₌₁ⁿ −2xᵢ²(yᵢ − a − bxᵢ − cxᵢ²) = 0

Simplifying these equations, we get the normal equations:

Σyᵢ = n a + b Σxᵢ + c Σxᵢ²
Σxᵢyᵢ = a Σxᵢ + b Σxᵢ² + c Σxᵢ³
Σxᵢ²yᵢ = a Σxᵢ² + b Σxᵢ³ + c Σxᵢ⁴

Example: Fit a second-order polynomial to the following data.

xᵢ: 0 0.5 1.0 1.5 2.0 2.5
yᵢ: 0 0.25 1.0 2.25 4.0 6.25

Solution:

xᵢ	yᵢ	xᵢ²	xᵢ³	xᵢ⁴	xᵢyᵢ	xᵢ²yᵢ
0	0	0	0	0	0	0
0.5	0.25	0.25	0.125	0.0625	0.125	0.0625
1	1	1	1	1	1	1
1.5	2.25	2.25	3.375	5.0625	3.375	5.0625
2	4	4	8	16	8	16
2.5	6.25	6.25	15.625	39.0625	15.625	39.0625
Σxᵢ = 7.5	Σyᵢ = 13.75	Σxᵢ² = 13.75	Σxᵢ³ = 28.125	Σxᵢ⁴ = 61.1875	Σxᵢyᵢ = 28.125	Σxᵢ²yᵢ = 61.1875
Substituting these values in the normal equations

Σyᵢ = n a + b Σxᵢ + c Σxᵢ²
Σxᵢyᵢ = a Σxᵢ + b Σxᵢ² + c Σxᵢ³
Σxᵢ²yᵢ = a Σxᵢ² + b Σxᵢ³ + c Σxᵢ⁴

gives

13.75 = 6a + 7.5b + 13.75c
28.125 = 7.5a + 13.75b + 28.125c
61.1875 = 13.75a + 28.125b + 61.1875c

Solving, a = 0, b = 0 and c = 1.

Hence, y = x² is the required equation which fits the data.
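The three normal equations form a 3×3 linear system. A minimal Python sketch (the function name `fit_quadratic` is mine) that builds the system from the data and solves it by Gaussian elimination:

```python
def fit_quadratic(xs, ys):
    """Least-squares parabola f(x) = a + b*x + c*x^2: build the three normal
    equations and solve them by Gaussian elimination (no pivoting; a sketch)."""
    n = len(xs)
    S = lambda p: float(sum(x ** p for x in xs))                    # sum of x^p
    T = lambda p: float(sum((x ** p) * y for x, y in zip(xs, ys)))  # sum of (x^p)*y
    M = [[n,    S(1), S(2), T(0)],
         [S(1), S(2), S(3), T(1)],
         [S(2), S(3), S(4), T(2)]]                                  # augmented matrix
    for i in range(3):                                              # forward elimination
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                             # back substitution
        coef[i] = (M[i][3] - sum(M[i][k] * coef[k] for k in range(i + 1, 3))) / M[i][i]
    return coef                                                     # [a, b, c]

xs = [0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [0, 0.25, 1.0, 2.25, 4.0, 6.25]
a, b, c = fit_quadratic(xs, ys)
print(a, b, c)  # a ≈ 0, b ≈ 0, c ≈ 1: the data lie on y = x^2
```

For well-conditioned problems like this one, plain elimination is fine; for higher-degree polynomials, partial pivoting (or a library solver) is the safer choice.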

Curve fitting – other nonlinear fits (exponential)

Q: Will a polynomial of some order necessarily fit any set of data?
A: No. Lots of phenomena do not follow a polynomial form. They may be, for example,
exponential.

(1) General exponential equation f(x) = C e^(Ax), i.e. y = C e^(Ax).

Taking the natural logarithm of both sides, we get
ln y = ln C + Ax
Y = aX + b, where Y = ln y, X = x, a = A and b = ln C.
This is the equation of a line: the original data in the xy-plane are mapped into the XY-plane. This is called linearization.
The data (x, y) are transformed to (x, ln y).
To find the values of a and b we use the normal equations

ΣYᵢ = a ΣXᵢ + n b    (1)
ΣXᵢYᵢ = a ΣXᵢ² + b ΣXᵢ    (2)

After getting the values of a and b, A = a and C = e^b.


Example: An experiment gave the following values:

x: 1 5 7 9 12
y: 10 15 12 15 21

Fit an exponential curve y = C e^(Ax).

Solution:

xᵢ	yᵢ	Yᵢ = ln yᵢ	xᵢ²	xᵢYᵢ
1	10	2.302585	1	2.302585
5	15	2.708050	25	13.540251
7	12	2.484906	49	17.394345
9	15	2.708050	81	24.372452
12	21	3.044522	144	36.534269
Σxᵢ = 34	ΣYᵢ = 13.248113	Σxᵢ² = 300	ΣxᵢYᵢ = 94.143903
Substituting in the normal equations, with n = 5:

13.248113 = 34a + 5b
94.143903 = 300a + 34b

Solving, a = 0.058964 and b = 2.248664.

Then A = a = 0.058964 and C = e^b = e^2.248664 = 9.475068.

Hence, the best-fit curve is y = 9.475068 e^(0.058964x).
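The linearized exponential fit can be automated. A minimal Python sketch (the function name `fit_exponential` is mine) reproducing the example:

```python
from math import exp, log

def fit_exponential(xs, ys):
    """Fit y = C*exp(A*x) by linearization: fit the straight line
    Y = a*X + b to (x, ln y); then A = a and C = exp(b)."""
    n = len(xs)
    Ys = [log(y) for y in ys]           # natural log of the y data
    Sx, SY = sum(xs), sum(Ys)
    Sxx = sum(x * x for x in xs)
    SxY = sum(x * Y for x, Y in zip(xs, Ys))
    a = (n * SxY - Sx * SY) / (n * Sxx - Sx ** 2)
    b = (SY - a * Sx) / n
    return exp(b), a                    # (C, A)

# data from the worked example
xs = [1, 5, 7, 9, 12]
ys = [10, 15, 12, 15, 21]
C, A = fit_exponential(xs, ys)
print(round(C, 3), round(A, 4))  # C ≈ 9.475, A ≈ 0.059
```

Note that this minimizes the squared error of ln y rather than of y itself; that is the price of the linearization trick.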

(2) Power curve y = b xᵃ

Taking log₁₀ of both sides:

log₁₀ y = log₁₀ b + a log₁₀ x
Y = B + AX, where Y = log₁₀ y, X = log₁₀ x, A = a and B = log₁₀ b.

The normal equations are

ΣYᵢ = nB + A ΣXᵢ    (1)
ΣXᵢYᵢ = B ΣXᵢ + A ΣXᵢ²    (2)

Example: An experiment gave the following values:

v (ft/min): 350 400 500 600
t (min): 61 26 7 2.6

It is known that v and t are connected by the relation v = b tᵃ. Find the best possible values of a and b.

Solution:

v	t	Y = log₁₀ v	X = log₁₀ t	X²	XY
350	61	2.544068	1.785330	3.187403	4.542001
400	26	2.602060	1.414973	2.002150	3.681846
500	7	2.698970	0.845098	0.714191	2.280894
600	2.6	2.778151	0.414973	0.172203	1.152859
	ΣYᵢ = 10.623249	ΣXᵢ = 4.460374	ΣXᵢ² = 6.075946	ΣXᵢYᵢ = 11.657600
Substituting in the normal equations, with n = 4:

10.623249 = 4B + 4.460374A
11.657600 = 4.460374B + 6.075946A

On solving these equations, B = 2.845 and A = a = −0.17.

b = antilog(2.845) = 10^2.845 = 699.842

Hence, v = 699.842 t^(−0.17).
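The same linearization idea works for the power law. A minimal Python sketch (the function name `fit_power` is mine) applied to the v–t data:

```python
from math import log10

def fit_power(xs, ys):
    """Fit y = b*x^a by taking log10 of both sides: fit the line
    Y = B + A*X to (log10 x, log10 y); then a = A and b = 10**B."""
    n = len(xs)
    X = [log10(x) for x in xs]
    Y = [log10(y) for y in ys]
    SX, SY = sum(X), sum(Y)
    SXX = sum(x * x for x in X)
    SXY = sum(x * y for x, y in zip(X, Y))
    A = (n * SXY - SX * SY) / (n * SXX - SX ** 2)
    B = (SY - A * SX) / n  # B = log10(b)
    return 10 ** B, A      # (b, a)

# v = b * t^a with the velocity/time data from the example
t = [61, 26, 7, 2.6]
v = [350, 400, 500, 600]
b, a = fit_power(t, v)
print(round(b, 1), round(a, 3))  # a ≈ -0.171; b close to the text's 699.8
```

Solving the normal equations without intermediate rounding gives B ≈ 2.8463 and b ≈ 702; the text's 699.842 follows from rounding B to 2.845 before taking the antilog.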
