
Numerical Analysis / Civil Eng. / 3rd Class
Prepared by: Dr. Ahmed Sagban Saadoon

8- Curve Fitting

Least-squares criterion (linear regression)


Let $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ be a set of observations to be modeled, let $g(x)$ be the approximating model, and let $e_i$ be the local error (residual) between the observations and the model, that is $e_i = g_i - y_i$, where $g_i = g(x_i)$. In the least-squares method, to get a good approximating model, the total error (the sum of the squares of the local errors around the regression line)

$$E = \sum_{i=1}^{n} e_i^2$$

must be minimized.

[Figure: observed points $(x_i, y_i)$ with the fitted curve $g(x)$; the residual $e_i$ is the vertical distance between $g_i$ and $y_i$.]

Let $g(x) = a_0 + a_1 x$ (1st-order polynomial, i.e. a straight line). Then

$$E = \sum_{i=1}^{n} (g_i - y_i)^2 = \sum_{i=1}^{n} (a_0 + a_1 x_i - y_i)^2 .$$

The total error $E$ is minimized if $\dfrac{\partial E}{\partial a_0} = 0$ and $\dfrac{\partial E}{\partial a_1} = 0$.

$$\frac{\partial E}{\partial a_0} = \sum_{i=1}^{n} 2(a_0 + a_1 x_i - y_i) = 2 \sum_{i=1}^{n} (a_0 + a_1 x_i - y_i) = 0 ,$$

$$\sum_{i=1}^{n} a_0 + \sum_{i=1}^{n} a_1 x_i - \sum_{i=1}^{n} y_i = 0 . \quad \text{But } \sum_{i=1}^{n} a_0 = n\,a_0 ,$$

$$\Rightarrow \; n\,a_0 + \Big(\sum_{i=1}^{n} x_i\Big) a_1 = \sum_{i=1}^{n} y_i . \qquad (1)$$

Similarly,

$$\frac{\partial E}{\partial a_1} = \sum_{i=1}^{n} 2(a_0 + a_1 x_i - y_i)\,x_i = 2 \sum_{i=1}^{n} (a_0 x_i + a_1 x_i^2 - x_i y_i) = 0 ,$$

$$\Rightarrow \; \Big(\sum_{i=1}^{n} x_i\Big) a_0 + \Big(\sum_{i=1}^{n} x_i^2\Big) a_1 = \sum_{i=1}^{n} x_i y_i . \qquad (2)$$

In matrix form:

$$\begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix} .$$


Generally, if $g(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k$ (kth-order polynomial), we will have

$$\begin{bmatrix}
n & \sum x_i & \sum x_i^2 & \cdots & \sum x_i^k \\
\sum x_i & \sum x_i^2 & \sum x_i^3 & \cdots & \sum x_i^{k+1} \\
\sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \cdots & \sum x_i^{k+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\sum x_i^k & \sum x_i^{k+1} & \sum x_i^{k+2} & \cdots & \sum x_i^{2k}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_k \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \\ \vdots \\ \sum x_i^k y_i \end{bmatrix},$$

where all sums run over $i = 1, \ldots, n$.

Statistical definitions

$\bar{y}$ is the mean of $y$.

$E_m$ is the total sum of the squares around the mean of $y$, that is

$$E_m = \sum_{i=1}^{n} (y_i - \bar{y})^2 .$$

$r^2$ is the determination coefficient, given by $r^2 = \dfrac{E_m - E}{E_m}$.

$r$ is the correlation coefficient, given by $r = \sqrt{r^2}$.

For a perfect fit ($E = 0$), $r = r^2 = 1$, signifying that the approximating model $g(x)$ explains 100% of the variability of the data (observations).
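These definitions translate directly into a short helper. This is a sketch added for illustration; `model` is assumed to be any callable that returns $g(x)$ for an array of $x$ values:

```python
import numpy as np

def r_squared(model, xs, ys):
    """r^2 = (Em - E) / Em, with E and Em as defined above."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    E = ((model(xs) - ys)**2).sum()      # sum of squared residuals
    Em = ((ys - ys.mean())**2).sum()     # sum of squares about the mean
    return (Em - E) / Em
```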

Example 1: Given the following data:

x      0    1    2     3     4     5
f(x)   2.1  7.7  13.6  27.2  40.9  61.1

Using the least-squares criterion:
1- Fit a 1st-order polynomial (straight line) to this data.
2- Fit a 2nd-order polynomial (quadratic equation) to this data.
Solution:
1- Let the straight line be $g(x) = a_0 + a_1 x$; then we have

$$\begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix},$$

$n = 6$, $\sum x_i = 0 + 1 + 2 + 3 + 4 + 5 = 15$, $\sum x_i^2 = 0^2 + 1^2 + 2^2 + 3^2 + 4^2 + 5^2 = 55$,


$\sum y_i = 2.1 + 7.7 + 13.6 + 27.2 + 40.9 + 61.1 = 152.6$,

$\sum x_i y_i = 0(2.1) + 1(7.7) + 2(13.6) + 3(27.2) + 4(40.9) + 5(61.1) = 585.6$.

$$\begin{bmatrix} 6 & 15 \\ 15 & 55 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 152.6 \\ 585.6 \end{bmatrix}.$$

Use Cramer's rule:

$$a_0 = \frac{\begin{vmatrix} 152.6 & 15 \\ 585.6 & 55 \end{vmatrix}}{\begin{vmatrix} 6 & 15 \\ 15 & 55 \end{vmatrix}} = \frac{152.6(55) - 15(585.6)}{6(55) - 15(15)} = \frac{-391}{105} = -3.72381,$$

$$a_1 = \frac{\begin{vmatrix} 6 & 152.6 \\ 15 & 585.6 \end{vmatrix}}{\begin{vmatrix} 6 & 15 \\ 15 & 55 \end{vmatrix}} = \frac{6(585.6) - 152.6(15)}{6(55) - 15(15)} = \frac{1224.6}{105} = 11.66286 .$$

$$\therefore \; g(x) = -3.72381 + 11.66286\,x .$$
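As a quick numerical check (an added illustration, not part of the original solution; `np.polyfit` returns coefficients from the highest power down):

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])

a1, a0 = np.polyfit(x, y, 1)
print(a0, a1)   # about -3.72381 and 11.66286, matching the hand solution
```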

2- Let the 2nd-order polynomial be $q(x) = b_0 + b_1 x + b_2 x^2$; then we have

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix},$$

$\sum x_i^3 = 0^3 + 1^3 + 2^3 + 3^3 + 4^3 + 5^3 = 225$, $\sum x_i^4 = 0^4 + 1^4 + 2^4 + 3^4 + 4^4 + 5^4 = 979$,

$\sum x_i^2 y_i = 0^2(2.1) + 1^2(7.7) + 2^2(13.6) + 3^2(27.2) + 4^2(40.9) + 5^2(61.1) = 2488.8$.

$$\begin{bmatrix} 6 & 15 & 55 \\ 15 & 55 & 225 \\ 55 & 225 & 979 \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} 152.6 \\ 585.6 \\ 2488.8 \end{bmatrix}.$$
Use Cramer's rule:


The common denominator is

$$\begin{vmatrix} 6 & 15 & 55 \\ 15 & 55 & 225 \\ 55 & 225 & 979 \end{vmatrix} = 6\begin{vmatrix} 55 & 225 \\ 225 & 979 \end{vmatrix} - 15\begin{vmatrix} 15 & 55 \\ 225 & 979 \end{vmatrix} + 55\begin{vmatrix} 15 & 55 \\ 55 & 225 \end{vmatrix} = 19320 - 34650 + 19250 = 3920 .$$

Then

$$b_0 = \frac{\begin{vmatrix} 152.6 & 15 & 55 \\ 585.6 & 55 & 225 \\ 2488.8 & 225 & 979 \end{vmatrix}}{3920} = \frac{152.6\begin{vmatrix} 55 & 225 \\ 225 & 979 \end{vmatrix} - 585.6\begin{vmatrix} 15 & 55 \\ 225 & 979 \end{vmatrix} + 2488.8\begin{vmatrix} 15 & 55 \\ 55 & 225 \end{vmatrix}}{3920} = \frac{9716}{3920} = 2.47857,$$

$$b_1 = \frac{\begin{vmatrix} 6 & 152.6 & 55 \\ 15 & 585.6 & 225 \\ 55 & 2488.8 & 979 \end{vmatrix}}{3920} = \frac{-152.6\begin{vmatrix} 15 & 225 \\ 55 & 979 \end{vmatrix} + 585.6\begin{vmatrix} 6 & 55 \\ 55 & 979 \end{vmatrix} - 2488.8\begin{vmatrix} 6 & 55 \\ 15 & 225 \end{vmatrix}}{3920} = \frac{9248.4}{3920} = 2.35929,$$

$$b_2 = \frac{\begin{vmatrix} 6 & 15 & 152.6 \\ 15 & 55 & 585.6 \\ 55 & 225 & 2488.8 \end{vmatrix}}{3920} = \frac{152.6\begin{vmatrix} 15 & 55 \\ 55 & 225 \end{vmatrix} - 585.6\begin{vmatrix} 6 & 15 \\ 55 & 225 \end{vmatrix} + 2488.8\begin{vmatrix} 6 & 15 \\ 15 & 55 \end{vmatrix}}{3920} = \frac{7294}{3920} = 1.86071 .$$

$$\therefore \; q(x) = 2.47857 + 2.35929\,x + 1.86071\,x^2 .$$
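The quadratic coefficients can be checked the same way (again an added illustration, not part of the original solution):

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])

b2, b1, b0 = np.polyfit(x, y, 2)   # highest power first
print(b0, b1, b2)   # about 2.47857, 2.35929, 1.86071
```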


Statistical comparison

 x_i | y_i   | (y_i - ybar)^2 | For g(x): (g(x_i) - y_i)^2 | For q(x): (q(x_i) - y_i)^2
-----+-------+----------------+----------------------------+---------------------------
  0  | 2.1   |   544.44444    |        33.91676            |        0.14331
  1  | 7.7   |   314.47111    |         0.05714            |        1.00286
  2  | 13.6  |   140.02778    |        36.02286            |        1.08158
  3  | 27.2  |     3.12111    |        16.52229            |        0.80491
  4  | 40.9  |   239.21778    |         4.11124            |        0.61951
  5  | 61.1  |  1272.11111    |        42.37390            |        0.09439
 Sum | 152.6 |  2513.39       |       133.00               |        3.75

$$\bar{y} = \frac{\sum y_i}{n} = \frac{152.6}{6} = 25.43333 .$$

For $g(x)$: $r^2 = \dfrac{E_m - E}{E_m} = \dfrac{2513.39 - 133.00}{2513.39} = 0.9471 .$

For $q(x)$: $r^2 = \dfrac{2513.39 - 3.75}{2513.39} = 0.9985 .$

Since $r^2$ for $q(x)$ is closer to one, the quadratic equation $q(x)$ represents the given data better than the linear equation $g(x)$.
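Both $r^2$ values can be reproduced numerically; a minimal sketch using the same data (an editorial addition):

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])
Em = ((y - y.mean())**2).sum()   # total sum of squares about the mean

for k in (1, 2):
    p = np.polyfit(x, y, k)
    E = ((np.polyval(p, x) - y)**2).sum()   # residual sum of squares
    print(k, (Em - E) / Em)   # about 0.9471 for k=1 and 0.9985 for k=2
```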


Example 2: (Final 2014) The volume of water pumped by a pump is measured as a function of time as tabulated below:

Time, t, sec     0   1   5     8
Volume, V, m3    0   1   8.8   23.4

Fit the equation $V = at + bt^3$ (where $a$ and $b$ are constants) to the above data using the least-squares method.

Solution:
Since the required equation $V = at + bt^3$ is a 3rd-order polynomial, to make use of the general least-squares matrix we compare it with the general form of a 3rd-order polynomial $g(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$. The first and third constants ($a_0$ and $a_2$) do not appear in the required equation, so we cancel the first and third rows and columns of the general least-squares $(4 \times 4)$ matrix,
$$\begin{bmatrix}
n & \sum t_i & \sum t_i^2 & \sum t_i^3 \\
\sum t_i & \sum t_i^2 & \sum t_i^3 & \sum t_i^4 \\
\sum t_i^2 & \sum t_i^3 & \sum t_i^4 & \sum t_i^5 \\
\sum t_i^3 & \sum t_i^4 & \sum t_i^5 & \sum t_i^6
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}
=
\begin{bmatrix} \sum V_i \\ \sum t_i V_i \\ \sum t_i^2 V_i \\ \sum t_i^3 V_i \end{bmatrix},$$
to get,
$$\begin{bmatrix} \sum t_i^2 & \sum t_i^4 \\ \sum t_i^4 & \sum t_i^6 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} \sum t_i V_i \\ \sum t_i^3 V_i \end{bmatrix},$$
n n n
$\sum t_i^2 = 0^2 + 1^2 + 5^2 + 8^2 = 90$, $\sum t_i^4 = 0^4 + 1^4 + 5^4 + 8^4 = 4722$, $\sum t_i^6 = 0^6 + 1^6 + 5^6 + 8^6 = 277770$,

$\sum t_i V_i = 0(0) + 1(1) + 5(8.8) + 8(23.4) = 232.2$, and $\sum t_i^3 V_i = 0^3(0) + 1^3(1) + 5^3(8.8) + 8^3(23.4) = 13081.8$.

 90 4722  a   232.2 
  b   13081.8 .
 4722 277770    
Solving the above matrix, we get: a  1.008852  1 and b  0.029946  0.03 .
 The required equation is V  t  0.03t 3 .
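For comparison (an added sketch, not part of the exam solution), the same constrained fit can be set up in NumPy by building a design matrix containing only the $t$ and $t^3$ columns; `np.linalg.lstsq` then minimizes the same sum of squared residuals as the reduced normal equations above:

```python
import numpy as np

t = np.array([0, 1, 5, 8], dtype=float)
V = np.array([0, 1, 8.8, 23.4])

# Design matrix with only the t and t^3 columns (no constant or t^2 term).
A = np.column_stack([t, t**3])
coeffs, *_ = np.linalg.lstsq(A, V, rcond=None)
a, b = coeffs
print(a, b)   # about 1.008852 and 0.029946, i.e. V = t + 0.03*t^3
```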
