Linear Regression

The document discusses the concepts of regression analysis, specifically focusing on simple linear regression and the method of least squares for fitting a model to data. It explains how to derive the equations for estimating parameters and emphasizes the importance of minimizing the sum of squared errors in curve fitting. Additionally, it touches on the correlation coefficient as a measure of the goodness of fit for the regression model.

Uploaded by

dhrisyaboban2004

Regression analysis is a statistical approach to modelling the relationship between variables. The variable being predicted is called the dependent variable. The variable or variables used to predict the value of the dependent variable are called independent variables.

Regression analysis involving one independent variable and one dependent variable, in which the relationship between the variables is approximated by a straight line, is called simple linear regression. Regression analysis involving two or more independent variables is called multiple regression analysis.

Let x be the independent variable and y the dependent variable:

    Y = mx + c

where m is the slope and c is the y-intercept. This is the equation of a line: m is the slope of the line, and c is where the line cuts the y-axis. We use this base equation to train a model with a data set and predict the value of y for any given value of x. To determine the values of m and c that make the error for the data set minimum, we use the least squares method.

Regression Model and Regression Equation

The model used in simple linear regression is

    y = β0 + β1x + ε

Here β0 and β1 are referred to as the parameters of the model, and ε is a random variable referred to as the error term. The error term accounts for the variability in y that cannot be explained by the linear relationship between x and y.

The equation that describes how the expected value of y, denoted E(y), is related to x is called the regression equation:

    E(y) = β0 + β1x

Estimated Regression Equation

In practice the values of the population parameters β0 and β1 are not known, so sample statistics b0 and b1 are computed from the sample data as estimates of β0 and β1. Substituting the computed statistics b0 and b1 into the regression equation, we obtain the estimated regression equation:

    ŷ = b0 + b1x

Least Squares Method

The least squares method is a procedure for using sample data to find the estimated regression equation. Consider the following sample of 10 restaurants, giving the student population near each restaurant and the restaurant's quarterly sales:

    Restaurant i   Student Population xi (1000s)   Quarterly Sales yi ($1000s)
    1               2                               58
    2               6                              105
    3               8                               88
    4               8                              118
    5              12                              117
    6              16                              137
    7              20                              157
    8              20                              169
    9              22                              149
    10             26                              202

    Σxi = 140, Σyi = 1300

We can choose a simple linear model to represent the relationship between quarterly sales and student population. Our next task is to use the sample data in this table to determine the values of b0 and b1 in the estimated simple linear regression equation

    ŷ = b0 + b1x

where

    ŷ  = estimated value of quarterly sales ($1000s) for the ith restaurant
    b0 = the y-intercept of the estimated regression line
    b1 = the slope of the estimated regression line
    x  = size of the student population (1000s) for the ith restaurant

The slope and y-intercept for the estimated regression equation are

    b1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
    b0 = ȳ − b1x̄

where

    xi = value of the independent variable for the ith observation
    yi = value of the dependent variable for the ith observation
    x̄  = mean value of the independent variable
    ȳ  = mean value of the dependent variable
    n  = total number of observations

Calculations for the least squares estimated equation:

    i    xi    yi    xi − x̄   yi − ȳ   (xi − x̄)(yi − ȳ)   (xi − x̄)²
    1     2    58     −12      −72          864              144
    2     6   105      −8      −25          200               64
    3     8    88      −6      −42          252               36
    4     8   118      −6      −12           72               36
    5    12   117      −2      −13           26                4
    6    16   137       2        7           14                4
    7    20   157       6       27          162               36
    8    20   169       6       39          234               36
    9    22   149       8       19          152               64
    10   26   202      12       72          864              144
        140  1300                          2840              568

With x̄ = 140/10 = 14 and ȳ = 1300/10 = 130:

    b1 = 2840/568 = 5

The calculation of the y-intercept follows:

    b0 = ȳ − b1x̄ = 130 − 5 × 14 = 130 − 70 = 60

Hence the estimated regression equation is ŷ = 60 + 5x.
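The slope and intercept computed above can be checked with a short script (a sketch in plain Python, using the 10-restaurant sample from the table):

```python
# Least squares slope and intercept for the restaurant sample:
# b1 = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2),  b0 = ybar - b1*xbar
x = [2, 6, 8, 8, 12, 16, 20, 20, 22, 26]                # student population (1000s)
y = [58, 105, 88, 118, 117, 137, 157, 169, 149, 202]    # quarterly sales ($1000s)

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
     / sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar
print(b1, b0)   # 5.0 60.0
```

The printed values agree with the hand calculation, so ŷ = 60 + 5x.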
Fit a parabola y = a + bx + cx² by least squares to data given at x = 1, 2, ..., 9, choosing the step-deviation method. Take the working origin at (5, 8), i.e., put X = x − 5 and Y = y − 8, so that the X values are −4, −3, ..., 3, 4 and ΣX = 0. The equation of the parabola becomes

    Y = a + bX + cX²

The normal equations are

    na + bΣX + cΣX² = ΣY
    aΣX + bΣX² + cΣX³ = ΣXY
    aΣX² + bΣX³ + cΣX⁴ = ΣX²Y

Since the X values are symmetric about zero, ΣX = ΣX³ = 0 and the system decouples: the second equation gives b directly, while the first and third together determine a and c. Here n = 9, ΣX² = 60 and ΣX⁴ = 708, and the second equation reduces to 60b = 51, so that b = 0.85. Solving the remaining pair gives a = 0.0043 and c = −0.2673. Hence

    y − 8 = 0.0043 + 0.85(x − 5) − 0.2673(x − 5)²,

which on expansion gives

    y = −0.2673x² + 3.523x − 2.9282.
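The step-deviation trick can be sketched in NumPy. The y-values below are illustrative assumptions generated from the fitted curve (the original observations are not reproduced here); the point is that shifting to X = x − 5 makes ΣX = ΣX³ = 0, so the normal equations decouple:

```python
import numpy as np

x = np.arange(1, 10, dtype=float)     # x = 1..9 as in the example
X = x - 5                             # step deviation: working origin at x = 5
# Assumed y-values generated from the fitted curve, for illustration only:
y = 8 + 0.0043 + 0.85 * X - 0.2673 * X**2

print(X.sum(), (X**3).sum())          # 0.0 0.0 -> odd power sums vanish
Y = y - 8                             # shift y to the working origin as well

n = len(X)
# Decoupled normal equations: b from the X-equation alone,
# a and c from the remaining 2x2 system in (a, c).
b = (X * Y).sum() / (X**2).sum()
A = np.array([[n, (X**2).sum()], [(X**2).sum(), (X**4).sum()]])
rhs = np.array([Y.sum(), (X**2 * Y).sum()])
a, c = np.linalg.solve(A, rhs)
print(round(a, 4), round(b, 4), round(c, 4))   # 0.0043 0.85 -0.2673
```

Because the assumed data lie exactly on a parabola, the solve recovers the coefficients exactly (up to rounding).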

Multiple Linear Least Squares

A linear function of two variables, Z = a0 + a1x + a2y, is fitted to the data (x1, y1, Z1), ..., (xm, ym, Zm) so that the sum

    S = Σ(Zi − a0 − a1xi − a2yi)²

is minimum. The normal equations are

    ma0 + a1Σxi + a2Σyi = ΣZi
    a0Σxi + a1Σxi² + a2Σxiyi = ΣxiZi
    a0Σyi + a1Σxiyi + a2Σyi² = ΣyiZi

from which a0, a1 and a2 can be calculated. For example, Z = a0 + a1x + a2y is fitted to the data (x, y, Z) given below:

    (0, 0, 2), (1, 1, 4), (2, 3, 3), (4, 2, 16) and (6, 8, 8).

Chapter 4

Least Squares and Fourier Transforms

4.1 INTRODUCTION
In experimental work, we often encounter the problem of fitting a curve to data which are subject to errors. The strategy for such cases is to derive an approximating function that broadly fits the data without necessarily passing through the given points. The curve drawn is such that the discrepancy between the data points and the curve is least. In the method of least squares, the sum of the squares of the errors is minimized. For continuous functions, the method is discussed in Section 4.4.

The problem of approximating a function by means of Chebyshev polynomials is described in Section 4.5. This is important from the standpoint of digital computation.

In Chapter 3, we concentrated on polynomial interpolation, i.e., interpolation based on a linear combination of the functions 1, x, x², ..., xⁿ. On the other hand, trigonometric interpolation, i.e., interpolation based on trigonometric functions such as cos x, sin x, cos 2x, sin 2x, ..., plays an important role in modelling vibrating systems. The Fourier series is a useful tool for dealing with periodic systems; but for aperiodic systems, the Fourier transform is the primary tool available. The computations of the discrete Fourier transform and the Fast Fourier Transform (FFT) are discussed in detail in Section 4.6.
4.2 LEAST SQUARES CURVE FITTING PROCEDURES

Let the given data points be (xi, yi), i = 1, 2, ..., m, and suppose the corresponding value on the fitting curve is f(xi). If ei is the error of approximation at x = xi, then we have

    ei = yi − f(xi)    (4.1)

If we write

    S = [y1 − f(x1)]² + [y2 − f(x2)]² + ... + [ym − f(xm)]²
      = e1² + e2² + ... + em²    (4.2)

then the method of least squares consists in minimizing S, i.e., the sum of the squares of the errors. In the following sections, we shall study the linear and nonlinear least squares fitting to given data (xi, yi), i = 1, 2, ..., m.

4.2.1 Fitting a Straight Line

Let Y = a0 + a1x be the straight line to be fitted to the given data, viz. (xi, yi), i = 1, 2, ..., m. Then, corresponding to Eq. (4.2), we have

    S = [y1 − (a0 + a1x1)]² + [y2 − (a0 + a1x2)]² + ... + [ym − (a0 + a1xm)]²    (4.3)

For S to be minimum, we have

    ∂S/∂a0 = 0 = −2[y1 − (a0 + a1x1)] − 2[y2 − (a0 + a1x2)] − ... − 2[ym − (a0 + a1xm)]    (4.4a)

and

    ∂S/∂a1 = 0 = −2x1[y1 − (a0 + a1x1)] − 2x2[y2 − (a0 + a1x2)] − ... − 2xm[ym − (a0 + a1xm)]    (4.4b)

The above equations simplify to

    ma0 + a1(x1 + x2 + ... + xm) = y1 + y2 + ... + ym    (4.5a)

and

    a0(x1 + x2 + ... + xm) + a1(x1² + x2² + ... + xm²) = x1y1 + x2y2 + ... + xmym    (4.5b)

or, more compactly, to

    ma0 + a1 Σxi = Σyi    (4.6a)

and

    a0 Σxi + a1 Σxi² = Σxiyi    (4.6b)

where the sums run over i = 1 to m.

Equations (4.6) are called the normal equations, and can be solved for a0 and a1, since xi and yi are known quantities. We can easily obtain

    a1 = [m Σxiyi − (Σxi)(Σyi)] / [m Σxi² − (Σxi)²]    (4.7)

and then

    a0 = ȳ − a1x̄    (4.8)

Since ∂²S/∂a0² and ∂²S/∂a1² are both positive at the points a0 and a1, it follows that these values provide a minimum of S. In Eq. (4.8), x̄ and ȳ are the means of x and y, respectively. From Eq. (4.8), we have

    ȳ = a0 + a1x̄

which shows that the fitted straight line passes through the centroid of the data points.
Sometimes, a measure of goodness of fit is adopted. The correlation coefficient (cc) is defined as

    cc = sqrt[(Sy − S)/Sy]    (4.9)

where

    Sy = Σ(yi − ȳ)²    (4.10)

and S is defined by Eq. (4.3). If cc is close to 1, then the fit is considered to be good, although this is not always true.

Example 4.1 Find the best values of a0 and a1 if the straight line Y = a0 + a1x is fitted to the data (xi, yi):

    (1, 0.6), (2, 2.4), (3, 3.5), (4, 4.8), (5, 5.7)

Find also the correlation coefficient.

From the table of values given below, we find x̄ = 3, ȳ = 3.4, and

    a1 = [5(63.6) − 15(17)] / [5(55) − 15²] = 63/50 = 1.26

so that a0 = ȳ − a1x̄ = 3.4 − 1.26(3) = −0.38, and the best fit is Y = −0.38 + 1.26x.

    x     y     x²    xy     (y − ȳ)²   e²
    1    0.6     1    0.6     7.84     0.0784
    2    2.4     4    4.8     1.00     0.0676
    3    3.5     9   10.5     0.01     0.0100
    4    4.8    16   19.2     1.96     0.0196
    5    5.7    25   28.5     5.29     0.0484
    15  17.0    55   63.6    16.10     0.2240

The correlation coefficient is therefore

    cc = sqrt[(16.10 − 0.2240)/16.10] = 0.9930.
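Example 4.1 can be reproduced with a few lines of NumPy (a sketch; cc follows the definition in Eq. (4.9)):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([0.6, 2.4, 3.5, 4.8, 5.7])

m = len(x)
# Normal-equation solution, Eqs. (4.7) and (4.8)
a1 = (m * (x * y).sum() - x.sum() * y.sum()) / (m * (x**2).sum() - x.sum()**2)
a0 = y.mean() - a1 * x.mean()

# Correlation coefficient, Eqs. (4.9) and (4.10)
S = ((y - (a0 + a1 * x))**2).sum()
Sy = ((y - y.mean())**2).sum()
cc = np.sqrt((Sy - S) / Sy)
print(round(a1, 4), round(a0, 4), round(cc, 4))   # a1 ≈ 1.26, a0 ≈ -0.38, cc ≈ 0.993
```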

Example 4.2 Certain experimental values of x and y are given below:

    (0, −1), (2, 5), (5, 12), (7, 20)

If the straight line Y = a0 + a1x is fitted to the above data, find the approximate values of a0 and a1. The table of values is given below.

    x     y    x²    xy
    0    −1     0     0
    2     5     4    10
    5    12    25    60
    7    20    49   140
    14   36    78   210

The normal equations are

    4a0 + 14a1 = 36
    14a0 + 78a1 = 210

Solving the two equations, we obtain

    a0 = −1.1381 and a1 = 2.8966

Hence the best straight line fit is given by

    Y = −1.1381 + 2.8966x.
4.2.2 Multiple Linear Least Squares

Suppose that z is a linear function of two variables x and y. If the function z = a0 + a1x + a2y is fitted to the data (z1, x1, y1), (z2, x2, y2), ..., (zm, xm, ym), then the sum

    S = Σ(zi − a0 − a1xi − a2yi)²

should be minimum. For this, we have

    ∂S/∂a0 = −2 Σ (zi − a0 − a1xi − a2yi) = 0,
    ∂S/∂a1 = −2 Σ xi(zi − a0 − a1xi − a2yi) = 0,

and

    ∂S/∂a2 = −2 Σ yi(zi − a0 − a1xi − a2yi) = 0.

These equations simplify to

    ma0 + a1Σxi + a2Σyi = Σzi
    a0Σxi + a1Σxi² + a2Σxiyi = Σxizi    (4.11)
    a0Σyi + a1Σxiyi + a2Σyi² = Σyizi

from which a0, a1 and a2 can be determined.

Example 4.3 Find the values of a0, a1 and a2 so that the function z = a0 + a1x + a2y is fitted to the data (x, y, z) given below.

    (0, 0, 2), (1, 1, 4), (2, 3, 3), (4, 2, 16) and (6, 8, 8).

We form the following table of values:

    x     y     z    x²    xy    xz    y²    yz
    0     0     2     0     0     0     0     0
    1     1     4     1     1     4     1     4
    2     3     3     4     6     6     9     9
    4     2    16    16     8    64     4    32
    6     8     8    36    48    48    64    64
    13   14    33    57    63   122    78   109

The normal equations are

    5a0 + 13a1 + 14a2 = 33
    13a0 + 57a1 + 63a2 = 122
    14a0 + 63a1 + 78a2 = 109

The solution of the above system is

    a0 = 2, a1 = 5 and a2 = −3.
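The same system can be solved by setting up the design matrix directly and calling `numpy.linalg.lstsq` (a sketch; this is equivalent to solving the normal equations (4.11)):

```python
import numpy as np

# Data (x, y, z) from Example 4.3
pts = [(0, 0, 2), (1, 1, 4), (2, 3, 3), (4, 2, 16), (6, 8, 8)]
x, y, z = (np.array(col, dtype=float) for col in zip(*pts))

# Design matrix with columns [1, x, y]; least squares minimizes ||A·a − z||²
A = np.column_stack([np.ones_like(x), x, y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(coef, 6))   # [ 2.  5. -3.]
```

Here the fit is exact: every data point satisfies z = 2 + 5x − 3y.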

4.2.3 Linearization of Nonlinear Laws

The given data may not always follow a linear relationship. In such cases the law can often be transformed into a linear one, as in the following cases.

(a) y = ax + b/x
Multiplying both sides by x, this can be written as xy = ax² + b. Put xy = Y, x² = X. With these transformations, it becomes a linear model, Y = aX + b.

(b) x yᵃ = b
Taking logarithms of both sides, we get

    log10 x + a log10 y = log10 b.

In this case, we put

    log10 y = Y, log10 x = X, (log10 b)/a = A0 and −1/a = A1,

so that

    Y = A0 + A1X.

(c) y = a bˣ
Taking logarithms of both sides, we obtain

    log10 y = log10 a + x log10 b,

i.e.,

    Y = A0 + A1X,

where

    Y = log10 y, A0 = log10 a, X = x, and A1 = log10 b.

(d) y = a xᵇ
We have

    log10 y = log10 a + b log10 x,

i.e.,

    Y = A0 + A1X,

where

    Y = log10 y, A0 = log10 a, A1 = b and X = log10 x.

(e) y = a eᵇˣ
In this case, we write

    ln y = ln a + bx,

i.e.,

    Y = A0 + A1X,

where

    Y = ln y, A0 = ln a, A1 = b and X = x.

Example 4.4 Fit a curve of the form y = a eᵇˣ to data whose transformed values X = x and Y = ln y are tabulated below.

We have y = a eᵇˣ; therefore

    ln y = ln a + bx,

i.e., Y = A0 + A1X, where Y = ln y, A0 = ln a, A1 = b and X = x. The table of values is given below.

    X    Y = ln y    X²     XY
    1     0.905       1     0.905
    3     1.905       9     5.715
    5     2.905      25    14.525
    7     3.905      49    27.335
    9     4.905      81    44.145
    25   14.525     165    92.625

We obtain

    X̄ = 5, Ȳ = 2.905

    A1 = [5(92.625) − 25(14.525)] / [5(165) − 625] = 0.5 = b.

Then

    A0 = Ȳ − A1X̄ = 2.905 − 0.5(5) = 0.405.

Hence,

    a = e^A0 = e^0.405 = 1.499.

It follows that the required curve is of the form

    y = 1.499 e^(0.5x).
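The log-transform fit of Example 4.4 can be sketched as follows (NumPy; the Y-values are the tabulated ln y):

```python
import numpy as np

X = np.array([1, 3, 5, 7, 9], dtype=float)
Y = np.array([0.905, 1.905, 2.905, 3.905, 4.905])   # Y = ln y

m = len(X)
# Straight-line fit in the transformed variables: Y = A0 + A1*X
A1 = (m * (X * Y).sum() - X.sum() * Y.sum()) / (m * (X**2).sum() - X.sum()**2)
A0 = Y.mean() - A1 * X.mean()
a, b = np.exp(A0), A1          # back-transform: a = e^A0, b = A1
print(round(b, 3), round(a, 3))   # 0.5 1.499
```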
Example 4.5 Using the method of least squares, fit a curve of the form

    y = x/(a + bx)

to the following data:

    (3, 7.148), (5, 10.231), (8, 13.509), (12, 16.434).

We have

    1/y = (a + bx)/x = b + a(1/x),

i.e., Y = A0 + A1X, where Y = 1/y, X = 1/x, A0 = b and A1 = a. The table of values is

    X       Y       X²      XY
    0.333   0.140   0.111   0.047
    0.200   0.098   0.040   0.020
    0.125   0.074   0.016   0.009
    0.083   0.061   0.007   0.005
    0.741   0.373   0.174   0.081

We obtain

    X̄ = 0.185, Ȳ = 0.093

    A1 = a = [4(0.081) − 0.741(0.373)] / [4(0.174) − (0.741)²] = 0.324

and A0 = b = Ȳ − aX̄ = 0.0331. Hence the required fit is Y = 0.0331 + 0.324X, which gives

    y = x/(0.324 + 0.0331x).

Note: The given data was obtained from the relation y = x/(0.3162 + 0.0345x).
4.2.4 Curve Fitting by Polynomials

Let the polynomial of the nth degree,

    Y = a0 + a1x + a2x² + ... + an x^n    (4.12)

be fitted to the data points (xi, yi), i = 1, 2, ..., m. We then have

    S = Σ [yi − (a0 + a1xi + a2xi² + ... + an xi^n)]²    (4.13)

Equating to zero the first partial derivatives and simplifying, we obtain the normal equations:

    ma0 + a1Σxi + a2Σxi² + ... + anΣxi^n = Σyi
    a0Σxi + a1Σxi² + ... + anΣxi^(n+1) = Σxiyi
    ...
    a0Σxi^n + a1Σxi^(n+1) + ... + anΣxi^(2n) = Σxi^n yi    (4.14)

The system (4.14) constitutes (n + 1) equations in (n + 1) unknowns, and hence can be solved for a0, a1, ..., an. Equation (4.12) then gives the required polynomial of the nth degree.

For larger values of n, the system (4.14) becomes unstable, with the result that round-off errors in the data may cause large changes in the solution. Such systems occur quite often in practical problems and are called ill-conditioned systems. Orthogonal polynomials are most suited to solve such systems, and one particular form of these polynomials, the Chebyshev polynomials, will be discussed later in this chapter.

Example 4.6 Fit a polynomial of the second degree to the data points (x, y) given by

    (0, 1), (1, 6) and (2, 17).

For n = 2, Eq. (4.14) requires Σxi, Σxi², Σxi³, Σxi⁴, Σyi, Σxiyi and Σxi²yi. The table of values is as follows:

    x     y    x²    x³    x⁴    xy    x²y
    0     1     0     0     0     0     0
    1     6     1     1     1     6     6
    2    17     4     8    16    34    68
    3    24     5     9    17    40    74

The normal equations are

    3a0 + 3a1 + 5a2 = 24
    3a0 + 5a1 + 9a2 = 40
    5a0 + 9a1 + 17a2 = 74

Solving the above system, we obtain

    a0 = 1, a1 = 2 and a2 = 3.

The required polynomial is given by Y = 1 + 2x + 3x², and it can be seen that this fitting is exact.

Example 4.7 Fit a second degree parabola y = a0 + a1x + a2x² to the data

    (1, 0.63), (3, 2.05), (4, 4.08), (6, 10.78).

The table of values is

    x      y      x²    x³     x⁴      xy      x²y
    1     0.63     1     1      1     0.63     0.63
    3     2.05     9    27     81     6.15    18.45
    4     4.08    16    64    256    16.32    65.28
    6    10.78    36   216   1296    64.68   388.08
    14   17.54    62   308   1634    87.78   472.44

The normal equations are

    4a0 + 14a1 + 62a2 = 17.54
    14a0 + 62a1 + 308a2 = 87.78
    62a0 + 308a1 + 1634a2 = 472.44,

from which we obtain

    a0 = 1.24, a1 = −1.05 and a2 = 0.44.
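Example 4.7 can be checked with `numpy.polyfit`, which solves the same normal equations (coefficients are returned highest power first):

```python
import numpy as np

x = np.array([1, 3, 4, 6], dtype=float)
y = np.array([0.63, 2.05, 4.08, 10.78])

# Degree-2 least squares fit: returns [a2, a1, a0]
a2, a1, a0 = np.polyfit(x, y, 2)
print(round(a0, 2), round(a1, 2), round(a2, 2))   # 1.24 -1.05 0.44
```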


4.2.5 Curve Fitting by a Sum of Exponentials

A frequently encountered problem in engineering and physics is that of fitting a sum of exponentials of the form

    y = f(x) = A1 e^(λ1 x) + A2 e^(λ2 x) + ... + An e^(λn x)    (4.15)

to a set of data points (xi, yi), i = 1, 2, ..., m, where m is much greater than 2n. We describe here a computational technique due to Moore [1974]. For ease of presentation, we assume n = 2. Then the function

    y = A1 e^(λ1 x) + A2 e^(λ2 x)    (4.16)

is to be fitted to the data (xi, yi), i = 1, 2, ..., m, and m > 4. It is known that y(x) satisfies a differential equation of the form

    d²y/dx² = a1 (dy/dx) + a2 y    (4.17)

where the constants a1 and a2 have to be determined. Integrating Eq. (4.17), we obtain

    y′(x) − y′(x0) = a1[y(x) − y(x0)] + a2 ∫[x0,x] y dx    (4.18)

where x0 is the initial value of x and y′(x0) = dy/dx at x0. Integrating Eq. (4.18), we get

    y(x) − y(x0) − (x − x0) y′(x0) = a1 ∫[x0,x] y dx − a1(x − x0) y(x0) + a2 ∫[x0,x] ∫[x0,x] y dx dx    (4.19)



Now,

    ∫[x0,x] ∫[x0,x] y dx dx = ∫[x0,x] (x − t) y(t) dt

Hence, Eq. (4.19) becomes

    y(x) − y(x0) − (x − x0) y′(x0) = a1 ∫[x0,x] y dx − a1(x − x0) y(x0) + a2 ∫[x0,x] (x − t) y(t) dt    (4.20)

In Eq. (4.20), y′(x0) is eliminated in the following way. Let x1 and x2 be two data points such that

    x1 − x0 = −(x2 − x0)    (4.21)

Then Eq. (4.20) gives

    y(x1) − y(x0) − (x1 − x0) y′(x0) = a1 ∫[x0,x1] y dx − a1(x1 − x0) y(x0) + a2 ∫[x0,x1] (x1 − t) y(t) dt    (4.22)

and

    y(x2) − y(x0) − (x2 − x0) y′(x0) = a1 ∫[x0,x2] y dx − a1(x2 − x0) y(x0) + a2 ∫[x0,x2] (x2 − t) y(t) dt    (4.23)

Adding Eqs. (4.22) and (4.23) and using Eq. (4.21), we obtain

    y(x1) + y(x2) − 2y(x0) = a1 [∫[x0,x1] y dx + ∫[x0,x2] y dx] + a2 [∫[x0,x1] (x1 − t) y(t) dt + ∫[x0,x2] (x2 − t) y(t) dt]    (4.24)
Writing Eq. (4.24) for different choices of x0, x1 and x2 yields linear equations from which a1 and a2 are determined. The exponents λ1 and λ2 are then obtained as the roots of

    λ² − a1λ − a2 = 0    (4.25)

Finally, A1 and A2 can be obtained by the method of least squares or by the method of averages.

Example 4.8 Fit a function of the form

    y = A1 e^(λ1 x) + A2 e^(λ2 x)    (i)

to the data defined by (xi, yi):

    (1, 1.54), (1.1, 1.67), (1.2, 1.81), (1.3, 1.97), (1.4, 2.15),
    (1.5, 2.35), (1.6, 2.58), (1.7, 2.83), (1.8, 3.11).

Let x0 = 1.2, x1 = 1.0, x2 = 1.4. Then, Eq. (4.24) gives

    y(1.0) + y(1.4) − 2y(1.2) = a1 [∫[1.2,1.0] y dx + ∫[1.2,1.4] y dx] + a2 [∫[1.2,1.0] (1.0 − t) y dt + ∫[1.2,1.4] (1.4 − t) y dt]

Evaluating the integrals by Simpson's rule and simplifying, the above equation becomes

    1.81a1 + 2.180a2 = 2.10    (ii)

Again, choosing x0 = 1.6, x1 = 1.4 and x2 = 1.8, and evaluating the integrals as before, we obtain the equation

    2.88a1 + 3.104a2 = 3.00    (iii)

Solving Eqs. (ii) and (iii), we get

    a1 = 0.03204 and a2 = 0.9364.

Equation (4.25) now gives

    λ² − 0.03204λ − 0.9364 = 0,

from which we obtain

    λ1 = 0.988 ≈ 0.99 and λ2 = −0.96.

Using the method of least squares, we finally obtain

    A1 = 0.499 and A2 = 0.491.

The above data was actually constructed from the function y = cosh x, so that A1 = A2 = 0.5, λ1 = 1.0 and λ2 = −1.0.
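The back half of the procedure can be sketched as follows, assuming a1 and a2 have already been found as above: solve the quadratic (4.25) for λ1 and λ2, then determine A1 and A2 by linear least squares.

```python
import numpy as np

# Data of Example 4.8 (constructed from y = cosh x)
x = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8])
y = np.array([1.54, 1.67, 1.81, 1.97, 2.15, 2.35, 2.58, 2.83, 3.11])

a1, a2 = 0.03204, 0.9364          # from Eqs. (ii) and (iii)

# Roots of lambda^2 - a1*lambda - a2 = 0, Eq. (4.25)
lam1, lam2 = sorted(np.roots([1.0, -a1, -a2]), reverse=True)
print(round(lam1, 2), round(lam2, 2))    # roughly 0.98 and -0.95

# Linear least squares for A1, A2 in y = A1*e^(lam1*x) + A2*e^(lam2*x)
E = np.column_stack([np.exp(lam1 * x), np.exp(lam2 * x)])
(A1, A2), *_ = np.linalg.lstsq(E, y, rcond=None)
print(round(A1, 2), round(A2, 2))        # both near 0.5, as expected for cosh x
```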

4.3 WEIGHTED LEAST SQUARES APPROXIMATION

In the previous section, we minimized the sum of the squares of the errors. A more general approach is to minimize the weighted sum of the squares of the errors taken over all data points. If this sum is denoted by S, then instead of Eq. (4.2), we have

    S = W1[y1 − f(x1)]² + W2[y2 − f(x2)]² + ... + Wm[ym − f(xm)]²
      = W1e1² + W2e2² + ... + Wmem²    (4.26)

In Eq. (4.26), the Wi are prescribed positive numbers and are called weights. A weight is prescribed according to the relative accuracy of a data point. If all the data points are accurate, we set Wi = 1 for all i. We again consider the linear and nonlinear cases below.

4.3.1 Linear Weighted Least Squares Approximation

Let Y = a0 + a1x be the straight line to be fitted to the given data points, viz. (x1, y1), ..., (xm, ym). Then

    S(a0, a1) = Σ Wi[yi − (a0 + a1xi)]²    (4.27)

For maxima or minima, we have

    ∂S/∂a0 = ∂S/∂a1 = 0,    (4.28)

which give

    ∂S/∂a0 = −2 Σ Wi[yi − (a0 + a1xi)] = 0    (4.29)

and

    ∂S/∂a1 = −2 Σ Wixi[yi − (a0 + a1xi)] = 0.    (4.30)

Simplification yields the system of equations for a0 and a1:

    a0 ΣWi + a1 ΣWixi = ΣWiyi    (4.31)

and

    a0 ΣWixi + a1 ΣWixi² = ΣWixiyi,    (4.32)
which are the normal equations in this case and are solved to obtain a0 and a1. We consider Example 4.2 again to illustrate the use of weights.

Example 4.9 Suppose that in the data of Example 4.2, the point (5, 12) is known to be more reliable than the others. Then we prescribe a weight (say, 10) corresponding to this point, and all other weights are taken as unity. The following table is then obtained.

    x     y     W    Wx    Wx²    Wy    Wxy
    0    −1     1     0      0    −1      0
    2     5     1     2      4     5     10
    5    12    10    50    250   120    600
    7    20     1     7     49    20    140
    14   36    13    59    303   144    750
The normal Eqs. (4.31) and (4.32) then give

    13a0 + 59a1 = 144    (i)
    59a0 + 303a1 = 750.    (ii)

Solution to Eqs. (i) and (ii) gives

    a0 = −1.349345 and a1 = 2.73799.

The linear least squares approximation is, therefore, given by

    y = −1.349345 + 2.73799x.

Example 4.10 We consider Example 4.9 again with an increased weight, say 100, corresponding to y(5.0). The following table is then obtained.

    x     y      W    Wx    Wx²    Wy    Wxy
    0    −1      1     0      0    −1      0
    2     5      1     2      4     5     10
    5    12    100   500   2500  1200   6000
    7    20      1     7     49    20    140
    14   36    103   509   2553  1224   6150

The normal equations in this case are

    103a0 + 509a1 = 1224
    509a0 + 2553a1 = 6150.
Solving the preceding equations, we obtain

    a0 = −1.41258 and a1 = 2.69056.

The required linear least squares approximation is therefore given by

    y = −1.41258 + 2.69056x,

and the value of y(5) = 12.0402. It follows that the approximation becomes better when the weight is increased.
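Examples 4.9 and 4.10 can be reproduced by building the weighted normal equations (4.31)–(4.32) directly (a NumPy sketch):

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Solve the weighted normal equations (4.31)-(4.32) for a0, a1."""
    A = np.array([[w.sum(), (w * x).sum()],
                  [(w * x).sum(), (w * x**2).sum()]])
    rhs = np.array([(w * y).sum(), (w * x * y).sum()])
    return np.linalg.solve(A, rhs)

x = np.array([0, 2, 5, 7], dtype=float)
y = np.array([-1, 5, 12, 20], dtype=float)

for w5 in (10.0, 100.0):                 # weight on the point (5, 12)
    w = np.array([1.0, 1.0, w5, 1.0])
    a0, a1 = weighted_line_fit(x, y, w)
    print(w5, round(a0, 5), round(a1, 5), round(a0 + 5 * a1, 4))
```

With weight 10 the fitted line gives y(5) ≈ 12.34; with weight 100 it gives y(5) ≈ 12.04, closer to the observed value 12.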

4.3.2 Nonlinear Weighted Least Squares Approximation

We now consider the least squares approximation of a set of m data points (xi, yi), i = 1, 2, ..., m, by a polynomial of degree n < m. Let

    y = a0 + a1x + a2x² + ... + an x^n    (4.33)

be fitted to the given data points. We then have

    S(a0, a1, ..., an) = Σ Wi[yi − (a0 + a1xi + ... + an xi^n)]².    (4.34)

If a minimum occurs at (a0, a1, ..., an), then we have

    ∂S/∂a0 = ∂S/∂a1 = ... = ∂S/∂an = 0.    (4.35)

These conditions yield the normal equations

    a0 ΣWi + a1 ΣWixi + ... + an ΣWixi^n = ΣWiyi
    a0 ΣWixi + a1 ΣWixi² + ... + an ΣWixi^(n+1) = ΣWixiyi    (4.36)
    ...
    a0 ΣWixi^n + a1 ΣWixi^(n+1) + ... + an ΣWixi^(2n) = ΣWixi^n yi

Equations (4.36) are (n + 1) equations in (n + 1) unknowns a0, a1, ..., an. If the xi are distinct with n < m, then the equations possess a unique solution.

4.4 METHOD OF LEAST SQUARES FOR CONTINUOUS FUNCTIONS

Let y(x) be a given function to be approximated on [a, b], with W(x) a prescribed positive weight function, and let the polynomial

    Y(x) = a0 + a1x + a2x² + ... + an x^n    (4.37)

be chosen to minimize

    S = ∫[a,b] W(x)[y(x) − (a0 + a1x + a2x² + ... + an x^n)]² dx    (4.38)

The necessary conditions for a minimum are given by

    ∂S/∂a0 = ∂S/∂a1 = ... = ∂S/∂an = 0,    (4.39)

which yield

    −2 ∫[a,b] W(x)[y(x) − (a0 + a1x + a2x² + ... + an x^n)] dx = 0

    −2 ∫[a,b] W(x)[y(x) − (a0 + a1x + a2x² + ... + an x^n)] x dx = 0

    −2 ∫[a,b] W(x)[y(x) − (a0 + a1x + a2x² + ... + an x^n)] x² dx = 0    (4.40)

    ...

    −2 ∫[a,b] W(x)[y(x) − (a0 + a1x + a2x² + ... + an x^n)] x^n dx = 0.

Rearrangement of terms in Eq. (4.40) gives the system

    a0 ∫[a,b] W(x) dx + a1 ∫[a,b] xW(x) dx + ... + an ∫[a,b] x^n W(x) dx = ∫[a,b] W(x)y(x) dx

    ...    (4.41)

    a0 ∫[a,b] x^n W(x) dx + a1 ∫[a,b] x^(n+1) W(x) dx + ... + an ∫[a,b] x^(2n) W(x) dx = ∫[a,b] x^n W(x)y(x) dx

The system in Eq. (4.41) comprises (n + 1) normal equations in (n + 1) unknowns, viz. a0, a1, a2, ..., an, and they always possess a unique solution.