ESO208 Assignment

The document provides solutions to tutorial questions on numerical methods. One tutorial addresses three questions on finding roots of equations: it uses five root-finding methods - bisection, regula falsi, fixed point, Newton-Raphson, and secant - to find a root of sin(x) - (x/2)^2 = 0, calculating the true and approximate relative errors at each iteration and plotting both (on a log scale) against iteration number; it then uses Mueller's method to find a root of x^4 - 2x^3 - 53x^2 + 54x + 504 = 0 to within 0.1% error, and Bairstow's method to find all roots of the same polynomial.


Tutorial 1

1. The computation of the expression

   f(x) = (sqrt(1 + 8x^2) - 1) / 2

involves the difference of small numbers when x << 1. Obtain the value of f(x) for x = 0.002, and also the relative error (True Value is 0.7999936001023980x10^-5), performing operations by rounding all mantissas to six decimals: (a) using the expression above, (b) employing a Taylor's series expansion and using the first three terms, and (c) using the equivalent expression f(x) = 4x^2 / (sqrt(1 + 8x^2) + 1). For case (a), perform a backward error analysis to find the relative error in x required to make the computed result exact.

2. On a plot of land, which is in the shape of a right-angled triangle, the two perpendicular sides were measured as a = 300.0 ± 0.1 m and b = 400.0 ± 0.1 m. How accurately is it possible to estimate the hypotenuse c?

3. The following set of equations is to be solved to get the value of x for a given δ. For what values of δ will this problem be well-conditioned?

   x + y = 2
   x + (1 - δ)y = 1
Solution for Tutorial 1
1. (a) For x = 0.002: 1 + 8x^2 = 0.100003x10^1, sqrt(1 + 8x^2) = 0.100001x10^1, and f(x) = 0.500000x10^-5. True Value is 0.7999936001023980x10^-5. Relative error = 37.5%.

Note: Depending on how the computations are done before rounding-off, it is possible that the rounding-off of the square-root term results in 0.100002x10^1.

(b) f'(x) = 4x / sqrt(1 + 8x^2); f''(x) = 4 / sqrt(1 + 8x^2) - 32x^2 / (1 + 8x^2)^(3/2). Expanding about 0: f(x) = 0 + x(0) + (x^2/2)(4) = 2x^2. For x = 0.002, f(x) = 0.800000x10^-5. Relative error = -8x10^-4 %.

(c) For x = 0.002: 1 + 8x^2 = 0.100003x10^1, sqrt(1 + 8x^2) = 0.100001x10^1, sqrt(1 + 8x^2) + 1 = 0.200001x10^1 and f(x) = 0.799996x10^-5. Relative error = -3x10^-4 %.

For case (a), for 0.500000x10^-5 to be the exact result, we need x = sqrt(((1 + 2x0.500000x10^-5)^2 - 1) / 8) = 0.00158114 (the given value is 0.002). Relative error = 20.9%.

2. c = sqrt(a^2 + b^2); dc/da = a / sqrt(a^2 + b^2); dc/db = b / sqrt(a^2 + b^2). For the given values, a = 300 m, b = 400 m: dc/da = 0.6, dc/db = 0.8. Using the first-order approximation (Δa = Δb = ±0.1 m):

   Δc = (dc/da)Δa + (dc/db)Δb = ±0.14 m

(You may want to mention the second-order approximation, but it is not really required.)
3. Solving for x, we get x = f(δ) = (2δ - 1)/δ. The condition number of the problem is

   C_P = |δ f'(δ) / f(δ)| = |δ (1/δ^2) / ((2δ - 1)/δ)| = |1 / (2δ - 1)|.

Well conditioned for δ > 1 or δ < 0.

(You may want to show the values of x for δ = 2 and 2.02, and for δ = 0.6 and 0.606, to illustrate the relative change in x for a 1% change in δ.)
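A minimal Python sketch (not part of the original solution; function names are mine) of the suggested illustration: compare the actual relative change in x for a 1% change in δ against the condition number C_P.

```python
# Illustration of conditioning for x = (2*delta - 1)/delta.
# A 1% change in delta changes x by roughly C_P percent.

def x_of(delta):
    """Solution of the 2x2 system: x = (2*delta - 1)/delta."""
    return (2.0 * delta - 1.0) / delta

def relative_change(delta, pct=1.0):
    """Percent change in x for a pct% change in delta."""
    d2 = delta * (1.0 + pct / 100.0)
    return abs((x_of(d2) - x_of(delta)) / x_of(delta)) * 100.0

def cond(delta):
    """Condition number C_P = |delta*f'(delta)/f(delta)| = |1/(2*delta - 1)|."""
    return abs(1.0 / (2.0 * delta - 1.0))

# delta = 2 is well conditioned (C_P = 1/3); delta = 0.6 is not (C_P = 5)
for d in (2.0, 0.6):
    print(d, cond(d), relative_change(d))
```

For δ = 2 the 1% perturbation changes x by about 0.33% (matching C_P = 1/3); for δ = 0.6 it changes x by about 5%.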
Tutorial 2

1. Find a root (one root is, obviously, x = 0) of the equation: f(x) = sin x – (x/2)2 =
0 using Bisection method, Regula-Falsi method, Fixed Point method, Newton-
Raphson method and Secant method. In each case, calculate true relative error
and approximate relative error at each iteration (the true root may be taken as
1.933753762827021). Plot both of these errors (on log scale) vs. iteration
number for each of the methods. Terminate the iterations when the approximate
relative error is less than 0.01 %. Use starting points for Bisection, Regula-
Falsi and Secant methods as x = 1 and x= 2 and for Fixed Point and Newton
methods, x = 1.5.

2. Find a root of the following equation using Mueller’s method to an


approximate error of εr ≤ 0.1%:

x^4 - 2x^3 - 53x^2 + 54x + 504 = 0

Take the three starting values as 1, 2, and 3.

3. Find all the roots of the above polynomial using Bairstow’s method with εr ≤
0.1%. Use the starting guess as α0=2 and α1=2.
Tutorial 2 Solution
Question1
True Value = 1.933753762827021
er = |(True Value - x(i)) / True Value| x 100 %
εr = |(x(i) - x(i-1)) / x(i)| x 100 %
Bisection Method
Iteration 1: x = (1+2)/2 = 1.5; f(1.5) = 0.434994987, so the root lies between 1.5 and 2, and x for the next iteration is (1.5+2)/2 = 1.75.
Calculate er and εr with the formulas given above. Note that in the figures, et and ea are used for the true and approximate relative errors.
Iteration      x          sin x - (x/2)^2    er (%)      εr (%)
  -            1           0.591470985
  -            2          -0.090702573
  1            1.5         0.434994987      22.43066
  2            1.75        0.218360947       9.502439   14.28571
  3            1.875       0.075179532       3.038327    6.666667
  4            1.9375     -0.004962282       0.193729    3.225806
  5            1.90625     0.035813793       1.422299    1.639344
  6            1.921875    0.015601413       0.614285    0.813008
  7            1.929688    0.005363397       0.210278    0.404858
  8            1.933594    0.000211505       0.008275    0.20202
  9            1.935547   -0.002372653       0.092727    0.100908
 10            1.93457    -0.00107989        0.042226    0.05048
 11            1.934082   -0.000434021       0.016976    0.025246
 12            1.933838   -0.000111215       0.00435     0.012625
 13            1.933716    5.01558E-05       0.001962    0.006313
[Figure: Error in Bisection Method: et (%) and ea (%) vs. iteration number, log scale]
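A minimal Python sketch (not part of the original solution; function names are mine) reproducing the bisection run tabulated above:

```python
import math

# Bisection: halve the bracket [xl, xu] keeping the sign change, and stop
# when the approximate relative error drops below 0.01 %.

def f(x):
    return math.sin(x) - (x / 2.0) ** 2

def bisection(f, xl, xu, tol_pct=0.01, max_iter=100):
    xr_old = xl
    for i in range(1, max_iter + 1):
        xr = 0.5 * (xl + xu)
        ea = abs((xr - xr_old) / xr) * 100.0   # approximate relative error (%)
        if f(xl) * f(xr) < 0.0:
            xu = xr                            # sign change in the lower half
        else:
            xl = xr                            # sign change in the upper half
        if i > 1 and ea < tol_pct:
            return xr, i
        xr_old = xr
    return xr, max_iter

root, iters = bisection(f, 1.0, 2.0)
print(root, iters)  # converges near the true root 1.933753762827021
```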

Secant Method
Iteration 1: x(3)=x(2)-((x(2)-x(1))/(f2-f1))*(f2)=2-((2-1)/( -0.090702573-0.591470985))*( -0.090702573)=
1.867039 and so on.

Iteration x sinx-(x/2)^2 er (%) εr (%)


1 0.591470985
2 -0.090702573
1 1.867039 0.084981622 3.450020524
2 1.931355 0.003167407 0.124069284 3.330083
3 1.933845 -0.000119988 0.004693665 0.128757
4 1.933754 1.56453E-07 6.12036E-06 0.0047

[Figure: Error in Secant Method: et (%) and ea (%) vs. iteration number, log scale]
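The secant update above can be sketched as follows (an assumed implementation, not the original code):

```python
import math

# Secant method for f(x) = sin(x) - (x/2)^2, starting from x0 = 1, x1 = 2,
# stopping when the approximate relative error falls below 0.01 %.

def f(x):
    return math.sin(x) - (x / 2.0) ** 2

def secant(f, x0, x1, tol_pct=0.01, max_iter=50):
    for i in range(1, max_iter + 1):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        ea = abs((x2 - x1) / x2) * 100.0
        if ea < tol_pct:
            return x2, i
        x0, x1 = x1, x2
    return x2, max_iter

root, iters = secant(f, 1.0, 2.0)
```

It reproduces the four iterations of the table (1.867039, 1.931355, 1.933845, 1.933754).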
False Position
Iteration 1: x(3)=2-((1-2)/( 0.591470985-(-0.090702573)))*( -0.090702573)= 1.867039
Iteration 2: x(4)=2-((1.867039 -2)/( 0.084981622-(-0.090702573)))*( -0.090702573)= 1.931355 and so on.

Iteration x sinx-(x/2)^2 er (%) εr (%)


1 0.591470985
2 -0.090702573
1 1.867039 0.084981622 3.450020524
2 1.931355 0.003167407 0.124069284 3.330083
3 1.933671 0.000109618 0.004288389 0.119786
4 1.933751 3.78371E-06 0.000148017 0.00414

[Figure: Error in Regula-Falsi: et (%) and ea (%) vs. iteration number, log scale]

Fixed Point
Iteration 1: x_new = g(x) = 2*sqrt(sin(x)) = 2*sqrt(sin(1.5)) = 1.997493416, and so on…
Iteration x 2sqrt(sinx) er (%) εr (%)
1.5 1.997493416
1 1.997493 1.908232351 3.296162 24.90589
2 1.908232 1.942788325 1.319786 4.677683
3 1.942788 1.930393907 0.467203 1.778679
4 1.930394 1.934981664 0.173748 0.642067
5 1.934982 1.933302092 0.063498 0.237096
6 1.933302 1.933919512 0.023357 0.086876
7 1.93392 1.933692885 0.008571 0.031926
8 1.933693 1.933776116 0.003148 0.01172
9 1.933776 1.933745555 0.001156 0.004304

[Figure: Error in Fixed Point Method: et (%) and ea (%) vs. iteration number, log scale]
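The fixed-point iteration x = 2*sqrt(sin(x)) can be sketched as (an assumed implementation):

```python
import math

# Fixed-point iteration: rearrange sin(x) - (x/2)^2 = 0 as x = 2*sqrt(sin(x))
# and iterate from x = 1.5 until the approximate relative error is below 0.01 %.

def g(x):
    return 2.0 * math.sqrt(math.sin(x))

def fixed_point(g, x, tol_pct=0.01, max_iter=100):
    for i in range(1, max_iter + 1):
        x_new = g(x)
        ea = abs((x_new - x) / x_new) * 100.0
        if ea < tol_pct:
            return x_new, i
        x = x_new
    return x, max_iter

root, iters = fixed_point(g, 1.5)
```

It converges in 9 iterations, matching the table above.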

Newton Raphson
f(x)= sinx-(x/2)^2, f’(x)=cos(x)-x/2
Iteration 1: x(2)=x(1)-(f1)/(cos(x(1))-x(1)/2)=1.5-(0.434994987)/(cos(1.5)-1.5/2)= 2.140393 and so on..

Iteration x sinx-(x/2)^2 er (%) εr (%)


1.5 0.434994987
1 2.140393 -0.303201628 10.68590084 29.9194
2 1.952009 -0.024370564 0.944028342 9.650767
3 1.933931 -0.000233752 0.009143414 0.934799
4 1.933754 -2.24233E-08 8.77191E-07 0.009143

[Figure: Error in Newton-Raphson: et (%) and ea (%) vs. iteration number, log scale]
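The Newton-Raphson step above, as a short sketch (assumed implementation):

```python
import math

# Newton-Raphson for f(x) = sin(x) - (x/2)^2 with f'(x) = cos(x) - x/2,
# starting from x = 1.5.

def newton(x, tol_pct=0.01, max_iter=50):
    for i in range(1, max_iter + 1):
        fx = math.sin(x) - (x / 2.0) ** 2
        dfx = math.cos(x) - x / 2.0
        x_new = x - fx / dfx
        ea = abs((x_new - x) / x_new) * 100.0
        if ea < tol_pct:
            return x_new, i
        x = x_new
    return x, max_iter

root, iters = newton(1.5)
```

Four iterations suffice, as in the table.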

Question 2
Polynomial coefficients: c4 = 1, c3 = -2, c2 = -53, c1 = 54, c0 = 504

a = [ (fi - fi-1)/(x(i) - x(i-1)) - (fi-1 - fi-2)/(x(i-1) - x(i-2)) ] / (x(i) - x(i-2))
b = (fi - fi-1)/(x(i) - x(i-1)) + a (x(i) - x(i-1))
c = fi
Δx(i) = -2c / (b + Sign(b) sqrt(b^2 - 4ac))

Note: Δx2 in the table below shows the other root of the quadratic equation, i.e., with a negative sign in the denominator.

i   x(i)      f         a        b        c        Δx       Δx2       x(i+1)    εr (%)
-   1         504
-   2         400
0   3         216      -40      -224     216      0.8387   -6.439     3.83868
1   3.83868   34.3133  -17.75   -231.5   34.313   0.1466  -13.19      3.98524   3.67764
2   3.98524   3.10289    3.7396 -212.4    3.1029  0.0146   56.783     3.99986   0.36532
3   3.99986   0.0302    16.562  -210      0.0302  0.0001   12.682     4         0.00359

Therefore, one of the roots is 4.
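Mueller's method as tabulated above can be sketched as follows (assumed implementation; for this run the discriminant stays positive, so real arithmetic is used throughout):

```python
import math

# Mueller's method for p(x) = x^4 - 2x^3 - 53x^2 + 54x + 504,
# starting from x = 1, 2, 3, to an approximate error below 0.1 %.

def p(x):
    return x**4 - 2*x**3 - 53*x**2 + 54*x + 504

def muller(f, x0, x1, x2, tol_pct=0.1, max_iter=50):
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h0 = (f1 - f0) / (x1 - x0)         # divided differences of the
        h1 = (f2 - f1) / (x2 - x1)         # interpolating quadratic
        a = (h1 - h0) / (x2 - x0)
        b = h1 + a * (x2 - x1)
        c = f2
        disc = math.sqrt(b * b - 4.0 * a * c)       # assumed real here
        denom = b + disc if b >= 0 else b - disc    # larger-magnitude root
        x3 = x2 + (-2.0 * c) / denom
        if abs((x3 - x2) / x3) * 100.0 < tol_pct:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

root = muller(p, 1.0, 2.0, 3.0)
```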

Question 3
Initial α0 = 2, α1 = 2

d[n] = c[n]; d[n-1] = c[n-1] + α1 d[n]; d[j] = c[j] + α1 d[j+1] + α0 d[j+2] for j = n-2 to 0
δ[n-1] = d[n]; δ[n-2] = d[n-1] + α1 δ[n-1]; δ[j] = d[j+1] + α1 δ[j+1] + α0 δ[j+2] for j = n-3 to 0

Δα0 and Δα1 are obtained by solving the simultaneous linear equations:
δ1 Δα0 + δ0 Δα1 = -d0
δ2 Δα0 + δ1 Δα1 = -d1

Finally, the roots are obtained from: r1,2 = 0.5 (α1 ± sqrt(α1^2 + 4α0))

The relative error is computed as the maximum of the relative errors in α0 and α1.

For iteration 1:
-45 Δα0 - 134 Δα1 = -306
  2 Δα0 -  45 Δα1 =   48
⇒ Δα0 = 8.8103, Δα1 = -0.6751  (εr = Max(8.81/10.81, 0.6751/1.3249) x 100% = 81.5%)

j    c      d      δ
0   504    306   -134
1    54    -48    -45
2   -53    -51      2
3    -2      0      1
4     1      1

∆α0 8.810292 ∆α1 -0.6751 εr (%)


α0new 10.81029 α1new 1.324902 81.50

c d δ
0 504 24.494941 -44.9748
1 54 -10.38027 -31.4129
2 -53 -43.08415 0.649804
3 -2 -0.675098 1
4 1 1

∆α0 1.216843 ∆α1 -0.30527 εr (%)


α0new 12.02713 α1new 1.019627 29.94

c d δ
0 504 -1.407564 -30.6075
1 54 -0.587363 -29.9053
2 -53 -41.97248 0.039255
3 -2 -0.980373 1
4 1 1

∆α0 -0.026929 ∆α1 -0.01968 εr (%)


α0new 12.00021 α1new 0.999951 1.97

j    c      d           δ
0   504    -0.004703   -29.9979
1    54     0.0014621  -29.9997
2   -53    -41.999794   -9.7E-05
3    -2    -1.000049     1
4     1     1

∆α0 -0.000206 ∆α1 4.87E-05 εr (%)


α0new 12.000000 α1new 1.000000 0.0049

Roots (from x^2 - α1 x - α0 with α0 = 12, α1 = 1): 4 and -3

Reduced polynomial: x^2 - 1.000049x - 41.999794
Roots: 7.000010 and -5.999962
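Bairstow's recurrences above can be sketched as follows (assumed implementation; `bairstow_step` and the variable names are mine, with `e` playing the role of δ):

```python
import math

# Bairstow: divide p(x) = sum c_j x^j by x^2 - a1*x - a0 and update (a0, a1)
# by Newton's method until both corrections are negligible.

def bairstow_step(c, a0, a1):
    n = len(c) - 1
    d = [0.0] * (n + 1)
    d[n] = c[n]
    d[n - 1] = c[n - 1] + a1 * d[n]
    for j in range(n - 2, -1, -1):
        d[j] = c[j] + a1 * d[j + 1] + a0 * d[j + 2]
    e = [0.0] * (n + 1)                 # the delta recurrence (derivatives)
    e[n - 1] = d[n]
    e[n - 2] = d[n - 1] + a1 * e[n - 1]
    for j in range(n - 3, -1, -1):
        e[j] = d[j + 1] + a1 * e[j + 1] + a0 * e[j + 2]
    # solve: e1*da0 + e0*da1 = -d0 ; e2*da0 + e1*da1 = -d1 (Cramer's rule)
    det = e[1] * e[1] - e[0] * e[2]
    da0 = (-d[0] * e[1] + d[1] * e[0]) / det
    da1 = (-d[1] * e[1] + d[0] * e[2]) / det
    return d, da0, da1

def bairstow(c, a0, a1, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        d, da0, da1 = bairstow_step(c, a0, a1)
        a0, a1 = a0 + da0, a1 + da1
        if abs(da0) < tol and abs(da1) < tol:
            break
    r1 = 0.5 * (a1 + math.sqrt(a1 * a1 + 4.0 * a0))
    r2 = 0.5 * (a1 - math.sqrt(a1 * a1 + 4.0 * a0))
    quotient = d[2:]          # deflated polynomial, ascending coefficients
    return (r1, r2), quotient

c = [504.0, 54.0, -53.0, -2.0, 1.0]   # 504 + 54x - 53x^2 - 2x^3 + x^4
(r1, r2), q = bairstow(c, 2.0, 2.0)
```

The first step reproduces Δα0 = 8.8103, Δα1 = -0.6751; at convergence the quadratic gives roots 4 and -3, and the quotient x^2 - x - 42 carries the remaining roots 7 and -6.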
Tutorial 3

1. Solve the following system of equations by Gauss Elimination, Doolittle


method, Crout method and Cholesky decomposition:

[ 9.3746   3.0416  -2.4371 ] [x1]   [9.2333]
[ 3.0416   6.1832   1.2163 ] [x2] = [8.2049]
[-2.4371   1.2163   8.4429 ] [x3]   [3.9339]

2. Solve the following system of equations using Thomas algorithm:

[ 1  -1   0   0 ] [x1]   [0]
[-1   2  -1   0 ] [x2] = [1]
[ 0  -1   2  -1 ] [x3]   [2]
[ 0   0   0   1 ] [x4]   [1]

3. Consider the following set of equations:

[ 0.123   0.345   2.00  ] [x1]   [ 6.81]
[-2.34    0.789   1.98  ] [x2] = [ 5.17]
[ 12.3   -5.67   -0.678 ] [x3]   [-1.08]

a) Solve the system using Gaussian elimination, without pivoting, using 3-digit
floating-point arithmetic with round-off. Perform calculations more precisely
but round-off to 3 significant digits when storing a result, and use this rounded-
off value for further calculations.
b) Perform partial pivoting and carry out Gaussian elimination steps once again
using 3-digit floating-point arithmetic with round-off. Comment on the results.
Tutorial 3
Question 1
Gauss Elimination

Step 1:   9.3746   3.0416  -2.4371   9.2333
          3.0416   6.1832   1.2163   8.2049
         -2.4371   1.2163   8.4429   3.9339

9.3746 3.0416 -2.4371 9.2333


Step
0 5.196349 2.00702 5.209145
2
0 2.00702 7.809331 6.334266

9.3746 3.0416 -2.4371 9.2333


Step
0 5.196349 2.00702 5.209145
3
0 0 7.034147 4.322304

x= 0.896424 0.76513 0.614475

Doolittle method
9.3746 3.0416 -2.4371
A= 3.0416 6.1832 1.2163
-2.4371 1.2163 8.4429

b
1 0 0 9.2333
L= 0.324451 1 0 8.2049
-0.25997 0.386237 1 3.9339

y
9.3746 3.0416 -2.4371 9.2333
U= 0 5.196349 2.00702 5.209145
0 0 7.034147 4.322304

x= 0.896424 0.76513 0.614475

Crout Method
A =  9.3746   3.0416  -2.4371
     3.0416   6.1832   1.2163
    -2.4371   1.2163   8.4429

                                       b
L =  9.3746   0         0            9.2333
     3.0416   5.196349  0            8.2049
    -2.4371   2.00702   7.034147     3.9339

y
1 0.324451 -0.25997 0.984927
U= 0 1 0.386237 1.002462
0 0 1 0.614475
x= 0.896424 0.76513 0.614475

Cholesky Decomposition:
b
9.3746 3.0416 -2.4371 = 9.2333
A= 3.0416 6.1832 1.2163 = 8.2049
-2.4371 1.2163 8.4429 = 3.9339

Cholesky: L11[=sqrt(A11)],L21(=A21/L11),L22[=sqrt(A22-
L21^2)],L31(=A31/L11),L32[=(A32-L31*L21)/L22],L33[=sqrt(A33-L31^2-L32^2)]

3.061797 0 0 y1= 3.015647 b1/L11


L 0.993404 2.27955 0 y2= 2.285163 (b2-L21*y1)/L22
-0.79597 0.880446 2.652197 y3= 1.629707 (b3-L31*y1-L32*y2)/L33

3.061797 0.993404 -0.79597 x1= 0.896424 (y1-U12*x2-U13*x3)/U11


U 0 2.27955 0.880446 x2= 0.76513 (y2-U23*x3)/U22
0 0 2.652197 x3= 0.614475 y3/U33

Question 2
Thomas Algorithm
index    l     d     u    b   alpha  beta   x
1        -     1    -1    0     1     0     5
2       -1     2    -1    1     1     1     5
3       -1     2    -1    2     1     3     4
4        0     1     -    1     1     1     1
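The forward-elimination/back-substitution pass of the Thomas algorithm, as a short sketch (assumed implementation; `l`, `dd`, `u` are the sub-, main and super-diagonals):

```python
# Thomas algorithm for a tridiagonal system.

def thomas(l, dd, u, b):
    n = len(dd)
    alpha = dd[:]                 # modified diagonal
    beta = b[:]                   # modified right-hand side
    for i in range(1, n):
        m = l[i] / alpha[i - 1]   # elimination factor
        alpha[i] = alpha[i] - m * u[i - 1]
        beta[i] = beta[i] - m * beta[i - 1]
    x = [0.0] * n
    x[n - 1] = beta[n - 1] / alpha[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (beta[i] - u[i] * x[i + 1]) / alpha[i]
    return x

# system from Question 2 (first sub-diagonal / last super-diagonal entry unused)
x = thomas([0.0, -1.0, -1.0, 0.0],
           [1.0, 2.0, 2.0, 1.0],
           [-1.0, -1.0, -1.0, 0.0],
           [0.0, 1.0, 2.0, 1.0])
```

This reproduces the table: alpha = (1,1,1,1), beta = (0,1,3,1), x = (5,5,4,1).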

Question 3
Gauss Elimination
0.123 0.345 2.00 6.81
Step 1 -2.34 0.789 1.98 5.17
12.3 -5.67 -0.678 -1.08

0.123 0.345 2.00 6.81


Step 2 0 7.35 40.0 135
0 -40.2 -201 -682

0.123 0.345 2.00 6.81


Step 3 0 7.35 40.0 135
0 0 17.8 56.4

x= 0.680 1.12 3.17


Pivoting
12.3 -5.67 -0.678 -1.08
Step 1 -2.34 0.789 1.98 5.17
0.123 0.345 2.00 6.81

12.3 -5.67 -0.678 -1.08


Step 2 0 -0.290 1.85 4.96
0 0.402 2.01 6.82

12.3 -5.67 -0.678 -1.08


Step 3 0 0.402 2.01 6.82
0 -0.290 1.85 4.96

12.3 -5.67 -0.678 -1.08


Step 4 0 0.402 2.01 6.82
0 0 3.3 9.88

x= 1.01 2.02 2.99

Gauss Elimination: Exact Soln


0.123 0.345 2.00 6.81
Step
-2.34 0.789 1.98 5.17
1
12.3 -5.67 -0.678 -1.08

0.123 0.345 2 6.81


Step
0 7.352415 40.02878 134.7261
2
0 -40.17 -200.678 -682.08

0.123 0.345 2 6.81


Step
0 7.352415 40.02878 134.7261
3
0 0 18.01969 53.99755

x= 1.003813 2.009739 2.996586
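The 3-significant-digit experiment can be sketched as follows. This is one possible realization (the exact digits depend on where rounding is applied, as the tables show); the qualitative conclusion, that partial pivoting rescues the small-pivot case, is what the sketch checks.

```python
# Gaussian elimination with every stored result rounded to 3 significant
# digits, with and without partial pivoting.

def rnd3(v):
    """Round to 3 significant digits."""
    return 0.0 if v == 0.0 else float(f"{v:.3g}")

def gauss(A, b, pivot):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        if pivot:  # swap in the largest pivot in column k
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = rnd3(A[i][k] / A[k][k])
            for j in range(k, n):
                A[i][j] = rnd3(A[i][j] - rnd3(m * A[k][j]))
            b[i] = rnd3(b[i] - rnd3(m * b[k]))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s = rnd3(s - rnd3(A[i][j] * x[j]))
        x[i] = rnd3(s / A[i][i])
    return x

A = [[0.123, 0.345, 2.00], [-2.34, 0.789, 1.98], [12.3, -5.67, -0.678]]
b = [6.81, 5.17, -1.08]
x_no = gauss(A, b, pivot=False)   # pivot 0.123 amplifies round-off
x_pv = gauss(A, b, pivot=True)    # close to the exact solution
```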


Tutorial 4

1. Solve the following system of equations by Gauss Jacobi and Gauss Seidel methods, with εr ≤ 0.1%. Use a starting guess of (0,0,0) for both methods.

[ 9.3746   3.0416  -2.4371 ] [x1]   [9.2333]
[ 3.0416   6.1832   1.2163 ] [x2] = [8.2049]
[-2.4371   1.2163   8.4429 ] [x3]   [3.9339]

2. Solve the following equations using (a) fixed-point iteration and (b) Newton-
Raphson method, starting with an initial guess of x=1 and y=1 and εr ≤ 0.1%.

x^2 - x + y - 0.5 = 0
x^2 - 5xy - y = 0

3. Consider the following Matrix:


[ 7  -2   1 ]
[-2  10  -2 ]
[ 1  -2   7 ]
a) Find the largest eigenvalue and the corresponding eigenvector using the
Power method with εr ≤ 0.1%. Take the starting z vector as {1,0,0}T.
b) Obtain the equation of the characteristic polynomial using Fadeev-Leverrier
Method.
c) Perform two iterations of the QR algorithm and compute the approximate
eigenvalues of the matrix after this iteration.
Tutorial 4
Question 1
A x b
9.3746 3.0416 -2.4371 x1 9.2333
=
3.0416 6.1832 1.2163 x2 8.2049
-2.4371 1.2163 8.4429 x3 3.9339

Jacobi

i x1 x2 x3 εr (%)
0 0 0 0
1 0.984927 1.326967 0.465942 100
2 0.675522 0.750812 0.559082 76.73757
3 0.886669 0.884691 0.552772 23.81358
4 0.841592 0.782066 0.594435 13.12232
5 0.885719 0.796045 0.596207 4.982136
6 0.881645 0.773989 0.606931 2.849612
7 0.891589 0.773884 0.608932 1.115299
8 0.892143 0.768599 0.611818 0.687639
9 0.894608 0.767758 0.612739 0.275531
10 0.89512 0.766365 0.613572 0.181869
11 0.895789 0.765949 0.61392 0.074644

Gauss Seidel

i x1 x2 x3 εr (%)
0 0 0 0
1 0.984927 0.842467 0.62888 100
2 0.875077 0.772797 0.607208 12.55325
3 0.892047 0.768712 0.612695 1.902424
4 0.894799 0.766279 0.61384 0.317513
5 0.895886 0.765519 0.614263 0.121335
6 0.896243 0.765261 0.614403 0.039787
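The Gauss-Seidel sweep can be sketched as follows (assumed implementation; the Jacobi variant differs only in using the previous iterate throughout):

```python
# Gauss-Seidel for the 3x3 system above, starting from (0, 0, 0),
# stopping when the largest approximate relative error is below 0.1 %.

A = [[9.3746, 3.0416, -2.4371],
     [3.0416, 6.1832, 1.2163],
     [-2.4371, 1.2163, 8.4429]]
b = [9.2333, 8.2049, 3.9339]

def gauss_seidel(A, b, x, tol_pct=0.1, max_iter=100):
    n = len(b)
    for it in range(1, max_iter + 1):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # uses the newest values already in x
        err = max(abs((x[i] - x_old[i]) / x[i]) * 100.0 for i in range(n))
        if err < tol_pct:
            return x, it
    return x, max_iter

x, iters = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```

It converges in 6 iterations, matching the table.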
Question 2
Fixed Point Iteration

Note: x = φ1(x,y) = sqrt(x - y + 0.5) and y = φ2(x,y) = (x^2 - y)/(5x)

Iteration      x          φ1(x,y)       y           φ2(x,y)      εr (%)
1           1             0.707107    1            -0.141421
2           0.707107      1.161261   -0.141421      0.256609     807.107
3           1.161261      1.18518     0.256609      0.193733     155.112
4           1.18518       1.221248    0.193733      0.212523      32.455
5           1.221248      1.228302    0.212523      0.211056       8.841
6           1.228302      1.231765    0.211056      0.212084       0.695
7           1.231765      1.232753    0.212084      0.212142       0.485
8           1.232753      1.233131    0.212142      0.212219       0.080
9           1.233131                  0.212219

Newton Raphson

Note: f1 = x^2 - x + y - 0.5 and f2 = x^2 - 5xy - y. Derivatives: df1/dx = 2x - 1, df1/dy = 1; df2/dx = 2x - 5y, df2/dy = -5x - 1

Iteration      x         y          f1         f2         f1'x      f1'y      f2'x       f2'y
1           1.000000  1.000000   0.500000  -5.000000   1.000000  1.000000  -3.000000  -6.000000
2           1.666667 -0.166667   0.444444   4.333333   2.333333  1.000000   4.166667  -9.333333
3           1.339757  0.151677   0.106870   0.627218   1.679515  1.000000   1.921128  -7.698787
4           1.242124  0.208784   0.009532   0.037410   1.484248  1.000000   1.440328  -7.210621
5           1.233383  0.212226   0.000076   0.000227   1.466766  1.000000   1.405635  -7.166914

xnew ynew εr (%)


1.666667 -0.166667 700.000
1.339757 0.151677 209.882
1.242124 0.208784 27.352
1.233383 0.212226 1.622
1.233318 0.212245 0.009
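The two-equation Newton-Raphson step, solving the 2x2 Jacobian system by Cramer's rule, can be sketched as (assumed implementation):

```python
# Newton-Raphson for f1 = x^2 - x + y - 0.5 and f2 = x^2 - 5xy - y,
# with the analytic Jacobian noted above, from (x, y) = (1, 1).

def newton2(x, y, tol_pct=0.1, max_iter=50):
    for it in range(1, max_iter + 1):
        f1 = x * x - x + y - 0.5
        f2 = x * x - 5.0 * x * y - y
        j11, j12 = 2.0 * x - 1.0, 1.0                   # df1/dx, df1/dy
        j21, j22 = 2.0 * x - 5.0 * y, -5.0 * x - 1.0    # df2/dx, df2/dy
        det = j11 * j22 - j12 * j21
        dx = (-f1 * j22 + f2 * j12) / det   # Cramer's rule for J*(dx,dy) = -f
        dy = (-f2 * j11 + f1 * j21) / det
        x, y = x + dx, y + dy
        err = max(abs(dx / x), abs(dy / y)) * 100.0
        if err < tol_pct:
            return x, y, it
    return x, y, max_iter

x, y, it = newton2(1.0, 1.0)
```

Five iterations reproduce (1.233318, 0.212245), as tabulated.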
Question 3:
Part (a)
A
7 -2 1
-2 10 -2
1 -2 7

Power Method: Using L2 norm for normalization

k Normalized z(k) Az(k) λ εr (%)


0 1 7 7.348469
0 -2
0 1

1 0.952579344 7.348469 9.165151 24.72191


-0.272165527 -4.89898
0.136082763 2.44949

2 0.801783726 6.948792 10.87592 18.66606


-0.534522484 -7.48331
0.267261242 3.741657

3 0.638915143 6.192562 11.66936 7.295328


-0.688062462 -8.84652
0.344031231 4.423259

4 0.530668631 5.609926 11.91348 2.092003


-0.758098044 -9.40042
0.379049022 4.700208

5 0.470888855 5.268864 11.97811 0.542519


-0.789057 -9.6214
0.3945285 4.810702

6 0.439874292 5.087242 11.99451 0.136901


-0.803248707 -9.71548
0.401624354 4.857742

7 0.424130777 4.993901 11.99863 0.034305


-0.809994116 -9.7582
0.404997058 4.879098
Power Method: Using L∞ norm for normalization

k Normalized zk Azk λ εr (%)


0 1 7 7
0 -2
0 1

1 1 7.714286 7.714286 10.20408


-0.285714286 -5.14286
0.142857143 2.571429

2 1 8.666667 9.333333 20.98765


-0.666666667 -9.33333
0.333333333 4.666667

3 0.928571429 9 12.85714 37.7551


-1 -12.8571
0.5 6.428571

4 0.7 7.4 12.4 3.555556


-1 -12.4
0.5 6.2

5 0.596774194 6.677419 12.19355 1.664932


-1 -12.1935
0.5 6.096774

6 0.547619048 6.333333 12.09524 0.806248


-1 -12.0952
0.5 6.047619

7 0.523622047 6.165354 12.04724 0.396801


-1 -12.0472
0.5 6.023622

8 0.511764706 6.082353 12.02353 0.196847


-1 -12.0235
0.5 6.011765

9 0.505870841 6.041096 12.01174 0.098039


-1 -12.0117
0.5 6.005871
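A power-method sketch with infinity-norm-style normalization (assumed implementation; it normalizes by the signed largest-magnitude component, so early λ estimates may differ in sign from the table, while the converged value agrees):

```python
# Power method for the 3x3 matrix above, starting from z = (1, 0, 0),
# stopping when the eigenvalue estimate changes by less than 0.1 %.

A = [[7.0, -2.0, 1.0], [-2.0, 10.0, -2.0], [1.0, -2.0, 7.0]]

def matvec(A, z):
    return [sum(a * zi for a, zi in zip(row, z)) for row in A]

def power_method(A, z, tol_pct=0.1, max_iter=200):
    lam_old = None
    for it in range(1, max_iter + 1):
        w = matvec(A, z)
        lam = max(w, key=abs)           # signed largest-magnitude component
        z = [wi / lam for wi in w]      # normalize
        if lam_old is not None and abs((lam - lam_old) / lam) * 100.0 < tol_pct:
            return lam, z, it
        lam_old = lam
    return lam, z, max_iter

lam, z, iters = power_method(A, [1.0, 0.0, 0.0])
```

The estimate settles near the dominant eigenvalue 12, with eigenvector proportional to (1, -2, 1).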
Part (b)
Faddeev-LeVerrier Method: Characteristic Eqn: (-1)^n (λ^3 - a2 λ^2 - a1 λ - a0) = 0

Step 0: A2 = A; a2 = trace(A2) = 24 (the trace of a matrix is the sum of its diagonal elements)

Iteration step: Ai = A (Ai+1 - ai+1 I) and ai = trace(Ai)/(n - i), where i = 1, 0 (n = 3; I is the 3x3 identity matrix)

i = 1:  A2 - a2 I = [-17  -2   1 ]    A1 = [-114   12   -6 ]    a1 = -180
                    [ -2 -14  -2 ]         [  12 -132   12 ]
                    [  1  -2 -17 ]         [  -6   12 -114 ]

i = 0:  A1 - a1 I = [ 66  12  -6 ]    A0 = [ 432    0    0 ]    a0 = 432
                    [ 12  48  12 ]         [   0  432    0 ]
                    [ -6  12  66 ]         [   0    0  432 ]

Characteristic Eqn: -(λ^3 - 24λ^2 + 180λ - 432) = 0 (Eigenvalues: 6, 6, 12)
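The trace recursion can be sketched as (assumed implementation, using the sign convention of the solution, where the coefficients returned are a2, a1, a0):

```python
# Faddeev-LeVerrier: B_{n-1} = A, a_{n-1} = tr(B_{n-1}),
# B_i = A (B_{i+1} - a_{i+1} I), a_i = tr(B_i)/(n - i).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def faddeev_leverrier(A):
    n = len(A)
    B = [row[:] for row in A]
    a = trace(B)
    coeffs = [a]                            # a_{n-1}
    for i in range(n - 2, -1, -1):
        C = [[B[r][c] - (a if r == c else 0.0) for c in range(n)]
             for r in range(n)]
        B = matmul(A, C)
        a = trace(B) / (n - i)
        coeffs.append(a)
    return coeffs                           # [a_{n-1}, ..., a_0]

A = [[7.0, -2.0, 1.0], [-2.0, 10.0, -2.0], [1.0, -2.0, 7.0]]
coeffs = faddeev_leverrier(A)
```

This reproduces a2 = 24, a1 = -180, a0 = 432.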

Part (c) :
Note: the tutorial asks for only two iterations; the remaining iterations are shown to illustrate convergence and are not needed for the tutorial.

A
7 -2 1
-2 10 -2
1 -2 7

Iteration: Ak = Qk·Rk and Ak+1 = Rk·Qk; εr (%) is computed on the diagonal (eigenvalue) estimates.

Iteration 1 (A1 = A):

A1:   7.0000  -2.0000   1.0000    Q1:  0.9526   0.2910  -0.0891    R1:  7.3485  -4.8990   2.4495
     -2.0000  10.0000  -2.0000        -0.2722   0.9456   0.1782         0.0000   9.1652  -2.6186
      1.0000  -2.0000   7.0000         0.1361  -0.1455   0.9800         0.0000   0.0000   6.4143

Iteration 2, εr (%) = 19.82, -18.81, -4.98 (max 19.82):

A2:   8.6667  -2.8508   0.8729    Q2:  0.9456   0.3168  -0.0741    R2:  9.1652  -5.5988   1.7143
     -2.8508   9.0476  -0.9331        -0.3110   0.9471   0.0792         0.0000   7.7143  -0.9331
      0.8729  -0.9331   6.2857         0.0952  -0.0518   0.9941         0.0000   0.0000   6.1101

Iteration 3, εr (%) = 15.73, -17.07, -1.36 (max 17.07):

A3:  10.5714  -2.4884   0.5819    Q3:  0.9720   0.2299  -0.0487    R3: 10.8759  -4.1183   0.9631
     -2.4884   7.3545  -0.3168        -0.2288   0.9731   0.0265         0.0000   6.5894  -0.2633
      0.5819  -0.3168   6.0741         0.0535  -0.0146   0.9985         0.0000   0.0000   6.0280

Iteration 4, εr (%) = 6.80, -6.92, -0.35 (max 6.92):

A4:  11.5652  -1.5217   0.3225    Q4:  0.9911   0.1306  -0.0269    R4: 11.6694  -2.3473   0.4975
     -1.5217   6.4161  -0.0882        -0.1304   0.9914   0.0074         0.0000   6.1628  -0.0681
      0.3225  -0.0882   6.0187         0.0276  -0.0038   0.9996         0.0000   0.0000   6.0070

Iteration 5, εr (%) = 2.05, -2.00, -0.09 (max 2.05):

A5:  11.8851  -0.8055   0.1660    Q5:  0.9976   0.0676  -0.0138    R5: 11.9135  -1.2171   0.2508
     -0.8055   6.1103  -0.0227        -0.0676   0.9977   0.0019         0.0000   6.0418  -0.0172
      0.1660  -0.0227   6.0047         0.0139  -0.0010   0.9999         0.0000   0.0000   6.0018

Iteration 6, εr (%) = 0.54, -0.52, -0.02 (max 0.54):

A6:  11.9708  -0.4088   0.0836    Q6:  0.9994   0.0341  -0.0070    R6: 11.9781  -0.6143   0.1257
     -0.4088   6.0280  -0.0057        -0.0341   0.9994   0.0005         0.0000   6.0105  -0.0043
      0.0836  -0.0057   6.0012         0.0070  -0.0002   1.0000         0.0000   0.0000   6.0004

Iteration 7, εr (%) = 0.14, -0.13, -0.01 (max 0.14):

A7:  11.9927  -0.2051   0.0419    Q7:  0.9998   0.0171  -0.0035    R7: 11.9945  -0.3079   0.0629
     -0.2051   6.0070  -0.0014        -0.0171   0.9999   0.0001         0.0000   6.0026  -0.0011
      0.0419  -0.0014   6.0003         0.0035  -0.0001   1.0000         0.0000   0.0000   6.0001

Iteration 8, εr (%) = 0.03, -0.03, 0.00 (max 0.03):

A8:  11.9982  -0.1027   0.0210    Q8:  1.0000   0.0086  -0.0017    R8: 11.9986  -0.1540   0.0314
     -0.1027   6.0018  -0.0004        -0.0086   1.0000   0.0000         0.0000   6.0007  -0.0003
      0.0210  -0.0004   6.0001         0.0017   0.0000   1.0000         0.0000   0.0000   6.0000

Iteration 9:

A9:  11.9995  -0.0513   0.0105
     -0.0513   6.0004  -0.0001
      0.0105  -0.0001   6.0000

(Eigenvalues: 6.0000, 6.0004, 11.9995)
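The QR iteration can be sketched via classical Gram-Schmidt (assumed implementation; a production code would use Householder reflections instead):

```python
import math

# QR iteration: factor A_k = Q_k R_k (Gram-Schmidt), form A_{k+1} = R_k Q_k,
# and read the eigenvalue estimates off the diagonal.

def qr_gram_schmidt(A):
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]   # columns of A
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            R[i][j] = sum(Q[i][k] * cols[j][k] for k in range(n))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, Q[i])]
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))
        Q.append([vk / R[j][j] for vk in v])    # Q stored column-by-column
    return Q, R

def qr_iteration(A, iters):
    n = len(A)
    for _ in range(iters):
        Q, R = qr_gram_schmidt(A)
        # A_next = R * Q, with Q[j][k] holding the (k, j) entry of Q
        A = [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return A

A = [[7.0, -2.0, 1.0], [-2.0, 10.0, -2.0], [1.0, -2.0, 7.0]]
A9 = qr_iteration(A, 9)
eigs = sorted(A9[i][i] for i in range(3))
```

After 9 iterations the diagonal matches the tabulated values (6.0000, 6.0004, 11.9995).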


Tutorial 5

1. Approximate the function f(t) = e^t in the interval [-1, 3].


(a) Use a second-degree polynomial using both the conventional form of polynomials and the
Legendre polynomials. Then, use a third-degree polynomial and comment on the additional
computational effort required in both the methods.
(b) Obtain the second-degree Tchebycheff fit and compare the error with second-degree
Legendre fit.

The Legendre Polynomials are: P0(x)=1; P1(x) = x; P2(x) = (−1+3x2)/2; P3(x) = (−3x+5x3)/2

The Tchebycheff Polynomials are: T0(x)=1; T1(x) = x; T2(x) = −1+2x2

Following integrals are useful:

∫ x e^(2x+1) dx = e^(2x+1)(2x - 1)/4;   ∫ x^2 e^(2x+1) dx = e^(2x+1)(2x^2 - 2x + 1)/4;
∫ x^3 e^(2x+1) dx = e^(2x+1)(4x^3 - 6x^2 + 6x - 3)/8

∫[-1,1] e^(2x+1)/sqrt(1 - x^2) dx = 19.4671;   ∫[-1,1] x e^(2x+1)/sqrt(1 - x^2) dx = 13.5836;   ∫[-1,1] x^2 e^(2x+1)/sqrt(1 - x^2) dx = 12.6752

2. Estimate the value of the function at x = 4 from the table of data given below, using, (a)
Lagrange interpolating polynomial of 2nd degree using the points x=2,3,5; (b) Newton’s
interpolating polynomial of 4th degree.

x f(x)
1 1
2 12
3 54
5 375
6 756
Tutorial 5: Solution
Question 1 (a)

Conventional Method:

Second-degree polynomial: f2(t) = c0 + c1 t + c2 t^2; the least-squares normal equations over [-1, 3] are

[ ∫ dt      ∫ t dt    ∫ t^2 dt ] [c0]   [ ∫ e^t dt     ]
[ ∫ t dt    ∫ t^2 dt  ∫ t^3 dt ] [c1] = [ ∫ t e^t dt   ]
[ ∫ t^2 dt  ∫ t^3 dt  ∫ t^4 dt ] [c2]   [ ∫ t^2 e^t dt ]

⇒ [ 4      4      28/3  ] [c0]   [ 19.7177 ]
  [ 4      28/3   20    ] [c1] = [ 40.9068 ]
  [ 28/3   20     244/5 ] [c2]   [ 98.5883 ]

f2(t) = 0.358687 + 0.386269 t + 1.79334 t^2

Third-degree polynomial: f3(t) = c0 + c1 t + c2 t^2 + c3 t^3:

[ 4      4      28/3    20     ] [c0]   [ 19.7177 ]
[ 4      28/3   20      244/5  ] [c1] = [ 40.9068 ]
[ 28/3   20     244/5   364/3  ] [c2]   [ 98.5883 ]
[ 20     244/5  364/3   2188/7 ] [c3]   [ 246.913 ]

f3(t) = 1.14751 + 0.724336 t + 0.103008 t^2 + 0.563445 t^3

Legendre Polynomials:

Note: x = (t - 1)/2 maps t in [-1, 3] to x in [-1, 1].

Using the integral expressions given in the question, we get

b0 = ∫[-1,1] e^(2x+1) dx = 9.85883;   b1 = ∫[-1,1] x e^(2x+1) dx = 5.29729;
b2 = ∫[-1,1] ((-1 + 3x^2)/2) e^(2x+1) dx = 1.91289;
b3 = ∫[-1,1] ((-3x + 5x^3)/2) e^(2x+1) dx = 0.515074

From which c0 = b0/2 = 4.92941; c1 = 3b1/2 = 7.94594; c2 = 5b2/2 = 4.78222; c3 = 7b3/2 = 1.80276.

f2(x) = 4.92941 + 7.94594 x + 4.78222(-1 + 3x^2)/2

f3(x) = 4.92941 + 7.94594 x + 4.78222(-1 + 3x^2)/2 + 1.80276(-3x + 5x^3)/2

Very little additional effort is required: only c3 needs to be computed. With the conventional polynomial form, the entire set of linear equations has to be solved again.

[Figure: f(t) = e^t together with the 2nd- and 3rd-degree Legendre approximations, over t in [-1, 3]]
Question 1 (b)

Tchebycheff Polynomials:

Using the integral expressions given in the question, we get (c0 = b0/π; the others are divided by π/2):

b0 = ∫[-1,1] e^(2x+1)/sqrt(1 - x^2) dx = 19.4671;   b1 = ∫[-1,1] x e^(2x+1)/sqrt(1 - x^2) dx = 13.5836;
b2 = ∫[-1,1] (2x^2 - 1) e^(2x+1)/sqrt(1 - x^2) dx = 2 x 12.6752 - 19.4671 = 5.88330

f2(x) = 6.19657 + 8.64759 x + 3.74543(-1 + 2x^2)


[Figure: error of the 2nd-degree Legendre and Tchebycheff fits vs. t]
Error is reduced near the ends of the interval with the Tchebycheff method. However, for this case, it is not a minimax approximation! It can be shown that Tchebycheff is minimax when approximating a (d+1)-degree polynomial by a d-degree polynomial.
Question 2
Lagrange Polynomials

Using x0 = 2, x1 = 3, x2 = 5, we get

L0 = (x - 3)(x - 5)/3;   L1 = (x - 2)(x - 5)/(-2);   L2 = (x - 2)(x - 3)/6.

The values at x = 4 are -1/3, 1, and 1/3, respectively. Hence
f(4) = 12 x (-1/3) + 54 x (1) + 375 x (1/3) = 175.

Newton’s Divided Difference

x    f(x)    f[xi,xj]   f[xi,xj,xk]   3rd     4th
1      1
               11
2     12                   15.5
               42                      6.0
3     54                   39.5                0.5
              160.5                    8.5
5    375                   73.5
              381
6    756

Interpolating polynomial: 1 + 11(x-1) + 15.5(x-1)(x-2) + 6(x-1)(x-2)(x-3) + 0.5(x-1)(x-2)(x-3)(x-5)

Therefore, f(4) = 1 + 33 + 93 + 36 - 3 = 160
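The divided-difference table and the Newton-form evaluation can be sketched as (assumed implementation):

```python
# Newton divided differences for the tabulated data, and evaluation at x = 4.

def divided_differences(xs, ys):
    """Return the top edge of the divided-difference table."""
    n = len(xs)
    table = ys[:]
    coeffs = [table[0]]
    for order in range(1, n):
        table = [(table[i + 1] - table[i]) / (xs[i + order] - xs[i])
                 for i in range(n - order)]
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form c0 + c1(x-x0) + c2(x-x0)(x-x1) + ... at x."""
    result, prod = 0.0, 1.0
    for c, xi in zip(coeffs, xs):
        result += c * prod
        prod *= (x - xi)
    return result

xs = [1.0, 2.0, 3.0, 5.0, 6.0]
ys = [1.0, 12.0, 54.0, 375.0, 756.0]
coeffs = divided_differences(xs, ys)   # 1, 11, 15.5, 6, 0.5 as in the table
value = newton_eval(xs, coeffs, 4.0)   # f(4)
```

The coefficients match the table's top edge, and the evaluation gives f(4) = 160.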
