Legendre Polynomials
Definition:
$$P_0(x) = 1; \qquad P_n(x) = \frac{1}{2^n (n!)} \frac{d^n}{dx^n}\left[\left(x^2 - 1\right)^n\right] \quad \text{for } n = 1, 2, \ldots$$
Orthogonality:
$$(P_i, P_j) = \int_{-1}^{1} P_i(x) P_j(x)\, dx = \begin{cases} 0 & i \neq j \\ \dfrac{2}{2i+1} & i = j \end{cases}$$
Recursion:
$$P_i(x) = \frac{2i-1}{i}\, x P_{i-1}(x) - \frac{i-1}{i}\, P_{i-2}(x), \quad i = 2, 3, \ldots; \qquad P_0(x) = 1;\; P_1(x) = x$$
The equivalent polynomials for discrete data are Gram’s polynomials.
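As a quick numerical illustration, here is a minimal Python sketch (assuming NumPy is available; the Gauss quadrature grid is used only to check the integrals) that evaluates P_n by the recursion above and verifies the orthogonality relation:

```python
import numpy as np

def legendre(n, x):
    """Evaluate P_n(x) using the three-term recursion above."""
    p_prev, p = np.ones_like(x), x.copy()   # P_0 and P_1
    if n == 0:
        return p_prev
    for i in range(2, n + 1):
        p_prev, p = p, ((2 * i - 1) * x * p - (i - 1) * p_prev) / i
    return p

# Verify orthogonality with 20-point Gauss-Legendre quadrature on [-1, 1].
x, w = np.polynomial.legendre.leggauss(20)
for i, j in [(2, 3), (3, 3)]:
    integral = np.sum(w * legendre(i, x) * legendre(j, x))
    print(i, j, round(integral, 10))   # 0 for i != j, 2/(2j+1) = 2/7 for i = j = 3
```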
[Figure: comparison of the actual f(x) with its approximations using Legendre polynomials P0, P2, P4, and P8 over −1 ≤ x ≤ 1.]
(illustrated for the 3×3 matrix $A = \begin{bmatrix} 1 & 2 & -3 \\ 2 & 1 & 0 \\ -3 & 0 & 3 \end{bmatrix}$)
To find the jth (j = 1, 2, …, n−1) matrix Pj:
1. Take the jth column of the matrix Pj−1Pj−2…P2P1A (for j = 1, use the 1st column of A) and normalize it to have unit L2 norm. Denote this vector by {x}. For the given matrix, for the first matrix P1:
$$\{x\} = \begin{Bmatrix} 1/\sqrt{14} \\ 2/\sqrt{14} \\ -3/\sqrt{14} \end{Bmatrix}$$
2. Compute the L2 norm of components j through n of {x}, i.e., $X = \sqrt{x_j^2 + x_{j+1}^2 + \cdots + x_n^2}$ (use the negative root if $x_j > 0$). Define a new vector {y}, of unit magnitude, which has its first j−1 components as zero, the jth component as
$$y_j = \sqrt{\frac{1}{2}\left(1 - \frac{x_j}{X}\right)}$$
and the other components (k = j+1 to n) as
$$y_k = -\frac{x_k}{2 X y_j}$$
$$X = -1; \qquad \{y\} = \begin{Bmatrix} \sqrt{\dfrac{1}{2}\left(1 + \dfrac{1}{\sqrt{14}}\right)} \\[3mm] \dfrac{1}{\sqrt{14}\sqrt{\dfrac{1}{2}\left(1 + \dfrac{1}{\sqrt{14}}\right)}} \\[3mm] -\dfrac{3}{2\sqrt{14}\sqrt{\dfrac{1}{2}\left(1 + \dfrac{1}{\sqrt{14}}\right)}} \end{Bmatrix} = \begin{Bmatrix} 0.7960 \\ 0.3358 \\ -0.5036 \end{Bmatrix}$$
3. Obtain the matrix Pj as $P_j = I - 2yy^T$:
$$P_1 = \begin{bmatrix} -0.2673 & -0.5345 & 0.8018 \\ -0.5345 & 0.7745 & 0.3382 \\ 0.8018 & 0.3382 & 0.4927 \end{bmatrix}$$
Similarly, for the second matrix P2, normalizing the second column of P1A gives
$$\{x\} = \begin{Bmatrix} -0.4781 \\ -0.1317 \\ 0.8684 \end{Bmatrix}$$
$$X = 0.8783; \qquad \{y\} = \begin{Bmatrix} 0 \\ 0.7583 \\ -0.6519 \end{Bmatrix}$$
$$P_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -0.1500 & 0.9887 \\ 0 & 0.9887 & 0.1500 \end{bmatrix}$$
Using this technique of factorization, after about 15 iterations of the QR method we get a diagonal matrix showing the eigenvalues as 5.496, −2.074, and 1.578 (an Excel file showing the factorization technique and the first few QR iterations may be seen at http://home.iitk.ac.in/~rajeshs/QR.xls).
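To make the iteration concrete, here is a minimal Python sketch of the QR method applied to the example matrix (assuming NumPy; numpy.linalg.qr is used for each factorization in place of the hand-built Householder matrices above):

```python
import numpy as np

# Example matrix from the notes.
A = np.array([[ 1.0, 2.0, -3.0],
              [ 2.0, 1.0,  0.0],
              [-3.0, 0.0,  3.0]])

# Basic (unshifted) QR iteration: A_{k+1} = R_k Q_k.
# For this symmetric matrix it converges toward a diagonal matrix
# whose entries are the eigenvalues.
for k in range(15):
    Q, R = np.linalg.qr(A)
    A = R @ Q

print(np.diag(A))   # approx. [5.496, -2.074, 1.578]
```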
Continuous:
$$\hat{f}(x) = \sum_{j=-\infty}^{\infty} c_j e^{ijx} \quad \text{where} \quad c_j = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-ijx}\, dx$$
OR:
$$\hat{f}(x) = \frac{a_0}{2} + \sum_{j=1}^{\infty} \left(a_j \cos jx + b_j \sin jx\right) \quad \text{where} \quad a_j = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos jx\, dx \quad \text{and} \quad b_j = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin jx\, dx$$
Discrete: (one time period is divided into M+1 intervals; the last point is not considered since it would be the same as the first point due to periodicity)
for $x_\alpha = \frac{2\pi\alpha}{M+1}$, i.e., M+1 equally spaced points between −π (inclusive) and π (exclusive)
If M is even: θ = 0, k = M/2; if M is odd: θ = 1, k = (M−1)/2.
$$\hat{f}(x) = \sum_{j=-k}^{k+\theta} c_j e^{ijx} \quad \text{where} \quad c_j = \frac{1}{M+1} \sum_{\alpha=0}^{M} f(x_\alpha) e^{-ijx_\alpha}$$
OR
$$\hat{f}(x) = \frac{a_0}{2} + \sum_{j=1}^{k} \left(a_j \cos jx + b_j \sin jx\right) + \frac{\theta}{2}\, a_{k+1} \cos(k+1)x$$
$$\text{where } a_j = \frac{2}{M+1} \sum_{\alpha=0}^{M} f(x_\alpha) \cos jx_\alpha \quad \text{and} \quad b_j = \frac{2}{M+1} \sum_{\alpha=0}^{M} f(x_\alpha) \sin jx_\alpha$$
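A short Python sketch of the discrete expansion (assuming NumPy; the test function and grid size are invented for illustration), which also checks the interpolation property noted below:

```python
import numpy as np

M = 8                                    # M+1 = 9 sample points
alpha = np.arange(M + 1)
x = 2 * np.pi * alpha / (M + 1)          # one period, last point omitted
f = np.exp(np.sin(x))                    # an arbitrary periodic test function

# theta and k as defined above
theta, k = (0, M // 2) if M % 2 == 0 else (1, (M - 1) // 2)

# Complex Fourier coefficients c_j, j = -k ... k+theta.
j = np.arange(-k, k + theta + 1)
c = np.array([(f * np.exp(-1j * jj * x)).sum() / (M + 1) for jj in j])

# Reconstruct f_hat at the grid points: it should match f (interpolation).
f_hat = np.array([(c * np.exp(1j * j * xx)).sum().real for xx in x])
print(np.max(np.abs(f_hat - f)))         # ~1e-15
```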
Note that the form given above results in interpolation, with $\hat{f}(x)$ equal to f(x) at all the (M+1) grid points. Fewer terms could be used (i.e., the summation not carried up to k+θ) to obtain a least squares fit. The j=1 term is the fundamental frequency and j=2 is the first harmonic. Also note that (i) the textbook uses a "time period" of T, while we have used 2π in the class.
Also, for any vector norm there exists a "consistent" matrix norm such that
$$\|Ax\| \leq \|A\| \cdot \|x\|$$
Similar to the Euclidean norm of a vector, there is the Frobenius norm for a matrix, defined by
$$\|A\|_2 = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{i,j}^2}$$
The norm which is easy to compute, and is therefore commonly used, is the maximum norm (also called the uniform norm), which for a vector is the element with the largest magnitude and, for a matrix, is the largest row-sum of absolute values, i.e.,
$$\|A\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^{n} |a_{ij}|$$
As shown in the class, for a linear system of equations Ax = b,
$$\frac{\|\delta x\|}{\|x + \delta x\|} \leq \kappa(A) \frac{\|\delta A\|}{\|A\|}$$
where κ(A) is the condition number of the matrix A and is equal to $\|A\| \cdot \|A^{-1}\|$. It can also be shown that
$$\frac{\|\delta x\|}{\|x\|} \leq \kappa(A) \frac{\|\delta b\|}{\|b\|}$$
Thus for large condition numbers, small (relative) changes in A or b will produce large (relative) changes in x.
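A small Python illustration of this sensitivity (assuming NumPy; the nearly singular matrix is an invented example):

```python
import numpy as np

# A nearly singular (ill-conditioned) matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

# Condition number using the maximum (infinity) norm.
kappa = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
print(kappa)                       # ~4.0e4

x = np.linalg.solve(A, b)          # [1, 1]
db = np.array([0.0, 1e-4])         # tiny relative change in b
dx = np.linalg.solve(A, b + db) - x
# Relative change in x is ~1 even though the relative change in b is ~5e-5.
print(np.linalg.norm(dx, np.inf) / np.linalg.norm(x, np.inf))
```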
Convergence Properties of the Newton-Raphson Method
As discussed in the class,
$$E_t^{i+1} = -\frac{f''(x_r)}{2 f'(x_r)} \left[E_t^i\right]^2$$
indicating quadratic convergence.
If we assume that, for all points x_a and x_b "near" the root, $\left|-\frac{f''(x_a)}{2 f'(x_b)}\right|$ has an upper bound of M, we can write
$$M \cdot \left|E_t^{i+1}\right| \leq \left(M \cdot \left|E_t^i\right|\right)^2$$
Now if we assume that the initial guess x_0 is sufficiently near the root AND $M \cdot \left|E_t^0\right| < 1$, it can be shown that the iterations will converge and
$$\left|E_t^i\right| \leq \frac{\left(M \cdot \left|E_t^0\right|\right)^{2^i}}{M}$$
So, the N-R method will always converge if the initial guess is sufficiently close to the (simple) root AND the magnitude of $M \cdot E_t^0$ is less than 1. However, since the root is not known beforehand, it is difficult to use this criterion.
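A minimal Python demonstration (the function f(x) = x² − 2 and the starting guess are invented for illustration) showing the error roughly squaring at each Newton-Raphson step:

```python
import math

# Solve f(x) = x^2 - 2 = 0 (root: sqrt(2)) by Newton-Raphson.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

x = 1.5                     # initial guess near the root
root = math.sqrt(2.0)
for i in range(5):
    x = x - f(x) / df(x)
    # Error roughly squares each step: ~2.5e-3, 2.1e-6, 1.6e-12, ...
    print(i + 1, abs(x - root))
```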
For solving y = f(x) = 0 by the regula falsi (false position) method, it can be shown that after a sufficient number of iterations one end of the interval remains fixed. If we take this fixed point as x_0, we have
$$x_{i+1} = x_0 + \frac{x_i - x_0}{y_i - y_0}(-y_0)$$
which is identical to Newton's divided difference linear interpolation of the function x = g(y) to obtain x for y = 0. The error of interpolation is given by ($x_r$ is the root)
$$E_t^{i+1} = x_r - x_{i+1} = \frac{1}{2} g''(\tilde{y})(-y_0)(-y_i)$$
where $\tilde{y}$ is in the interval $(y_0, y_i)$. Using the mean value theorem between $x_r$ and x to obtain y, and using
$$g''(y) = -\frac{f''(x)}{[f'(x)]^3}$$
we may write
$$E_t^{i+1} = -\frac{1}{2} \frac{f''(\tilde{x})}{[f'(\tilde{x})]^3}\, f'(\tilde{x}_0) E_t^0\, f'(\tilde{x}_i) E_t^i$$
where $\tilde{x}$ is in the interval $(x_0, x_i)$, $\tilde{x}_0$ is in the interval $(x_0, x_r)$, and $\tilde{x}_i$ is in the interval $(x_i, x_r)$.
Assuming that the iterations converge to the root $x_r$,
$$E_t^{i+1} = -\frac{f''(\tilde{x}_{01})\, f'(\tilde{x}_0)\, f'(x_r)\, E_t^0}{2 [f'(\tilde{x}_{01})]^3}\, E_t^i$$
i.e., the error is reduced by a roughly constant factor at each step, indicating linear convergence.
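A short Python sketch of the method (f(x) = x² − 2 on [1, 2], chosen so that the right end stays fixed; all values are for illustration) showing the fixed end and the linear error decay:

```python
import math

# Regula falsi for f(x) = x^2 - 2 on [1, 2]; root = sqrt(2).
f = lambda x: x * x - 2.0
x0, x1 = 1.0, 2.0           # f(x0) < 0 < f(x1)
root = math.sqrt(2.0)

for i in range(8):
    # Linear interpolation: root of the chord through (x0, y0), (x1, y1).
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    # Keep the sub-interval that still brackets the root.
    if f(x0) * f(x2) < 0:
        x1 = x2
    else:
        x0 = x2
    # For this convex f the end x1 = 2 remains fixed, and the error
    # shrinks by a roughly constant factor (linear convergence).
    print(i + 1, abs(x2 - root))
```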
For the Muller method, since we use quadratic interpolation to find x for y = 0, the error of interpolation is given by ($x_r$ is the root)
$$E_t^{i+1} = x_r - x_{i+1} = \frac{1}{6} g'''(\tilde{y})(-y_{i-2})(-y_{i-1})(-y_i)$$
where $\tilde{y}$ is in the interval of $(y_{i-2}, y_{i-1}, y_i)$. Again, assuming that the iterations converge to the root $x_r$, we may write
$$E_t^{i+1} = \alpha\, E_t^{i-2} E_t^{i-1} E_t^i \quad \text{where} \quad \alpha \approx -\frac{f'''(x_r)}{6 f'(x_r)}$$
If p denotes the order of the Muller method, we get p³ = p² + p + 1, implying that the order is 1.839 (better than the secant method but not as good as Newton-Raphson).
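The order equation can itself be solved numerically; a minimal Python sketch using Newton-Raphson on p³ = p² + p + 1:

```python
# Find the order p from p^3 = p^2 + p + 1 using Newton-Raphson.
g = lambda p: p**3 - p**2 - p - 1.0
dg = lambda p: 3 * p**2 - 2 * p - 1.0

p = 2.0                     # initial guess
for _ in range(6):
    p = p - g(p) / dg(p)
print(p)                    # 1.8393... (order of the Muller method)
```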
Continuing the earlier note on Fourier expansions: therefore, while the book has the fundamental frequency as $\omega_0 = \frac{2\pi}{T}$ for the continuous case, we have $\omega_0 = 1$; (ii) the discrete approximation given in the book requires complex computations, while that given above in terms of sines and cosines does not (although they are equivalent).
Tchebycheff Polynomials
Definition:
$$T_n(x) = \cos\left(n \cos^{-1} x\right)$$
Orthogonality:
$$(T_i, T_j) = \int_{-1}^{1} T_i(x) T_j(x) (1 - x^2)^{-1/2}\, dx = \begin{cases} 0 & i \neq j \\ \pi & i = j = 0 \\ \dfrac{\pi}{2} & i = j \neq 0 \end{cases}$$
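A quick Python check of the definition (assuming NumPy), comparing it against the standard three-term recursion T_{n+1} = 2xT_n − T_{n−1} that the definition implies (the recursion is a well-known identity, not stated above):

```python
import numpy as np

def T(n, x):
    """Tchebycheff polynomial via the trigonometric definition above."""
    return np.cos(n * np.arccos(x))

x = np.linspace(-1.0, 1.0, 101)
# Recursion T_{n+1} = 2x T_n - T_{n-1} with T_0 = 1, T_1 = x.
t_prev, t = np.ones_like(x), x.copy()
for n in range(1, 6):
    t_prev, t = t, 2 * x * t - t_prev
    print(n + 1, np.max(np.abs(t - T(n + 1, x))))   # ~1e-14 for each n
```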
[Figure: comparison of the actual f(x) with its approximations using Tchebycheff polynomials T0, T2, T4, and T8 over −1 ≤ x ≤ 1.]
For the discrete case (summing over the m+1 discrete points $x_k$, the zeros of $T_{m+1}$):
$$(T_i, T_j) = \sum_{k=0}^{m} T_i(x_k) T_j(x_k) = \begin{cases} 0 & i \neq j \\ m+1 & i = j = 0 \\ \dfrac{m+1}{2} & i = j \neq 0 \end{cases}$$
Cubic Splines
Using the local coordinate system for the ith segment, $X = \frac{x - x_{i-1}}{x_i - x_{i-1}}$, and using the symbol $\Delta_i$ to represent $(x_i - x_{i-1})$, we get (starting from the fact that the second derivative is linear) the equations below, which along with the two end conditions $f_0'' = 0$ and $f_m'' = 0$ could be solved using the Thomas algorithm for tridiagonal systems.
Starting from the first derivative, which is a quadratic function of X, we get (after using the two end conditions for the first derivative and an unknown constant C1), and then integrating once to obtain the cubic polynomial and applying the two end conditions for the function values,
$$\hat{f}_i = (1 - 3X^2 + 2X^3) f_{i-1} + (3X^2 - 2X^3) f_i + \Delta_i (X - 2X^2 + X^3) f'_{i-1} + \Delta_i (-X^2 + X^3) f'_i$$
$$\hat{f}'_i = \frac{6}{\Delta_i}(-X + X^2) f_{i-1} + \frac{6}{\Delta_i}(X - X^2) f_i + (1 - 4X + 3X^2) f'_{i-1} + (-2X + 3X^2) f'_i$$
$$\hat{f}''_i = \frac{6}{\Delta_i^2}(-1 + 2X) f_{i-1} + \frac{6}{\Delta_i^2}(1 - 2X) f_i + \frac{2}{\Delta_i}(-2 + 3X) f'_{i-1} + \frac{2}{\Delta_i}(-1 + 3X) f'_i$$
Continuity of the second derivative gives us
$$\Delta_{i+1} f'_{i-1} + 2(\Delta_i + \Delta_{i+1}) f'_i + \Delta_i f'_{i+1} = -3\frac{\Delta_{i+1}}{\Delta_i} f_{i-1} + 3\left(\frac{\Delta_{i+1}}{\Delta_i} - \frac{\Delta_i}{\Delta_{i+1}}\right) f_i + 3\frac{\Delta_i}{\Delta_{i+1}} f_{i+1} \quad \text{for } i = 1 \text{ to } (m-1)$$
The two end conditions $f_0'' = 0$ and $f_m'' = 0$ give the other two equations as
$$2f'_0 + f'_1 = 3\frac{f_1 - f_0}{\Delta_1}$$
$$f'_{m-1} + 2f'_m = 3\frac{f_m - f_{m-1}}{\Delta_m}$$
If we denote the "slope" of the data points as $s_i = \frac{f_i - f_{i-1}}{x_i - x_{i-1}}$, we may write these equations as
$$\hat{f}_i = (1 - X) f_{i-1} + X f_i + \Delta_i X (1 - X)\left[(1 - X)(f'_{i-1} - s_i) - X(f'_i - s_i)\right]$$
$$\Delta_{i+1} f'_{i-1} + 2(\Delta_i + \Delta_{i+1}) f'_i + \Delta_i f'_{i+1} = 3(\Delta_i s_{i+1} + \Delta_{i+1} s_i) \quad \text{for } i = 1 \text{ to } (m-1)$$
$$2f'_0 + f'_1 = 3s_1$$
$$f'_{m-1} + 2f'_m = 3s_m$$
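A Python sketch tying these equations together (assuming NumPy; the data points are invented): it assembles the tridiagonal system for the nodal slopes, solves it with the Thomas algorithm, and evaluates the spline using the local-coordinate form above:

```python
import numpy as np

def spline_slopes(x, f):
    """Nodal slopes f'_i for the natural cubic spline (Thomas algorithm)."""
    m = len(x) - 1
    d = np.diff(x)                      # Delta_i, length m
    s = np.diff(f) / d                  # slopes s_i, length m
    a, b, c, r = (np.zeros(m + 1) for _ in range(4))
    b[0], c[0], r[0] = 2.0, 1.0, 3.0 * s[0]          # 2 f'_0 + f'_1 = 3 s_1
    for i in range(1, m):                            # interior equations
        a[i] = d[i]                                  # coefficient of f'_{i-1}
        b[i] = 2.0 * (d[i - 1] + d[i])
        c[i] = d[i - 1]                              # coefficient of f'_{i+1}
        r[i] = 3.0 * (d[i - 1] * s[i] + d[i] * s[i - 1])
    a[m], b[m], r[m] = 1.0, 2.0, 3.0 * s[m - 1]      # f'_{m-1} + 2 f'_m = 3 s_m
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        r[i] -= w * r[i - 1]
    fp = np.zeros(m + 1)
    fp[m] = r[m] / b[m]
    for i in range(m - 1, -1, -1):
        fp[i] = (r[i] - c[i] * fp[i + 1]) / b[i]
    return fp

def spline_eval(x, f, fp, xq):
    """Evaluate the spline at xq using the local-coordinate form above."""
    i = min(max(np.searchsorted(x, xq), 1), len(x) - 1)
    D = x[i] - x[i - 1]
    X = (xq - x[i - 1]) / D
    return ((1 - 3 * X**2 + 2 * X**3) * f[i - 1] + (3 * X**2 - 2 * X**3) * f[i]
            + D * (X - 2 * X**2 + X**3) * fp[i - 1] + D * (-X**2 + X**3) * fp[i])

x = np.array([0.0, 1.0, 2.5, 4.0])
f = np.array([1.0, 2.0, 0.5, 3.0])
fp = spline_slopes(x, f)
print(spline_eval(x, f, fp, 1.7))
```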
Fourier Expansions:
Orthogonality:
Continuous:
$$\int_{-\pi}^{\pi} \sin jx \sin kx\, dx = \begin{cases} 0 & j \neq k \\ \pi & j = k \neq 0 \end{cases}$$
$$\int_{-\pi}^{\pi} \cos jx \cos kx\, dx = \begin{cases} 0 & j \neq k \\ \pi & j = k \neq 0 \\ 2\pi & j = k = 0 \end{cases}$$
$$\int_{-\pi}^{\pi} \sin jx \cos kx\, dx = 0 \quad \text{for all } j, k$$
Discrete:
for $x_\alpha = \frac{2\pi\alpha}{M+1}$, i.e., M+1 equally spaced points between −π (inclusive) and π (exclusive):
$$\sum_{\alpha=0}^{M} e^{ijx_\alpha} e^{-ikx_\alpha} = \begin{cases} M+1 & \text{if } \dfrac{j-k}{M+1} \text{ is an integer} \\ 0 & \text{otherwise} \end{cases}$$
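A quick numerical check of this discrete orthogonality (assuming NumPy):

```python
import numpy as np

M = 7
x = 2 * np.pi * np.arange(M + 1) / (M + 1)
for j, k in [(3, 3), (3, 5), (3, 3 + (M + 1))]:
    # Expect M+1 when (j-k)/(M+1) is an integer, 0 otherwise.
    print(j, k, np.sum(np.exp(1j * j * x) * np.exp(-1j * k * x)).round(10))
```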
Gram’s Polynomial
The general equation for generating Gram's polynomials of order m for equidistant points between −1 and 1 ($x_i = -1 + 2i/m$ for i = 0 to m) is
$$G_{i+1}(x) = \alpha_i x\, G_i(x) - \frac{\alpha_i}{\alpha_{i-1}} G_{i-1}(x)$$
$$\text{where } \alpha_i = \frac{m}{i+1} \sqrt{\frac{4(i+1)^2 - 1}{(m+1)^2 - (i+1)^2}} \quad \text{and} \quad G_{-1}(x) = 0;\; G_0(x) = \frac{1}{\sqrt{m+1}}$$
For order 1: $G_0 = \frac{1}{\sqrt{2}};\; G_1 = \frac{x}{\sqrt{2}}$
For order 2: $G_0 = \frac{1}{\sqrt{3}};\; G_1 = \frac{x}{\sqrt{2}};\; G_2 = \sqrt{\frac{3}{2}}\, x^2 - \sqrt{\frac{2}{3}}$
The interpolation formula is then given by
$$\hat{f}(x) = \sum_{i=0}^{m} C_i G_i(x)$$
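A Python sketch (assuming NumPy) generating the Gram polynomial values at the grid points from the recursion above and checking that they form an orthonormal set under the plain sum over the m+1 points:

```python
import numpy as np

def gram_values(m):
    """Values of G_0..G_m at the m+1 equidistant points x_i = -1 + 2i/m."""
    x = -1.0 + 2.0 * np.arange(m + 1) / m
    G = np.zeros((m + 2, m + 1))        # rows: G_{-1}, G_0, ..., G_m
    G[1] = 1.0 / np.sqrt(m + 1)         # G_0
    alpha_prev = 1.0                    # dummy; the G_{-1} = 0 row kills its term
    for i in range(m):
        alpha = (m / (i + 1)) * np.sqrt((4 * (i + 1)**2 - 1)
                                        / ((m + 1)**2 - (i + 1)**2))
        G[i + 2] = alpha * x * G[i + 1] - (alpha / alpha_prev) * G[i]
        alpha_prev = alpha
    return x, G[1:]                     # drop the G_{-1} row

x, G = gram_values(4)
print((G @ G.T).round(12))              # identity: discrete orthonormality
```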
p = 2 is the Euclidean norm and p → ∞ denotes the maximum norm. The properties of vector and matrix norms are (x, y are vectors; A, B are matrices; α is a scalar):
$\|x\| = 0$ only if x is a null vector; otherwise $\|x\| > 0$
$\|A\| = 0$ only if A is a null matrix; otherwise $\|A\| > 0$
$\|\alpha x\| = |\alpha| \|x\|$
$\|\alpha A\| = |\alpha| \|A\|$
$\|x + y\| \leq \|x\| + \|y\|$
$\|A + B\| \leq \|A\| + \|B\|$
$\|AB\| \leq \|A\| \cdot \|B\|$
QR factorization
Any n × n real matrix can be written as A = QR, where Q is orthogonal and R is upper triangular. To obtain Q and R, we use the Householder transformation as follows:
Let $P_1, P_2, \ldots, P_{n-1}$ be matrices such that $P_{n-1} P_{n-2} \cdots P_2 P_1 A\, (= R)$ is upper triangular. These matrices may be chosen as orthogonal matrices and are known as Householder matrices. Then we will have the required factorization with
$$Q = \left(P_{n-1} P_{n-2} \cdots P_2 P_1\right)^T$$
We first find P1 such that P1A will have all elements below the diagonal in the first column as zero. We would then find P2 such that P2P1A will have all zeroes below the diagonal in both the first and the second columns, and so on, till we find Pn−1, which produces the upper triangular matrix. The procedure is given in the numbered steps earlier, illustrated for the 3×3 example; a code sketch follows:
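As a concrete illustration, here is a minimal Python sketch (assuming NumPy) of that construction: each P_j is built as P_j = I − 2yyᵀ from the vector {y} of the numbered steps, and the accumulated product gives Q and R:

```python
import numpy as np

def householder_qr(A):
    """QR via the Householder matrices P_j = I - 2 y y^T described above."""
    n = A.shape[0]
    R = A.astype(float).copy()
    Q = np.eye(n)
    for j in range(n - 1):
        x = R[:, j] / np.linalg.norm(R[:, j])    # step 1: normalized jth column
        X = np.sqrt(np.sum(x[j:] ** 2))          # step 2: norm of components j..n
        if x[j] > 0:
            X = -X                               # negative root if x_j > 0
        y = np.zeros(n)
        y[j] = np.sqrt(0.5 * (1.0 - x[j] / X))
        y[j + 1:] = -x[j + 1:] / (2.0 * X * y[j])
        P = np.eye(n) - 2.0 * np.outer(y, y)     # step 3
        R = P @ R
        Q = Q @ P.T                              # Q = (P_{n-1} ... P_1)^T
    return Q, R

A = np.array([[1.0, 2.0, -3.0], [2.0, 1.0, 0.0], [-3.0, 0.0, 3.0]])
Q, R = householder_qr(A)
print(np.round(R, 4))           # upper triangular
print(np.allclose(Q @ R, A))    # True
```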