Chapter 3 Supplementary: (Recall the splitting matrix $N$, with $A = N - P$)
Let $\lambda$ be an eigenvalue of the iteration matrix $I - N^{-1}A$ with eigenvector $x$:
\[
(I - N^{-1}A)x = \lambda x
\;\Longleftrightarrow\;
(N - A)x = \lambda N x
\;\Longleftrightarrow\;
(1 - \lambda)\, N x = A x .
\]
Multiplying by $x^{\top}$ gives
\[
x^{\top} N x = \frac{1}{1-\lambda}\, x^{\top} A x , \tag{$*$}
\]
equivalently,
\[
1 - \lambda = \frac{x^{\top} A x}{x^{\top} N x} . \tag{$**$}
\]
\[
x^{(j+1)} = (D + \omega U)^{-1}\bigl[(1-\omega)D - \omega L\bigr]\, x^{(j+\frac{1}{2})} + \omega\,(D + \omega U)^{-1} f ,
\]
where
\[
M_{\mathrm{SSOR}} = (D + \omega U)^{-1}\bigl[(1-\omega)D - \omega L\bigr]\,(D + \omega L)^{-1}\bigl[(1-\omega)D - \omega U\bigr].
\]
where:
\[
A = N_{\mathrm{SSOR}} - P_{\mathrm{SSOR}} ,
\]
\[
N_{\mathrm{SSOR}}(\omega) = \frac{1}{\omega(2-\omega)}\,(D + \omega L)\, D^{-1}\, (D + \omega U) \quad\text{and}
\]
\[
P_{\mathrm{SSOR}}(\omega) = \frac{1}{\omega(2-\omega)}\,\bigl[(1-\omega)D - \omega L\bigr]\, D^{-1}\, \bigl[(1-\omega)D - \omega U\bigr].
\]
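As a quick sanity check of these formulas, here is a minimal NumPy sketch (not part of the original notes; the splitting $A = D + L + U$ into diagonal, strictly lower and strictly upper parts, and the function name, are illustrative assumptions) that forms $N_{\mathrm{SSOR}}(\omega)$ and $P_{\mathrm{SSOR}}(\omega)$ and verifies $A = N_{\mathrm{SSOR}} - P_{\mathrm{SSOR}}$.

```python
import numpy as np

def ssor_splitting(A, omega):
    """Form N_SSOR(omega) and P_SSOR(omega) for the splitting A = D + L + U.

    Illustrative sketch: D is the diagonal, L the strictly lower and U the
    strictly upper triangular part of A, mirroring the formulas above.
    """
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    Dinv = np.diag(1.0 / np.diag(A))
    c = 1.0 / (omega * (2.0 - omega))
    N = c * (D + omega * L) @ Dinv @ (D + omega * U)
    P = c * ((1.0 - omega) * D - omega * L) @ Dinv @ ((1.0 - omega) * D - omega * U)
    return N, P

# Small SPD test matrix: A should equal N - P for any 0 < omega < 2.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
N, P = ssor_splitting(A, omega=1.2)
print(np.allclose(A, N - P))  # expected: True
```

For $\omega = 1$, $N_{\mathrm{SSOR}}(1) = (D + L)\,D^{-1}\,(D + U)$ is the symmetric Gauss-Seidel splitting.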
We solve $A\xi = b$ (with $A$ symmetric positive definite) by minimizing
\[
f(\xi) = \tfrac{1}{2}\,\xi^{\top} A \xi - \xi^{\top} b
\]
using iterates of the form $\xi^{k+1} = \xi^{k} + \alpha_k d^{k}$ ($\xi^k \in \mathbb{R}^M$, $d^k \in \mathbb{R}^M$ and $\alpha_k \in \mathbb{R}$), $k = 0, 1, 2, \ldots$
The Hessian is
\[
f''(\xi) =
\begin{pmatrix}
\dfrac{\partial^2 f}{\partial \xi_1^2} & \cdots & \dfrac{\partial^2 f}{\partial \xi_1 \partial \xi_M} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial \xi_M \partial \xi_1} & \cdots & \dfrac{\partial^2 f}{\partial \xi_M^2}
\end{pmatrix}
= A \qquad \text{at } \xi = \xi^k .
\]
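Since $f$ is quadratic, the gradient used in the next step can be written out explicitly (a filled-in intermediate step, using the symmetry of $A$):
\[
f(\xi) = \tfrac12\,\xi^{\top} A \xi - \xi^{\top} b
\quad\Longrightarrow\quad
f'(\xi) = \tfrac12\,(A + A^{\top})\,\xi - b = A\xi - b ,
\qquad f''(\xi) = A .
\]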
So, choosing $\alpha_k$ to minimize $f(\xi^k + \alpha d^k)$ over $\alpha$, we have:
\[
f'(\xi^{k+1}) \cdot d^k = 0
\;\Longleftrightarrow\; (A\xi^{k+1} - b)\cdot d^k = 0
\;\Longleftrightarrow\; \bigl(A(\xi^k + \alpha_k d^k) - b\bigr)\cdot d^k = 0
\;\Longleftrightarrow\; (A\xi^k - b)\cdot d^k + \alpha_k\, d^k \cdot A d^k = 0 .
\]
Optimal $\alpha_k$ is:
\[
\alpha_k = -\,\frac{(A\xi^k - b)\cdot d^k}{d^k \cdot A d^k} .
\]
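For illustration, here is a minimal NumPy sketch (not from the notes) of the resulting descent iteration with the particular choice $d^k = r^k = -(A\xi^k - b)$, i.e. steepest descent with the optimal step $\alpha_k$ derived above; the function name and tolerances are assumptions.

```python
import numpy as np

def steepest_descent(A, b, xi0, tol=1e-10, max_iter=500):
    """Minimize f(xi) = 0.5 xi^T A xi - xi^T b by exact line search.

    Illustrative sketch: uses the residual direction d^k = -(A xi^k - b)
    and the optimal step alpha_k = -((A xi^k - b) . d^k) / (d^k . A d^k).
    """
    xi = np.array(xi0, dtype=float)
    for k in range(max_iter):
        d = b - A @ xi                     # d^k = r^k = -(A xi^k - b)
        if np.linalg.norm(d) < tol:
            break
        alpha = (d @ d) / (d @ (A @ d))    # optimal step along d^k
        xi = xi + alpha * d                # xi^{k+1} = xi^k + alpha_k d^k
    return xi, k

# Example: solve a small SPD system.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
xi, iters = steepest_descent(A, b, np.zeros(2))
print(xi, iters)  # xi should approximate the solution of A xi = b
```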
For the fixed-step iteration $\xi^{k+1} = \xi^k - \alpha(A\xi^k - b)$, the error $e^k := \xi^k - \xi$ satisfies $e^{k+1} = (I - \alpha A)e^k$, so we need $|1 - \alpha\lambda_j| < 1$, i.e. $0 < \alpha\lambda_j < 2$ for all $j$. Choose $\alpha = \frac{1}{\lambda_{\mathrm{Max}}}$ (any $0 < \alpha < \frac{2}{\lambda_{\mathrm{Max}}}$ works). Then:
\[
\rho(I - \alpha A) = 1 - \frac{\lambda_{\min}}{\lambda_{\mathrm{Max}}} .
\]
Define:
\[
\lambda_{\min} = \min\{\lambda_1, \lambda_2, \ldots, \lambda_M\}, \qquad
\lambda_{\mathrm{Max}} = \max_j \lambda_j, \qquad
\frac{\lambda_{\mathrm{Max}}}{\lambda_{\min}} = \kappa(A) = \text{condition number of } A ,
\]
so that
\[
\rho(I - \alpha A) = 1 - \frac{1}{\kappa(A)} .
\]
Hence
\[
\|e^k\| \le \Bigl(1 - \frac{1}{\kappa(A)}\Bigr)^{k} \|e^0\| , \qquad 1 - \frac{1}{\kappa(A)} < 1 ,
\]
so $e^k \to 0$ as $k \to \infty$, and the number of iterations needed to reduce the error by a factor $\varepsilon$ behaves like $n \sim \kappa(A)\log\frac{1}{\varepsilon}$.
Condition number large $\Longrightarrow$ convergence slow!!
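To make the iteration-count estimate explicit (a short derivation consistent with the error bound above, with $\varepsilon$ the desired error-reduction factor):
\[
\Bigl(1 - \frac{1}{\kappa(A)}\Bigr)^{n} \le \varepsilon
\;\Longleftrightarrow\;
n \ge \frac{\log(1/\varepsilon)}{-\log\bigl(1 - \frac{1}{\kappa(A)}\bigr)}
\approx \kappa(A)\,\log\frac{1}{\varepsilon} ,
\]
since $-\log\bigl(1 - \tfrac{1}{\kappa(A)}\bigr) \approx \tfrac{1}{\kappa(A)}$ when $\kappa(A)$ is large.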
For $\xi, \eta \in \mathbb{R}^M$, define the $A$-inner product $\langle \xi, \eta\rangle := \xi \cdot A\eta$. We will choose the search directions $d^k$ so that we have:
\[
\langle d^i, d^j\rangle = 0 \quad\text{for } i \ne j .
\]
We define:
\[
\|\eta\|_A = \langle \eta, \eta\rangle^{1/2} \quad\text{for } \eta \in \mathbb{R}^M .
\]
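Since $A$ is symmetric positive definite, $\langle\cdot,\cdot\rangle$ is indeed an inner product and $\|\cdot\|_A$ a norm (a quick check of the defining properties, filled in here):
\[
\langle \xi, \eta\rangle = \xi \cdot A\eta = (A\xi)\cdot \eta = \langle \eta, \xi\rangle ,
\qquad
\|\xi\|_A^2 = \langle \xi, \xi\rangle = \xi \cdot A\xi > 0 \quad \text{for } \xi \ne 0 .
\]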
We now state the conjugate gradient method:
Given $\xi^0 \in \mathbb{R}^M$, $d^0 = r^0 := -(A\xi^0 - b)$, find $\xi^k$ and $d^k$ ($k = 1, 2, \ldots$) such that:

(a) $\xi^{k+1} = \xi^k + \alpha_k d^k$,

(b) $\alpha_k = \dfrac{r^k \cdot d^k}{\langle d^k, d^k\rangle} = -\,\dfrac{(A\xi^k - b)\cdot d^k}{d^k \cdot A d^k}$,

(c) $d^{k+1} = r^{k+1} - \dfrac{\langle r^{k+1}, d^k\rangle}{\langle d^k, d^k\rangle}\, d^k$,

where
\[
r^{k+1} := -(A\xi^{k+1} - b) = r^k - \alpha_k A d^k . \tag{F}
\]
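A minimal NumPy sketch of the method as stated in (a)-(c) (an illustration, not the notes' own pseudocode; the stopping tolerance and function name are assumptions):

```python
import numpy as np

def conjugate_gradient(A, b, xi0, tol=1e-12, max_iter=200):
    """Conjugate gradient for SPD A, following (a)-(c) and (F) above (sketch)."""
    xi = np.array(xi0, dtype=float)
    r = b - A @ xi                      # r^0 = -(A xi^0 - b)
    d = r.copy()                        # d^0 = r^0
    for k in range(max_iter):
        Ad = A @ d
        alpha = (r @ d) / (d @ Ad)      # (b): alpha_k = r^k . d^k / <d^k, d^k>
        xi = xi + alpha * d             # (a): xi^{k+1} = xi^k + alpha_k d^k
        r = r - alpha * Ad              # (F): r^{k+1} = r^k - alpha_k A d^k
        if np.linalg.norm(r) < tol:
            break
        beta = (r @ Ad) / (d @ Ad)      # <r^{k+1}, d^k> / <d^k, d^k>
        d = r - beta * d                # (c): d^{k+1} = r^{k+1} - beta d^k
    return xi, k

# Example: in exact arithmetic CG converges in at most M steps.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
xi, iters = conjugate_gradient(A, b, np.zeros(2))
print(xi, iters)
```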
To see that $\mathrm{Span}(d^0, \ldots, d^k) = \mathrm{Span}(r^0, \ldots, r^k) = \mathrm{Span}\{r^0, Ar^0, \ldots, A^k r^0\}$ (Lemma 1), argue by induction. By induction hypothesis:
\[
d^k \in \mathrm{Span}\{r^0, Ar^0, \ldots, A^k r^0\}
\]
and so:
\[
A d^k \in \mathrm{Span}\{r^0, Ar^0, \ldots, A^{k+1} r^0\} .
\]
From (F), we see that
\[
\mathrm{Span}\{r^0, \ldots, r^{k+1}\} \subset \mathrm{Span}\{r^0, Ar^0, \ldots, A^{k+1} r^0\} .
\]
Also, from the induction hypothesis, $A^k r^0 \in \mathrm{Span}(d^0, \ldots, d^k)$.
We claim furthermore that
\[
\langle d^i, d^j\rangle = 0 \quad\text{for } i \ne j , \tag{FF}
\]
\[
r^i \cdot r^j = 0 \quad\text{for } i \ne j . \tag{FFF}
\]
Proof. Suppose the statement is true for $i, j \le k$. By Lemma 1, $\mathrm{Span}(d^0, \ldots, d^j) = \mathrm{Span}(r^0, \ldots, r^j)$. From the induction hypothesis,
\[
r^k \cdot d^j = 0 \quad\text{for } j = 0, 1, 2, \ldots, k-1
\]
($d^j \in \mathrm{Span}(r^0, r^1, \ldots, r^j)$).
Moreover, by the choice of $\alpha_k$,
\[
\frac{d}{d\alpha}\, f(\xi^k + \alpha d^k)\Big|_{\alpha = \alpha_k} = 0 .
\]
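Unpacking this derivative condition (a filled-in step using (a) and the gradient $f'(\xi) = A\xi - b$):
\[
0 = \frac{d}{d\alpha}\, f(\xi^k + \alpha d^k)\Big|_{\alpha = \alpha_k}
= \bigl(A(\xi^k + \alpha_k d^k) - b\bigr)\cdot d^k
= (A\xi^{k+1} - b)\cdot d^k
= -\,r^{k+1}\cdot d^k ,
\]
so $r^{k+1}\cdot d^k = 0$.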