
Available online at www.pelagiaresearchlibrary.com

Pelagia Research Library


Advances in Applied Science Research, 2013, 4(2):163-168

ISSN: 0976-8610
CODEN (USA): AASRFC

The successive over relaxation method in multi-layer grid refinement scheme

Tsun-Zee Mai^1 and Leina Wu^2*

^1 Department of Mathematics, University of Alabama, Tuscaloosa, AL 35487, USA
^2 Department of Mathematics, Queens University of Charlotte, Charlotte, NC 28274, USA
____________________________________________________________________________________________

ABSTRACT

The successive over-relaxation (SOR) method has been widely used as an iterative method for solving large sparse linear systems. When solving a partial differential equation over a rectangular domain with Dirichlet boundary conditions, the multi-layer grid refinement method can be used to generate the linear system with higher efficiency than the uniform grid scheme. In this paper, we study the SOR method in the multi-layer grid refinement scheme. A heuristic estimate of the optimal parameter of the SOR method is given, and numerical experiments are carried out to verify the estimate in this scheme.

Key words: partial differential equation, iterative methods, SOR method, multi-layer grid refinement method
____________________________________________________________________________________________

INTRODUCTION

The multi-layer grid refinement method [5] is used to solve a partial differential equation of the form

A(x, y)u_xx + C(x, y)u_yy + D(x, y)u_x + E(x, y)u_y + F(x, y)u = G(x, y)    (1.1)

where A, C, D, E, F are functions of x and y, with Dirichlet boundary conditions on a rectangular region. A
numerical solution of the partial differential equation is based on the finite difference method, which involves a
five-point scheme. It discretizes the PDE into a set of difference equations so that a linear system can be generated
and solved. Normally, the uniform grid scheme is applied because of its ease of use, but due to the large number of
grid points, the coefficient matrix A can be extremely large, and consequently the computation time may be
substantial. However, in many cases only a small subdomain of the region is of great interest, and it is not necessary
to use a very fine grid over the entire region. The multi-layer grid refinement reduces the size of the coefficient
matrix A by using far fewer grid points over the whole region than the usual uniform grid scheme. We place fine
grids in the subdomain of interest and coarse grids elsewhere in the region, with a special treatment [5] for obtaining
the partial derivatives at the inner boundary points. Figure 1.1 illustrates a possible grid pattern of the multi-layer
grid refinement method. As a result, the size of the coefficient matrix of the linear system and the computational
time are substantially reduced without sacrificing the accuracy of the solutions.


Figure 1.1. Grid pattern for the two-layer scheme
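
For background, the following Python sketch shows the standard five-point central-difference discretization of equation (1.1) on a single uniform grid with Dirichlet boundary conditions. It is an illustration written for this exposition, not the authors' code: the multi-layer refinement of [5] additionally requires the special treatment of the inner boundary points mentioned above, which is omitted here, and the boundary-data function g and the natural row-by-row ordering are choices made only for this sketch.

# Five-point central-difference discretization of (1.1) on a uniform grid.
# A dense matrix is used for clarity; a sparse format would be used in practice.
import numpy as np

def assemble_uniform(A, C, D, E, F, G, g, n):
    """Assemble M u = b for A u_xx + C u_yy + D u_x + E u_y + F u = G on (0,1)^2.
    A, C, D, E, F, G are functions of (x, y); g(x, y) supplies the Dirichlet data;
    n is the number of subintervals, so h = 1/n and the unknowns are the
    (n-1)^2 interior grid values ordered row by row."""
    h = 1.0 / n
    m = n - 1                                    # interior points per direction
    idx = lambda i, j: (j - 1) * m + (i - 1)     # grid index (i, j) -> equation number
    M = np.zeros((m * m, m * m))
    b = np.zeros(m * m)
    for j in range(1, n):
        for i in range(1, n):
            x, y = i * h, j * h
            a, c, d, e, f = A(x, y), C(x, y), D(x, y), E(x, y), F(x, y)
            # central differences: u_xx ~ (u_E - 2u_C + u_W)/h^2, u_x ~ (u_E - u_W)/(2h)
            cE, cW = a / h**2 + d / (2 * h), a / h**2 - d / (2 * h)
            cN, cS = c / h**2 + e / (2 * h), c / h**2 - e / (2 * h)
            cC = -2 * a / h**2 - 2 * c / h**2 + f
            k = idx(i, j)
            M[k, k] = cC
            b[k] = G(x, y)
            for (ii, jj, coef) in [(i + 1, j, cE), (i - 1, j, cW),
                                   (i, j + 1, cN), (i, j - 1, cS)]:
                if 1 <= ii <= m and 1 <= jj <= m:
                    M[k, idx(ii, jj)] = coef
                else:                            # neighbour lies on the boundary
                    b[k] -= coef * g(ii * h, jj * h)
    return M, b

For MP1 of section 3, for instance, one would pass A = C = 1, D = E = F = G = 0 and g(x, y) = e^(−x) cos y.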

2. The SOR method


Numerical iterative methods [3, 6, 7, 8, 9] have been used to solve large sparse linear systems of the form

Au = b    (2.1)

where A is a given matrix, b is a given vector, and u is the unknown vector. There are several basic iterative methods, e.g. the Jacobi method [2], the Gauss-Seidel (GS) method [8], and many others. In this paper we focus on the use of the successive over-relaxation (SOR) method [8, 10] to solve the linear system.

We let A be written as

A = D − C_L − C_U.    (2.2)

The matrix D is a diagonal matrix with the same diagonal elements as A; C_L and C_U are the strictly lower and strictly upper triangular parts of A, respectively. Introducing a parameter ω into the Gauss-Seidel method turns it into a robust stand-alone method, called the SOR method.
The iteration of the SOR method is given by

u^(n+1) = (D − ωC_L)^(−1) [(1 − ω)D + ωC_U] u^(n) + ω(D − ωC_L)^(−1) b,    (2.3)

where the parameter ω is the over-relaxation factor and the iteration matrix G is defined as

G = (D − ωC_L)^(−1) [(1 − ω)D + ωC_U].    (2.4)

We note that if the value of ω is equal to 1, then the SOR method and the GS method are identical. However, with an
optimal choice of ω, the rate of convergence of the SOR method can be increased significantly. The rate of
convergence of an iterative method is defined by

R(G) = −log ρ(G),    (2.5)

where ρ(G) is the spectral radius of the iteration matrix G.
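
For illustration, the following Python sketch implements the iteration (2.3) componentwise. It is a generic implementation written for this exposition, not the solver used in the paper, and the residual-based stopping test shown here is only one possible choice; updating each unknown in place with the latest available values is algebraically equivalent to the matrix form (2.3).

# Componentwise SOR sweep; a generic sketch, not the paper's solver.
import numpy as np

def sor(A, b, omega, u0=None, tol=1e-6, max_iter=2000):
    n = len(b)
    u = np.zeros(n) if u0 is None else u0.astype(float).copy()
    for it in range(1, max_iter + 1):
        for i in range(n):
            # Gauss-Seidel value for component i ...
            sigma = A[i, :i] @ u[:i] + A[i, i+1:] @ u[i+1:]
            gs = (b[i] - sigma) / A[i, i]
            # ... relaxed by the over-relaxation factor omega
            u[i] = (1 - omega) * u[i] + omega * gs
        # simple relative-residual check (one possible stopping test)
        if np.linalg.norm(b - A @ u) <= tol * np.linalg.norm(b):
            return u, it
    return u, max_iter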

The analytical value of the optimal ω can be found for certain linear systems, see [8]; Model Problem 1
described in section 3 is one example. If the linear system is generated by the central finite difference scheme, then
the optimal ω is proven to be

ω_opt = 2 / (1 + √(1 − ρ(B)²)),    (2.6)


where ρ(B) is the spectral radius of the Jacobi iteration matrix B = I − D^(−1)A. With this choice of the parameter,
the rate of convergence of the SOR method can be increased by several orders of magnitude. However, in general, the
optimal value of ω is not easy to obtain. We note that a general procedure [10] for finding the optimal value of ω
may be applied, but it is not efficient in the multi-layer scheme. Therefore, in this paper we introduce a heuristic
estimation formula for this scheme.
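
As a small illustration of (2.6), the following Python sketch computes ω_opt from a given ρ(B); the uniform-grid value ρ(B) = cos(πh) used in section 3 serves as an example.

# Formula (2.6) as a small helper.
import numpy as np

def omega_opt(rho_B):
    """Optimal SOR parameter from (2.6), valid for 0 <= rho_B < 1."""
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_B**2))

h = 1.0 / 20                          # uniform grid, as in MP1 of section 3
print(omega_opt(np.cos(np.pi * h)))   # about 1.7295, close to the 1.730 quoted in section 3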

3. Numerical Experiments
In this research, we perform numerous experiments using the multi-layer grid refinement method on the following
two model problems.

3.1.1. Model Problem 1 (MP1).


u_xx + u_yy = 0    (3.1)

over the region Ω = [0, 1] × [0, 1]. The boundary conditions are given by

u(0, y) = cos y,  u(1, y) = e^(−1) cos y,  u(x, 0) = e^(−x),  u(x, 1) = e^(−x) cos 1.    (3.2)

The exact solution is u = e^(−x) cos y.

3.1.2. Model Problem 2 (MP2).

u_xx + u_yy + D(x, y)u_x + E(x, y)u_y + u = G(x, y)    (3.3)

where D(x, y) = sin x sin y, E(x, y) = cos x cos y and G(x, y) = −sin x cos y,

over the region Ω = [0, 1] × [0, 1]. The boundary conditions are given by

u(0, y) = 0,  u(1, y) = sin 1 cos y,  u(x, 0) = sin x,  u(x, 1) = sin x cos 1.    (3.4)

The exact solution is u = sin x cos y.


In this paper, we use the stopping criterion

||u^(i) − u||_2 / ||u||_2 ≤ ε,

where u^(i) is the approximate solution at the i-th iteration of the iterative method and ε is a preset small tolerance.
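
Since MP1 and MP2 have known exact solutions, this test compares the iterate against the exact solution sampled at the grid points; a minimal helper might look as follows, where the name u_exact is introduced only for this sketch.

# Relative-error stopping test written out explicitly.
import numpy as np

def converged(u_i, u_exact, eps):
    return np.linalg.norm(u_i - u_exact) / np.linalg.norm(u_exact) <= eps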

As described earlier, the SOR method involves an over-relaxation parameter ω. The rate of convergence of an
iterative method is determined by the spectral radius of the iteration matrix: the smaller the spectral radius, the
faster the method converges. For the SOR method, the rate of convergence is very sensitive to the choice of ω. If
uniform grids are placed over a rectangular region, the spectral radius of the iteration matrix of the SOR method is

ρ(G_SOR) = [(ωρ(B) + √(ω²ρ(B)² − 4(ω − 1))) / 2]²,   if 0 < ω ≤ ω_opt,
ρ(G_SOR) = ω − 1,                                     if ω_opt ≤ ω < 2.    (3.5)

The spectral radius attains its absolute minimum at the optimal value ω_opt; see Figure 3.1.


Figure 3.1. Spectral radius vs. ω

The above figure shows the spectral radius versus ω for MP1 with uniform grid size h = 1/20 over the entire domain.
The optimal value of ω can be computed analytically by (2.6), in which ρ(B) = cos(πh). The spectral radius curve
is very steep around the optimal value of ω, which means that if the value of ω is even slightly off the optimal value,
the rate of convergence decreases significantly. For example, when ω = 1.7, the spectral radius of the SOR method is
0.8262 and the rate of convergence is 0.0829; the number of iterations required in this situation to obtain an accuracy
of 10^(−6) is 72. When ω = 1.730, the spectral radius is 0.73 and the rate of convergence is 0.137; the number of
iterations required to reach an accuracy of 10^(−6) is 44. We can see that the number of iterations is reduced by
about 40%.
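
The figures quoted in this example can be reproduced directly from (3.5) and (2.5); the short Python sketch below does so, under the assumption that (2.5) uses a base-10 logarithm and that the iteration count for a tolerance of 10^(−6) is estimated as roughly 6 / R(G).

# Reproducing the quoted numbers from (3.5) and (2.5).
import numpy as np

def rho_sor(omega, rho_B):
    """Spectral radius of the SOR iteration matrix, formula (3.5)."""
    omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_B**2))
    if omega <= omega_opt:
        return ((omega * rho_B + np.sqrt(omega**2 * rho_B**2 - 4 * (omega - 1))) / 2) ** 2
    return omega - 1.0

rho_B = np.cos(np.pi / 20)            # MP1, uniform grid h = 1/20
for omega in (1.7, 1.730):
    r = rho_sor(omega, rho_B)
    rate = -np.log10(r)
    # prints roughly: 0.8262 / 0.0829 / 72  and  0.7300 / 0.1367 / 44
    print(f"omega={omega}: rho={r:.4f}, rate={rate:.4f}, iterations ~ {6 / rate:.0f}")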

Figure 3.2. Iteration number vs. ω (H = 1/20, h = 1/40)

For the multi-layer grid refinement method, the analytical optimal value of ω for the SOR method is not known, so a
linear search is used to find the optimal value. Numerical experiments are conducted (see the figures) to produce
graphs similar to Fig. 3.1, which indicate the importance of using the optimal value of ω. We vary the value of ω
from 0 to 2 with a step size of 0.05 and record the number of iterations required. In the two-layer grid refinement
scheme for MP1 and MP2, the region of interest, [0.4, 0.6] × [0.4, 0.6], is covered with fine grids. Figures 3.2 and
3.3 present the iteration numbers vs. ω for MP1 with different sizes of H and h, where H is the mesh size of the
coarse grid and h is the mesh size of the fine grid. The maximum number of iterations has been set to 2000;
therefore, in the figures, the flat segments at the 2000 level indicate that more than 2000 iterations are needed. This
clearly displays the importance of the value of ω.


Figure 3.3. Iteration number vs. ω (H = 1/40, h = 1/80)

For simplicity, we consider two different grid sizes. Since the discretization has changed, the formula for the
spectral radius of B can no longer be the same as for a single uniform grid. Let H and h be the grid sizes for the
coarse grid domain and the fine grid domain, respectively, and let R = H/h be the ratio of the grid sizes. As
mentioned at the beginning of this paper, the second layer is placed in the center of the first layer in our research. A
heuristic estimate of ρ(B) for two different grid sizes is given by

ρ(B) = (1/(R√R)) cos(πH) + (1 − 1/(R√R)) cos(πh).    (3.6)

It is natural to estimate the spectral radius as a weighted combination of the spectral radii associated with the two
different discretizations. In this paper, we attempt to estimate the optimal value of ω for the SOR method used in
the multi-layer grid refinement scheme, and we propose that formula (2.6) remains valid in this scheme.
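
For illustration, the following Python sketch evaluates (3.6) and then applies (2.6). For H = 1/10 and h = 1/20 it reproduces ρ(B)_F = 0.9747 and ω_F ≈ 1.635, matching the corresponding Table 3.1 entries up to rounding; the tabulated ω_F = 1.6346 is consistent with applying (2.6) to the rounded value of ρ(B)_F.

# Heuristic estimate (3.6) followed by the optimal-omega formula (2.6).
import numpy as np

def rho_B_estimate(H, h):
    """Heuristic spectral-radius estimate (3.6) for the two-layer scheme."""
    R = H / h                              # ratio of coarse to fine grid size
    w = 1.0 / (R * np.sqrt(R))             # weight 1/(R*sqrt(R))
    return w * np.cos(np.pi * H) + (1 - w) * np.cos(np.pi * h)

def omega_F(H, h):
    rho = rho_B_estimate(H, h)
    return 2.0 / (1.0 + np.sqrt(1.0 - rho**2))   # formula (2.6)

print(round(rho_B_estimate(1/10, 1/20), 4), round(omega_F(1/10, 1/20), 4))   # 0.9747 1.6349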

The tables below list the values of ρ(B) obtained by actual computation via MATLAB, ρ(B)_M, and by the
estimation from (3.6), ρ(B)_F. The optimal values of ω are also displayed for three different cases: Opt.ω, found via
linear search; ω_M, obtained from (2.6) with the actual ρ(B)_M; and ω_F, obtained from (2.6) with the estimate
ρ(B)_F. We note that the linear search is conducted with a step size of 0.005 over the interval (0, 2).
Table 3.1. Spectral radius and the optimal omega for MP1

H      h       ρ(B)_M   ω_M      Opt.ω    ω_F      ρ(B)_F
1/10   1/20    0.9724   1.6218   1.6550   1.6346   0.9747
1/10   1/40    0.9914   1.7682   1.7950   1.7662   0.9912
1/10   1/80    0.9978   1.8747   1.8900   1.8586   0.9971
1/10   1/160   0.9994   1.9353   1.9450   1.9144   0.9990
1/20   1/40    0.9927   1.7843   1.8000   1.7984   0.9937
1/20   1/80    0.9976   1.8710   1.8850   1.8757   0.9978
1/20   1/160   0.9994   1.9323   1.9400   1.9279   0.9993
1/40   1/80    0.9981   1.8838   1.8900   1.8930   0.9984
1/40   1/160   0.9994   1.9308   1.9400   1.9330   0.9994

Table 3.2. Spectral radius and the optimal omega for MP2

H      h       ρ(B)_M   ω_M      Opt.ω    ω_F      ρ(B)_F
1/10   1/20    0.9735   1.6279   1.6550   1.6346   0.9747
1/10   1/40    0.9917   1.7720   1.7950   1.7662   0.9912
1/10   1/80    0.9978   1.8768   1.8900   1.8586   0.9971
1/10   1/160   0.9995   1.9364   1.9450   1.9144   0.9990
1/20   1/40    0.9930   1.7882   1.8050   1.7984   0.9937
1/20   1/80    0.9977   1.8732   1.8900   1.8757   0.9978
1/20   1/160   0.9994   1.9335   1.9450   1.9279   0.9993
1/40   1/80    0.9982   1.8860   1.8950   1.8930   0.9984
1/40   1/160   0.9994   1.9320   1.9400   1.9330   0.9994

The tables show that very good estimates are obtained for various combinations of mesh sizes H and h. These tables
also give positive evidence that (2.6) remains an excellent formula for obtaining the optimal value of ω.

To assess efficiency in terms of the number of iterations required, numerical experiments are also performed using
the estimated value from our proposed formula. Tables 3.3 and 3.4 show the iteration numbers when solving MP1
and MP2 with the SOR method, using the actual optimal omega Opt.ω from the linear search and ω_F from the
heuristic ρ(B)_F as the over-relaxation parameter, respectively. The runs with ω_F yield efficiency very comparable
to that of the actual optimal omegas.
Table 3.3. Iteration numbers of the SOR method with different omegas for MP1

H      h       Matrix size   Opt.ω    Iterations with Opt.ω   ω_F      Iterations with ω_F
1/10   1/20    97            1.6520   16                      1.6346   18
1/10   1/40    153           1.7920   28                      1.7662   36
1/20   1/40    417           1.8000   42                      1.7984   42
1/40   1/80    1729          1.8900   77                      1.8930   79
1/40   1/160   5457          1.9400   140                     1.9330   146

Table 3.4. Iteration numbers of the SOR method with different omegas for MP2

H      h       Matrix size   Opt.ω    Iterations with Opt.ω   ω_F      Iterations with ω_F
1/10   1/20    97            1.6520   19                      1.6346   21
1/10   1/40    153           1.7920   31                      1.7662   40
1/20   1/40    417           1.8000   35                      1.7984   35
1/40   1/80    1729          1.8900   86                      1.8930   86
1/40   1/160   5457          1.9400   172                     1.9330   186

We point out that ρ(B) will differ from the above estimate when the subdomain of interest is located somewhere
other than the center of the region. However, it is reasonable to place the region of interest at the center, so we focus
our research on the case where the second layer is placed in the center.

CONCLUSION

In a multi-layer grid refinement environment, the optimal value of the parameter ω for the robust stand-alone SOR
method is not known analytically. It has been shown earlier in the paper that the optimal value is critical to the
efficiency of the SOR method. This paper introduced an estimation formula that produces excellent estimates in this
environment. Numerical studies have been carried out to confirm the results.

REFERENCES

[1] Atkinson K., An Introduction to Numerical Analysis, John Wiley and Sons, 1988.
[2] Bronshtein I. N. and Semendyayev K. A., Handbook of Mathematics, Springer-Verlag, New York, 1997.
[3] Hageman L. and Young D. M., Applied Iterative Methods, Academic Press, New York, 1981.
[4] Jun Y. and Mai T. Z., Appl. Numer. Math., 2006, 56, 8.
[5] Mai T. Z. and Wu L., International Journal of Engineering Science and Technology, 2012, 4, 7.
[6] Saad Y., Iterative Methods for Sparse Linear Systems, PWS Publishing Company, 1996.
[7] Varga R. S., Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.
[8] Young D. M., Iterative Solution of Large Linear Systems, Academic Press, 1971.
[9] Young D. M. and Gregory R. T., A Survey of Numerical Mathematics, Addison-Wesley Educational Publishers Inc., 1972.
[10] Young D. M. and Mai T. Z., in: L. Hayes and D. Kincaid (Eds.), Iterative Methods for Large Linear Systems, Academic Press, San Diego, CA, 1990, 293-311.
