
Numerical Methods for Partial Differential Algebraic Systems of Equations

C.T. Miller
University of North Carolina
Scope

• Linear solvers
• Nonlinear solvers
• Algorithms
• Examples from mathematical geosciences
Approximation of PDAEs

Model Formulation → Discrete Approximation → Nonlinear Solver → Linear Solver

• Many model forms exist
• Each approximation component has a variety of methods
• Advantages and disadvantages exist for the choices made for each component
• Algorithmic considerations are important as well
Elliptic Equation Example

• Assume a second-order FDM approximation was used as the discrete operator
• Also assume that the domain is regularly shaped and discretized
• Solve the algebraic system using Gaussian elimination
• Consider the computational requirements to compute the approximation for a 100 x 100 grid in 2D and a 100 x 100 x 100 grid in 3D
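The discrete operator itself appeared on the slide as an image; assuming a Poisson-type problem ∇²u = f on a uniform grid with spacing h (an assumption consistent with the second-order FDM bullet above), the standard five-point stencil is

$$\frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j}}{h^2} = f_{i,j}$$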
Elliptic Equation Example

• Chief computational issues involve memory, CPU time, and, more generally, computational efficiency
• 2D computational requirements are 800 MB and 11 min on a 1 Gflop machine
• 3D computational requirements are 8 TB and 21 years!
• Lessons:
  1. Computational performance can be an issue even for relatively simple problems
  2. Scaling of methods and algorithms should be considered when choosing methods
  3. Need to consider and exploit special features possible for the model
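These figures follow from dense Gaussian elimination, assuming 8-byte reals and the ~(2/3)N³ flop count derived below:

$$\text{2D: } N = 100^2 = 10^4,\quad N^2 \times 8\,\text{B} = 800\,\text{MB},\quad \tfrac{2}{3}N^3 \approx 6.7\times10^{11}\,\text{flops} \approx 11\,\text{min at }10^9\,\text{flops/s}$$

$$\text{3D: } N = 100^3 = 10^6,\quad N^2 \times 8\,\text{B} = 8\,\text{TB},\quad \tfrac{2}{3}N^3 \approx 6.7\times10^{17}\,\text{flops} \approx 21\,\text{years}$$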
Elliptic Equation Example
Work Scaling

• Work depends upon the number and cost of operations
• Some useful equalities for assessing work are shown below
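The equalities themselves were an image on the original slide; the standard ones used for operation counting are the power sums (a hedged reconstruction):

$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \approx \frac{n^2}{2}, \qquad \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \approx \frac{n^3}{3}$$

Summing the elimination loop costs with these identities gives the familiar ~2n³/3 flop count for dense Gaussian elimination on an n × n system.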
Gaussian Elimination Work
Comparison of Computational Work
Elliptic Example Implications

• In 2D, storage is reduced from 800 MB to 16 MB and CPU time from 11 min to 0.2 sec---clearly acceptable
• In 3D, storage is reduced from 8 TB to 160 GB and CPU time from 21 years to 2.31 days---the work might be acceptable, but the storage is not based on current standards
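These banded-elimination numbers are consistent with a quick check, assuming bandwidth b, storage ≈ N(2b+1) reals, and work ≈ 2Nb² flops:

$$\text{2D: } b = 100:\quad 10^4 \times 201 \times 8\,\text{B} \approx 16\,\text{MB},\qquad 2 \times 10^4 \times 100^2 = 2\times10^{8}\,\text{flops} \approx 0.2\,\text{s}$$

$$\text{3D: } b = 10^4:\quad 10^6 \times (2\times10^4) \times 8\,\text{B} \approx 160\,\text{GB},\qquad 2 \times 10^6 \times 10^{8} = 2\times10^{14}\,\text{flops} \approx 2.3\,\text{days}$$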
Ponderables

• What are the implications for the scaling of 1D problems using a similar discrete operator?
• What would be the implications of needing to perform pivoting to reduce numerical round-off error?
• What guidance applies for the mapping of the local discrete operator to the global system?
• What simple observation would allow us to reduce the storage and work estimates for the banded case by an additional factor of 2?
Algebraic Solver Observations

• Even for a simple elliptic model, a compact discrete operator, and an intermediate discretization level---direct methods of solution are untenable
• Storage considerations are even more severe than work limitations
• The direct methods considered are relatively inefficient candidates for parallel processing, which is the chief strategy for increasing computational performance
Sparse Storage

• Our example problem, and many others that occur routinely in computational science, is not only banded and symmetric but also very sparse
• We took partial advantage of this for banded systems, but still had to live with fill-in
• Iterative methods endeavor to approximate the solution of linear systems while taking advantage of this sparse structure
• Special storage schemes are needed to do so
Sparse Storage Schemes

• Many schemes exist
• Store only the non-zero entries
• Must be able to reconstruct the initial matrix and perform common matrix-vector operations
• Some examples include primary storage, linked lists, and specific structure-based approaches
Primary Storage
Primary Storage Implications
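The contents of these two slides were images and did not survive extraction. As a minimal sketch, assuming "primary storage" refers to the coordinate (triplet) format, here is the scheme together with the matrix-vector product that iterative methods require; all names are illustrative:

```python
import numpy as np

# Primary (coordinate/triplet) storage: only the nonzero entries of A
# are kept, as parallel arrays of row index, column index, and value.
row = np.array([0, 0, 1, 1, 1, 2, 2])
col = np.array([0, 1, 0, 1, 2, 1, 2])
val = np.array([4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0])

def coo_matvec(row, col, val, x):
    """y = A x using only the stored nonzeros: O(nnz) work and storage."""
    y = np.zeros(len(x))
    for r, c, v in zip(row, col, val):
        y[r] += v * x[c]
    return y

print(coo_matvec(row, col, val, np.ones(3)))  # matches the dense A @ x
```

For the 2D elliptic example, the five-point operator has about 5N nonzeros, so this scheme needs roughly 5N values plus 2 x 5N indices, versus ~201N reals for the banded factorization, and it never suffers fill-in because no factorization is formed.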
Ponderables

• Show that the primary storage scheme meets our requirements for a valid approach
• What would be the implications of using primary storage for the elliptic example in 1D?
• What approaches might further reduce the storage required?
Iterative Solution Approaches

• Seek approaches that in general can operate on linear systems stored in a sparse format
• Two main classes exist: (1) stationary iterative methods, and (2) nonstationary iterative methods
• A primary contrast between direct and iterative methods is the approximate nature of the solution sought in iterative approaches
Stationary Iterative Method Example
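The example on this slide was an image; the sketch below assumes it was the Jacobi iteration, the canonical stationary method (a representative choice, not a recovered one):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k).

    A stationary method: the iteration operator is fixed across iterations.
    Converges when the spectral radius of D^{-1}(A - D) is below 1,
    e.g. for strictly diagonally dominant A.
    """
    D = np.diag(A)            # main diagonal of A (as a vector)
    R = A - np.diagflat(D)    # off-diagonal remainder
    x = x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Small diagonally dominant test system (1D elliptic flavor)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b, np.zeros(3))
```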
Stationary Iterative Methods

• Theorem: Conditions for the convergence and rate of convergence of this and related methods can be proven
• Similar related methods such as Gauss-Seidel and successive over-relaxation can converge much faster and have a similar computational expense per iteration, which is on the order of one sparse matrix-vector multiply
• These methods have special significance and use as preconditioners for non-stationary iterative methods and as the basis of multigrid methods
Conjugate Gradient Method

• Theorem: Convergence can be proven to occur in at most n iterations for SPD systems
• A sufficiently accurate solution can usually be obtained in many fewer iterations, depending upon the distribution of the eigenvalues of the matrix
• Preconditioning can greatly increase the rate of convergence
• PCG methods can be shown to converge optimally at a rate of n log(n)
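The algorithm itself appeared on the preceding slides as images; a minimal unpreconditioned sketch for SPD systems, with illustrative naming:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=None):
    """Unpreconditioned CG for a symmetric positive definite A.

    Each iteration costs one matrix-vector product plus a few vector
    operations, so sparse storage of A pays off directly.
    """
    n = b.shape[0]
    max_iter = max_iter or n      # finite-termination bound for SPD systems
    x = x0.copy()
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```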
Non-Symmetric Systems

• GMRES is a related Krylov subspace method for non-symmetric systems for which convergence can also be proven
• The expense of GMRES typically leads to simplifications of the general algorithm in the way of restarts
• Alternative Krylov-subspace methods, such as BiCGstab, have proven useful in practice, even though they are not amenable to proofs of convergence
• Suggested references: Kelley (SIAM, 1995) and Barrett et al. (Templates, SIAM, 1994)
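For completeness, a sketch of restarted GMRES via SciPy; the restart length of 30 and the test matrix are arbitrary illustrative choices, not from the slides:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Small non-symmetric sparse test system (convection-diffusion flavor)
n = 100
A = diags([-1.2, 2.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Restarted GMRES: the Krylov basis is rebuilt every 30 iterations
# to bound memory, at some cost in convergence speed.
x, info = gmres(A, b, restart=30)
print("converged" if info == 0 else f"info = {info}")
```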
Nonlinear Models

• Nonlinear models are very common
• Nonlinear algebraic problems result from discrete representations
• Solution requires an iterative approach, leading to increased complexity and expense
• Convergence issues are more difficult for nonlinear problems than for linear problems
• A few methods are commonly used
Picard Iteration

• Nonlinear iteration proceeds until convergence at each time step
• Theorem: Rate of convergence is linear
• For many problems of interest in hydrology, Picard iteration has proven to be robust
• Method is relatively cheap computationally per iteration and easy to implement
• Also known as fixed-point iteration, successive substitution, or nonlinear Richardson iteration
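The iteration on the first Picard slide was an image; a minimal fixed-point sketch for a generic problem x = G(x), where the linearized update G is problem-specific and assumed given:

```python
import numpy as np

def picard(G, x0, tol=1e-8, max_iter=100):
    """Fixed-point (Picard) iteration: x_{k+1} = G(x_k).

    Converges linearly when G is a contraction near the solution,
    matching the linear-rate theorem quoted above.
    """
    x = x0
    for k in range(max_iter):
        x_new = G(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Scalar demo: x = cos(x) has a unique fixed point near 0.739
x, iters = picard(np.cos, np.array([1.0]))
```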
Newton Iteration

• Theorem: Close to the solution, Newton iteration converges quadratically
• [J] may be expensive to compute or not accessible
• The ball of convergence of Newton’s method may be small
• Each nonlinear iteration requires the solution of a linear system of equations, which may be accomplished directly or, more commonly, iteratively---resulting in nested iteration
Newton Iteration

• If [J] cannot be computed analytically, it can be formed using a finite difference approximation
• If [J] is costly to compute, it can be reused over multiple iterations, which is known as the chord method
• Inexact Newton methods result when the iterative linear solution tolerance is functionally dependent upon the magnitude of f
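A minimal sketch tying these bullets together: Newton iteration with a forward-difference Jacobian and a direct linear solve (the slides’ own formulation was an image; all names here are illustrative):

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """Forward-difference approximation to J(x) = df/dx, column by column."""
    n = x.size
    fx = f(x)
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def newton(f, x0, tol=1e-10, max_iter=50):
    """Newton iteration: solve J(x_k) s = -f(x_k), set x_{k+1} = x_k + s."""
    x = x0.copy()
    for k in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x, k
        J = fd_jacobian(f, x)        # freezing J over iterations -> chord method
        s = np.linalg.solve(J, -fx)  # direct solve; an iterative solve here
        x += s                       # gives nested (inexact Newton) iteration
    return x, max_iter

# 2x2 test: intersection of a circle and a line
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
x, iters = newton(f, np.array([1.0, 0.0]))
```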
Newton Iteration/Line Search

• Accept the Newton direction but not the step size
• If the step size doesn’t produce a sufficient decrease in ||f||, reduce the magnitude of the step by 1/2 (Armijo’s rule) or via a local quadratic/cubic model
• Continue until a sufficient decrease is found
• Close to the solution, full Newton steps, and thus quadratic convergence, are expected
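A hedged sketch of the halving rule described above, as a globalized step inside the Newton loop; the sufficient-decrease constant 1e-4 is the conventional choice in Kelley (1995), not stated on the slides:

```python
import numpy as np

def armijo_step(f, x, s, alpha=1e-4, max_halvings=20):
    """Backtracking line search along the Newton direction s.

    Halve the step until ||f(x + t s)|| shows a sufficient decrease
    relative to ||f(x)|| (Armijo's rule).
    """
    norm0 = np.linalg.norm(f(x))
    t = 1.0
    for _ in range(max_halvings):
        if np.linalg.norm(f(x + t * s)) <= (1.0 - alpha * t) * norm0:
            return x + t * s   # sufficient decrease found
        t *= 0.5               # halve the step and retry
    return x + t * s           # fall through with the smallest step tried
```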
Algorithms

• MOL approaches---formal decoupling of spatial and temporal components
• Operator splitting methods---approximate the overall operator as a sum of operators acting on components of the original problem
• Adaptive methods in time, space (h, p, r, h-p), and space-time
Split-Operator Approaches
Sequential Split-Operator Approach
Split-Operator Approaches

• A variety of algorithms exist, with tradeoffs of complexity and accuracy
• Splitting error can range from O(Δt) to zero
• Allow combining methods well suited to individual components---hyperbolic and parabolic parts, linear and nonlinear parts, etc.
• Can lead to reductions in the overall size of the solve for any component and advantages for parallel algorithms
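A minimal sketch of first-order (Lie) sequential splitting for du/dt = A(u) + B(u), the simplest instance of the sequential approach named above; the two sub-steps are placeholders for, e.g., transport and reaction operators:

```python
import numpy as np

def lie_split_step(u, dt, step_A, step_B):
    """One sequential (Lie) splitting step for du/dt = A(u) + B(u).

    Advance with operator A alone over dt, then with B alone over dt.
    The splitting error is O(dt) per unit time for non-commuting A, B.
    """
    u = step_A(u, dt)   # e.g., hyperbolic/transport sub-step
    u = step_B(u, dt)   # e.g., parabolic/reaction sub-step
    return u

# Scalar demo: du/dt = a*u + b*u, each sub-step solved exactly
a, b = -1.0, -0.5
step_A = lambda u, dt: u * np.exp(a * dt)
step_B = lambda u, dt: u * np.exp(b * dt)

u, dt = 1.0, 0.1
for _ in range(10):
    u = lie_split_step(u, dt, step_A, step_B)
# here A and B commute, so the split answer matches exp((a + b) * 1.0) exactly
```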
Computation and Algorithms

Consider Poisson’s equation ∇²u = f on a cube of size N = n³ (a 64 x 64 x 64 grid):

Year  Method        Reference                 Storage  Flops
1947  GE (banded)   Von Neumann & Goldstine   n^5      n^7
1950  Optimal SOR   Young                     n^3      n^4 log n
1971  CG            Reid                      n^3      n^3.5 log n
1984  Full MG       Brandt                    n^3      n^3

• Advances in algorithmic efficiency rival advances in hardware architecture
• If n = 64, this implies an overall reduction in flops of ~16 million

Source: D. E. Keyes, Columbia University
Computation and Algorithms

[Figure: relative speedup vs. year. Source: D. E. Keyes, Columbia University]
Where to go past O(N)?

• Since O(N) is already optimal, there is nowhere further “upward” to go in efficiency, but one must extend optimality “outward”, to more general problems
• Hence, for instance, algebraic multigrid (AMG), obtaining O(N) in indefinite, anisotropic, or inhomogeneous problems

AMG Framework

[Figure: in R^n, error easily damped by pointwise relaxation vs. algebraically smooth error]

Choose coarse grids, transfer operators, and smoothers to eliminate these “bad” (algebraically smooth) components within a smaller-dimensional space, and recur.

Source: D. E. Keyes, Columbia University
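As a hedged aside in standard multigrid notation (not recovered from the slide), this prescription is summarized by the two-grid error-propagation operator

$$E_{TG} = S^{\nu_2}\,\bigl(I - P\,A_c^{-1}\,R\,A\bigr)\,S^{\nu_1}, \qquad A_c = R\,A\,P$$

where S is the smoother applied ν₁ times before and ν₂ times after the coarse-grid correction, and P and R are the interpolation and restriction operators; AMG constructs P, R, and the coarse space from the matrix entries alone, then applies this recursively.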
Computational Performance

• The current peak performer is the DOE’s BlueGene/L at LLNL, which has 131,072 processors and peaks at 367,000 GFLOPs
• Number 10 on the current list is Japan’s Earth Simulator, which has 5,200 processors, peaks at 40,960 GFLOPs, and was built in 2002
• Number 500 on the current list is a 1,028-processor 2.8 GHz Xeon IBM xSeries cluster, which peaks at 5,756.8 GFLOPs

TOP500 SUPERCOMPUTER SITES (http://www.top500.org/)
Richards’ Equation Formulation
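The formulation slides were equation images; as a hedged reconstruction, the mixed form of Richards’ equation (RE) referenced below is conventionally written

$$\frac{\partial \theta(\psi)}{\partial t} - \nabla \cdot \bigl[K(\psi)\,\nabla(\psi + z)\bigr] = 0$$

with θ the volumetric water content, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity, and z the vertical coordinate; the “mixed” label refers to keeping both θ and ψ in the formulation.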
Standard Solution Approach

• Mixed form of RE
• Arithmetic-mean relative permeabilities
• Analytical evaluation of closure relations
• Low-order finite difference or finite element methods in space
• Backward Euler approximation in time
• Modified Picard iteration for nonlinear systems
• Thomas algorithm for linear equation solution
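Since the low-order spatial discretizations above yield tridiagonal systems in 1D, the Thomas algorithm named in the last bullet is worth sketching (a standard formulation, not copied from the slides):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal.

    O(n) forward elimination plus back substitution; valid without
    pivoting for the diagonally dominant systems low-order schemes produce.
    """
    n = b.size
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-type test: -u'' = 1 on a small grid
n = 5
x = thomas(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n))
```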
Algorithm Advancements

• Spline closure relations
• Variable transformation approaches
• Mass-conservative formulation
• DAE/MOL time integration
• Spatially adaptive methods
• Nonlinear solvers
• Linear solvers
DAE/MOL Solution to RE
DAE/MOL RE

• Temporal truncation error comparison
• Mixed-form Newton iteration with line search
• Heuristic adaptive time stepping
• DASPK first- and fifth-order integration
• Reference: Tocci et al. (1997), AWR
SAMOL Algorithm
Infiltration Test Problem

• VG-Mualem psk relations
• Dune sand medium
• Drained to equilibrium
• First-kind boundary conditions
• Simulation time in days
SAMOL Simulation Profile
Comparison of RE Results
Computational and Algorithm Performance
Dissolution Fingering Example
Conservation Equations and Constraints
Simulation of Dissolution Fingering

• Two-phase flow and species transport
• Complexity in the flow field must be resolved
• Separation of time scales
• Adaptive methods in space are useful
Current Research Foci

• Locally conservative methods
• Higher-order methods in space and time
• Integral equation methods
• Multiscale methods
• Problem-solving environments
