
CSCI 8314, Spring 2021

SPARSE MATRIX COMPUTATIONS

Class time : MW 1:00 – 2:15 pm


Room : Online via Zoom
Instructor : Yousef Saad

January 19, 2021


About this class: Objectives

Set 1: An introduction to sparse matrices and sparse matrix computations.
• Sparse matrices;
• Sparse matrix direct methods;
• Graph theory viewpoint; graph theory methods;

Set 2: Iterative methods and eigenvalue problems


• Iterative methods for linear systems
• Algorithms for sparse eigenvalue problems and the SVD
• Possibly: nonlinear equations

Set 3: Applications of sparse matrix techniques
• Applications of graphs; Graph Laplaceans; Networks ...;
• Standard Applications (PDEs, ..)
• Applications in machine learning
• Data-related applications
• Other instances of sparse matrix techniques

ä Please fill out (now if you can) this survey.

Short link URL: https://forms.gle/yiXjHGXrzkwaf2Ex9

Logistics:

ä Lecture notes and minimal information will be located here:

8314 at CSE-labs
www-users.cselabs.umn.edu/classes/Spring-2021/csci8314/

ä There you will find:

• Lecture notes, schedule of assignments/tests, class info
ä Canvas will contain the rest of the information: assignments,
grades, etc.

About lecture notes:

ä Lecture notes (like this first set) will be posted on the class
web-site – usually before the lecture.
ä Note: the version used in lectures may be formatted differently – but
the contents are the same.
ä Review them, if possible, to get some understanding before class.
ä Read the relevant section(s) in the texts or references provided.
ä Lecture note sets are grouped by topics (sections in the textbook)
rather than by lecture.
ä In the notes, numbered symbols such as - 1 indicate suggested easy
exercises or questions – often [not always] done in class.
ä Also: occasional practice exercises will be posted.
Matlab

ä We will often use matlab for testing algorithms.


ä Other documents will be posted in the matlab section of the class
web-site.
ä Also: I will post the matlab diaries used for the demos (if any).

CSCI 8314: SPARSE MATRIX COMPUTATIONS
GENERAL INTRODUCTION
• General introduction - a little history

• Motivation

• Resources

• What this course will cover


What this course is about

ä Solving linear systems and (to a lesser extent) eigenvalue problems
with matrices that are sparse.
ä Sparse matrices: matrices with mostly zero entries [details later]
ä Many applications of sparse matrices...
ä ... and we are seeing more with new applications everywhere

A brief history

Sparse matrices were identified as important early on – the origins
of the terminology are quite old. Gauss defined the first method for such
systems in 1823. Varga explicitly used the term 'sparse' in his 1962
book on iterative methods.
https://www-users.cs.umn.edu/~saad/PDF/icerm2018.pdf

ä Special techniques used for sparse problems coming from Partial
Differential Equations
ä One has to wait until the 1960s to see the birth of the general
technology available today
ä Graphs introduced as tools for sparse Gaussian elimination in
1961 [Seymour Parter]

ä Early work on reordering for banded systems, envelope methods
ä Various reordering techniques for general sparse matrices were introduced.
ä Minimal degree ordering [Markowitz - 1957] ...
ä ... later used in Harwell MA28 code [Duff] - released in 1977.
ä Tinney-Walker Minimal degree ordering for power systems [1967]
ä Nested Dissection [A. George, 1973]
ä SPARSPAK [commercial code, Univ. Waterloo]
ä Elimination trees, symbolic factorization, ...

History: development of iterative methods

ä 1950s up to 1970s : focus on “relaxation” methods


ä Development of 'modern' iterative methods took off in the mid-70s,
but...
ä ... The main ingredients were in place earlier [late 40s, early 50s:
Lanczos; Arnoldi ; Hestenes (a local!) and Stiefel; ....]
ä The next big advance was the push of ‘preconditioning’: in effect
a way of combining iterative and (approximate) direct methods –
[Meijerink and Van der Vorst, 1977]

History: eigenvalue problems

ä Another parallel branch was followed in sparse techniques for large
eigenvalue problems.
ä A big problem in the 1950s and 1960s: flutter of airplane wings.
This leads to a large (sparse) eigenvalue problem
ä Overlap between methods for linear systems and eigenvalue problems
[Lanczos, Arnoldi]

Resources

ä Matrix Market
http://math.nist.gov/MatrixMarket/

ä SuiteSparse site (formerly: the Florida collection)

https://sparse.tamu.edu/

ä SPARSKIT, etc. [SPARSKIT = old, written in Fortran; plus more
recent 'solvers']
http://www.cs.umn.edu/~saad/software

Resources – continued

Books on sparse direct methods:


ä Book by Tim Davis [SIAM, 2006]; see syllabus for info
ä Best reference [old, out-of-print, but still the best]:
• Alan George and Joseph W-H Liu, Computer Solution of Large
Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs, NJ,
1981.
ä Of interest mostly for references:
• I. S. Duff and A. M. Erisman and J. K. Reid, Direct Methods for
Sparse Matrices, Clarendon press, Oxford, 1986.
• Some coverage in Golub and van Loan [Johns Hopkins, 4th edition,
see chapters 10 to end]
Overall plan for this course

ä We will begin with sparse matrices in general: their origin, storage,
manipulation, etc.
ä Graph theory viewpoint
ä We will then spend some time on sparse direct methods
ä .. back to graphs: Graph Laplaceans and applications; Networks;
...
ä .. and then on eigenvalue problems and
ä ... iterative methods for linear systems
ä ... Plan is somewhat dynamic
ä ... at the end of semester: a few lectures given by you

SPARSE MATRICES
• See Chap. 3 of text

• See the “links” page on the class web-site

• See also the various sparse matrix sites.

• Introduction to sparse matrices

• Sparse matrices in matlab –


What are sparse matrices?

[Figure: pattern of a small sparse matrix]


ä Vague definition: matrix with few nonzero entries
ä For all practical purposes: an m × n matrix is sparse if it has
O(min(m, n)) nonzero entries.
ä This means roughly a constant number of nonzero entries per
row and column.
ä This definition excludes a large class of matrices that have O(log(n))
nonzero entries per row.
ä Other definitions use a slow growth of nonzero entries with respect
to n or m.



"..matrices that allow special techniques to take advantage of the
large number of zero elements." (J. Wilkinson)

A few applications which lead to sparse matrices:

Structural engineering, computational fluid dynamics, reservoir simulation,
electrical networks, optimization, Google PageRank, information
retrieval (LSI), circuit simulation, device simulation, .....



Goal of Sparse Matrix Techniques

ä To perform standard matrix computations economically, i.e., without
storing the zeros of the matrix.
Example: Adding two square dense matrices of size n requires
O(n²) operations. Adding two sparse matrices A and B requires
O(nnz(A) + nnz(B)) operations, where nnz(X) = number of nonzero
elements of a matrix X.
ä For typical Finite Element / Finite Difference matrices, the number of
nonzero elements is O(n).
Remark: A⁻¹ is usually dense, but L and U in the LU factorization
may be reasonably sparse (if a good technique is used).
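A minimal matlab sketch of these points (the small 2-D Laplacian from
gallery is just a convenient test matrix):

   A = gallery('poisson', 32);     % 1024 x 1024 sparse 2-D Laplacian, about 5 nonzeros/row
   nnz(A)                          % O(n) nonzero entries
   B = A + speye(size(A,1));       % sparse + sparse costs O(nnz(A) + nnz(B))
   [L, U] = lu(A);                 % LU factors stay fairly sparse (some fill-in)
   nnz(L) + nnz(U)
   nnz(inv(A))                     % the inverse is essentially full: about n² entries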



- 2 Look up the Cayley-Hamilton theorem if you do not know about
it.
- 3 Show that the inverse of a matrix A (when it exists) can be
expressed as a polynomial in A, where the polynomial is of degree
≤ n − 1.
- 4 When is the degree < n − 1? [Hint: look up the minimal
polynomial of a matrix]
- 5 What is the pattern of the inverse of a tridiagonal matrix? Of a
bidiagonal matrix?



Nonzero patterns of a few sparse matrices

• ARC130: unsymmetric matrix from a laser problem (A. R. Curtis, Oct. 1974)
• SHERMAN5: fully implicit black oil simulator, 16 × 23 × 3 grid, 3 unknowns
• PORES3: unsymmetric matrix from PORES
• BP_1000: unsymmetric basis from an LP problem (BP)



Types of sparse matrices

ä Two types of matrices: structured (e.g. SHERMAN5) and
unstructured (e.g. BP_1000)
ä The matrices PORES3 and SHERMAN5 are from oil reservoir
simulation. Often: 3 unknowns per mesh point (oil and water saturations,
pressure). Structured matrices.
ä 40 years ago reservoir simulators used rectangular grids.
ä Modern simulators: finer grids, more complex physics ⇒ harder and
larger systems. Also: unstructured matrices
ä A naive but representative challenge problem: a 100 × 100 × 100
grid + about 10 unknowns per grid point ⇒ N ≈ 10⁷ and nnz ≈
7 × 10⁸.
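A quick back-of-the-envelope check (assuming roughly 70 nonzeros per row,
e.g. a 7-point stencil coupling 10 unknowns per point; an illustrative
assumption, not a statement about any particular simulator):

   npts = 100^3;      % grid points
   N    = 10 * npts   % unknowns: 1e7
   nnzA = 70 * N      % nonzero entries: 7e8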



Solving sparse linear systems: existing methods

[Diagram:]
• General purpose (Ax = b): direct sparse solvers | iterative methods
(preconditioned Krylov)
• Specialized (e.g. −∆u = f + b.c.): fast Poisson solvers | multigrid
methods



Two types of methods for general systems:
ä Direct methods: based on sparse Gaussian elimination, sparse
Cholesky, ..
ä Iterative methods: compute a sequence of iterates which converge
to the solution – preconditioned Krylov methods..
Remark: These two classes of methods have always been in
competition.
ä 40 years ago solving a system with n = 10,000 was a challenge
ä Now you can solve this in a fraction of a second on a laptop.
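A small illustration of that claim in matlab (the random sparse system
below is an arbitrary construction, not guaranteed to be well conditioned,
but typically fine for a timing experiment):

   n = 10000;
   A = speye(n) + sprand(n, n, 5/n);   % about 6 nonzeros per row
   b = rand(n, 1);
   tic;  x = A \ b;  toc               % sparse direct solve: a fraction of a second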



ä Sparse direct methods made huge gains in efficiency. As a result
they are very competitive for 2-D problems.
ä 3-D problems lead to more challenging systems [inherent to the
underlying graph]
Difficulty:

• No robust ‘black-box’ iterative solvers.


• At issue: robustness is in conflict with efficiency.

ä Iterative methods are starting to use some of the tools of direct
solvers to gain 'robustness'



Consensus:
1. Direct solvers are often preferred for two-dimensional problems
(robust and not too expensive).
2. Direct methods lose ground to iterative techniques for three-
dimensional problems, and for problems with a large number of degrees
of freedom per grid point.



Sparse matrices in matlab

ä Matlab supports sparse matrices to some extent.


ä Can define sparse objects by conversion
A = sparse(X) ; X = full(A)

ä Can show pattern


spy(X)

ä Define the analogues of ones, eye:


speye(n,m), spones(pattern)



ä A few reordering functions are provided [will be studied in detail
later]
symrcm, symamd, colamd, colperm

ä Random sparse matrix generator:


sprand(S) or sprand(m,n,density)   (also sprandn(...))
ä Diagonal extractor-generator utility:
spdiags(A) , spdiags(B,d,m,n)

ä Other important functions:


spalloc(..) , find(..)
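A minimal demo script tying some of these functions together (the size
and density are arbitrary choices):

   n = 200;
   A = sprand(n, n, 0.01);     % random sparse matrix, about 1% density
   A = A + speye(n);           % make sure the diagonal is nonzero
   spy(A);                     % visualize the nonzero pattern
   d = spdiags(A);             % extract the nonzero diagonals as columns
   [i, j, v] = find(A);        % row/column indices and values of the nonzeros
   B = spalloc(n, n, 5*n);     % empty sparse matrix with room for 5n entries
   p = symrcm(A);              % reverse Cuthill-McKee reordering
   spy(A(p, p));               % pattern after the symmetric permutation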
Graph Representations of Sparse Matrices

ä Graph theory is a fundamental tool in sparse matrix techniques.

DEFINITION. A graph G is defined as a pair of sets G = (V, E)
with E ⊂ V × V . So G represents a binary relation. The
graph is undirected if the binary relation is symmetric. It is directed
otherwise. V is the vertex set and E is the edge set.

Example: Given the numbers 5, 3, 9, 15, 16, show the two
graphs representing the relations
R1: Either x < y or y divides x.
R2: x and y are congruent modulo 3. [mod(x,3) = mod(y,3)]
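One possible way to build these two graphs in matlab (a sketch; the loops
are naive but fine for five numbers):

   v = [5 3 9 15 16];   n = numel(v);
   A1 = zeros(n);  A2 = zeros(n);
   for i = 1:n
     for j = 1:n
       if i ~= j
         A1(i,j) = (v(i) < v(j)) || (mod(v(i), v(j)) == 0);  % R1: x < y or y divides x
         A2(i,j) = (mod(v(i),3) == mod(v(j),3));             % R2: congruent modulo 3
       end
     end
   end
   plot(digraph(A1))   % R1 is not symmetric: directed graph
   plot(graph(A2))     % R2 is symmetric: undirected graph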



ä Adjacency Graph G = (V, E) of an n × n matrix A :
• Vertices V = {1, 2, . . . , n}.
• Edges E = {(i, j) | a_ij ≠ 0}.
ä Often self-loops (i, i) are not represented [because they are
always there]
ä Graph is undirected if the matrix has a symmetric structure:
a_ij ≠ 0 iff a_ji ≠ 0.
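A small matlab sketch of this correspondence (the 4 × 4 pattern below is
arbitrary): build the edge list of the adjacency graph and drop the
self-loops:

   A = sparse([1 1 2 3 3 4 4], [1 2 2 1 3 3 4], 1, 4, 4);  % arbitrary 4 x 4 pattern
   [i, j] = find(A);          % (i,j) pairs with a_ij nonzero
   edges = [i j];
   edges = edges(i ~= j, :)   % drop the self-loops (i,i)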



Example: (directed graph)
[4 × 4 nonsymmetric nonzero pattern and its directed adjacency graph
on vertices 1, 2, 3, 4]

Example: (undirected graph)
[4 × 4 symmetric nonzero pattern and its undirected adjacency graph
on vertices 1, 2, 3, 4]
- 6 Adjacency graph of:
[nonzero pattern of a small sparse matrix A, shown in the original slide
but not reproduced in this text version]

- 7 Graph of a tridiagonal matrix? Of a dense matrix?


- 8 Recall what a star graph is. Show a matrix whose graph is a
star graph. Consider two situations: Case when center node is labeled
first and case when it is labeled last.



ä Note: Matlab now has a graph function.
ä G = graph(A) creates adjacency graph from A
ä G is a matlab class.
ä G.Nodes will show the vertices of G
ä G.Edges will show its edges.
ä plot(G) will show a representation of the graph
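A quick sketch of these calls (the random symmetric test matrix is an
arbitrary choice):

   A = sprandsym(10, 0.3) ~= 0;    % random symmetric sparse pattern
   G = graph(A, 'OmitSelfLoops');  % adjacency graph, ignoring self-loops
   G.Nodes                         % table of vertices
   G.Edges                         % table of edges
   plot(G)                         % draw the graph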



- 9 Do the following:

• Load the matrix 'Bmat.mat' located on the class web-site (see the
'matlab' folder)
• Visualize the pattern (spy(B)) + find: number of nonzero elements,
size, ...
• Generate the graph – without self-edges:
G = graph(B,'OmitSelfLoops')

• Plot the graph –


• $1M question: Any idea on how this plot is generated?

