
SECTION 1.1

Matrix Multiplication, Flop Counts and Block (Partitioned) Matrices


Read this section. The following are a few points to note.

Suppose that A is an n × m matrix, A_j denotes the j-th column vector of A, and x is a column vector with m entries. That is,

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}, \quad A_j = \begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{bmatrix} \quad \text{and} \quad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.$$

Proposition 1.1.6 (page 3)


If b = Ax , then b is a linear combination of the columns of A.
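To make the proposition concrete, here is a small sketch in Python (the matrix and vector values are my own, not from the text): accumulating x_j times the j-th column of A produces the same b as the usual row-by-row inner products.

```python
# Sketch (values are illustrative, not from the text):
# b = Ax computed as a linear combination of the columns of A.
A = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]           # a 3 x 2 matrix, so n = 3, m = 2
x = [10.0, 100.0]          # a column vector with m = 2 entries

n, m = len(A), len(x)

# b = x_1 * A_1 + x_2 * A_2 + ... + x_m * A_m
b = [0.0] * n
for j in range(m):
    column_j = [A[i][j] for i in range(n)]   # A_j, the j-th column
    for i in range(n):
        b[i] += x[j] * column_j[i]

# The same vector via the row-by-row definition of Ax:
b_rowwise = [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]
assert b == b_rowwise
```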

Most algorithms involving matrices can be coded in a row-oriented or a column-oriented manner. This refers to the order in which entries in a matrix are accessed. See Algorithms (1.1.3) and (1.1.7) on pages 2 and 3. The first is a row-oriented matrix-vector multiply and the second is a column-oriented matrix-vector multiply. They perform exactly the same operations, but in a different order. In general, you want to code an algorithm to access data from a matrix in the same (linear) order in which the entries are stored in the computer memory.
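The two access orders can be sketched as follows (a pure-Python sketch of my own; the textbook's Algorithms 1.1.3 and 1.1.7 are the reference versions). Only the loop nesting differs.

```python
# Sketch of the two access orders for y = Ax (cf. Algorithms 1.1.3 and 1.1.7).

def matvec_row(A, x):
    """Row-oriented: finish each entry y[i] before moving to the next row."""
    n, m = len(A), len(x)
    y = [0.0] * n
    for i in range(n):          # outer loop over rows of A
        for j in range(m):
            y[i] += A[i][j] * x[j]
    return y

def matvec_col(A, x):
    """Column-oriented: accumulate x[j] times column j of A into all of y."""
    n, m = len(A), len(x)
    y = [0.0] * n
    for j in range(m):          # outer loop over columns of A
        for i in range(n):
            y[i] += A[i][j] * x[j]
    return y

A = [[1.0, 2.0], [3.0, 4.0]]
x = [5.0, 6.0]
# Same operations, different order, so the results agree exactly.
assert matvec_row(A, x) == matvec_col(A, x)
```

Which version is faster on a real machine depends on how the language stores matrices: column-oriented access matches Fortran's column-major storage, while row-oriented access matches the row-major storage used by C.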

Flop is an abbreviation for "floating-point operation". The traditional measure of the efficiency of a numerical algorithm (and a way to estimate its running time) is based on a flop count. The flop count is simply the number of flops that the algorithm would execute when used to solve a particular problem. Flop counts are usually given in terms of the problem "size", for example, the parameters n and m if an n × m matrix is involved.

Unfortunately there is no standard definition of a flop. In our textbook, it refers to any one floating-point operation (an add, subtract, multiply or divide). This differs from the definition in the first edition of this text, and from the definition used in the CSc 349A/B text, where the number of multiplicative operations (* and /) is counted separately from the number of additive (+ and -) operations.

Read the examples in the textbook regarding flop counts for doing a matrix/vector
multiplication and for the multiplication of two matrices.
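As a sketch of how such counts arise (the helpers below are my own, not from the textbook), the inner loop of a matrix-vector multiply executes one multiplication and one addition per iteration, giving 2nm flops; the analogous triple loop for multiplying two n × n matrices gives 2n³ flops.

```python
# Hypothetical helpers (not from the textbook): count the flops executed
# by the usual inner-product algorithms, under the convention that a flop
# is any one of +, -, *, /.

def matvec_flops(n, m):
    """Flops for y = Ax with A an n x m matrix."""
    flops = 0
    for i in range(n):
        for j in range(m):
            flops += 2          # one multiply A[i][j]*x[j], one add into y[i]
    return flops

def matmul_flops(n):
    """Flops for C = AB with A and B both n x n."""
    flops = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                flops += 2      # one multiply, one add into C[i][j]
    return flops

assert matvec_flops(4, 3) == 2 * 4 * 3    # 2nm
assert matmul_flops(10) == 2 * 10**3      # 2n^3
```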

For example, the flop count for the multiplication of two n × n matrices is 2n³ flops. Using the big-O notation, this means that matrix multiplication is an O(n³) operation. However, it is common when giving flop counts for algorithms in numerical linear algebra to include the constant in the term with the highest power of n. For example, the flop count for the Gaussian elimination algorithm for solving a system of n linear equations in n unknowns is a polynomial in n, namely

$$\frac{2}{3}n^3 + \frac{3}{2}n^2 - \frac{7}{6}n.$$

Thus, although this is an O(n³) flop count, it is more common to state that the Gaussian elimination algorithm requires about (2/3)n³ flops. This is a good estimate for large values of n, which is why the lower order terms in n are ignored.

One common use of a flop count is to estimate the relative execution time of an
algorithm for problems of different sizes. See Exercises 1.1.8 and 1.1.9 on page 5.
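This use of a flop count can be sketched in a few lines of Python, under the assumption (made above) that running time is roughly proportional to the number of flops executed:

```python
# Sketch: estimating relative running times from a flop count.
# Assumption: time is roughly proportional to flops executed.

def ge_flops(n):
    """The textbook's flop count for Gaussian elimination on an n x n system."""
    return 2 * n**3 / 3 + 3 * n**2 / 2 - 7 * n / 6

# Doubling n multiplies the work by about 2^3 = 8 once n is large,
# because the (2/3)n^3 term dominates the lower order terms.
ratio = ge_flops(2000) / ge_flops(1000)
assert abs(ratio - 8.0) < 0.05
```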

Block (partitioned) matrices

-- concept of partitioning a matrix into rectangular submatrices

Theorem 1.1.24 (page 9)


Partition matrices A, X and B as follows, where A is n × m, X is m × p and B is n × p:

$$A = \begin{bmatrix} A_{11} & \cdots & A_{1s} \\ \vdots & & \vdots \\ A_{r1} & \cdots & A_{rs} \end{bmatrix}, \quad X = \begin{bmatrix} X_{11} & \cdots & X_{1t} \\ \vdots & & \vdots \\ X_{s1} & \cdots & X_{st} \end{bmatrix}, \quad B = \begin{bmatrix} B_{11} & \cdots & B_{1t} \\ \vdots & & \vdots \\ B_{r1} & \cdots & B_{rt} \end{bmatrix},$$

where the submatrices A_{ik} are of order n_i × m_k with n_1 + ⋯ + n_r = n and m_1 + ⋯ + m_s = m; the submatrices X_{kj} are of order m_k × p_j with p_1 + ⋯ + p_t = p; and the submatrices B_{ij} are of order n_i × p_j.

Then B = AX if and only if

$$B_{ij} = \sum_{k=1}^{s} A_{ik} X_{kj}, \quad 1 \le i \le r, \quad 1 \le j \le t.$$

EXAMPLE

B, A and X could, for example, be partitioned as follows:

$$B = AX = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} = \begin{bmatrix} A_{11}X_{11} + A_{12}X_{21} & A_{11}X_{12} + A_{12}X_{22} \\ A_{21}X_{11} + A_{22}X_{21} & A_{21}X_{12} + A_{22}X_{22} \end{bmatrix}.$$

Partitioning into blocks does not affect the total flop count of an algorithm, but it
can greatly affect its performance. Read pages 10-11 with regard to transferring data
from main memory and parallel processing.
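The 2 × 2 blocking can be checked numerically with a short sketch (my own pure-Python helpers, not from the text): assembling the B_{ij} from products of submatrices gives the same result as an unpartitioned multiply.

```python
# Sketch of block matrix multiplication for a 2 x 2 blocking.
# The helpers matmul and matadd are hypothetical, not from the textbook.

def matmul(A, X):
    """Ordinary (unblocked) matrix product."""
    n, m, p = len(A), len(X), len(X[0])
    return [[sum(A[i][k] * X[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matadd(A, B):
    """Entrywise sum of two matrices of the same order."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Partition 2 x 2 matrices into four 1 x 1 blocks (s = 2 inner blocks).
A11, A12, A21, A22 = [[1.0]], [[2.0]], [[3.0]], [[4.0]]
X11, X12, X21, X22 = [[5.0]], [[6.0]], [[7.0]], [[8.0]]

# B_ij = A_i1 X_1j + A_i2 X_2j
B11 = matadd(matmul(A11, X11), matmul(A12, X21))
B12 = matadd(matmul(A11, X12), matmul(A12, X22))
B21 = matadd(matmul(A21, X11), matmul(A22, X21))
B22 = matadd(matmul(A21, X12), matmul(A22, X22))

# The assembled blocks agree with the unpartitioned product.
full = matmul([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]])
assert full == [[B11[0][0], B12[0][0]], [B21[0][0], B22[0][0]]]
```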

For a more extensive discussion, see Section 1.3 of Golub and Van Loan (which
includes, e.g., fast matrix multiplication based on block matrix multiplication).

SECTION 1.2
Notation for a system of n linear equations in n unknowns:

Ax = b
or
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\ \,\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}$$

We'll assume (for now) that the n × n coefficient matrix A is nonsingular:


see Theorem 1.2.3 (page 12).

For any square matrix A, the following conditions are equivalent:
- A⁻¹ exists
- there is no nonzero vector y such that Ay = 0
- the column vectors of A are linearly independent
- the row vectors of A are linearly independent
- det(A) ≠ 0
- given any vector b, there is exactly one vector x such that Ax = b
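As a small illustration (my own 2 × 2 example, not from the text), when det(A) ≠ 0 the system Ax = b has exactly one solution, computed here by Cramer's rule:

```python
# Sketch (2 x 2 case only): det(A) != 0 guarantees a unique solution of Ax = b.
# solve_2x2 is a hypothetical helper, not a textbook algorithm.

def solve_2x2(A, b):
    """Solve Ax = b for 2 x 2 A by Cramer's rule; requires A nonsingular."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("A is singular: no unique solution")
    x1 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [x1, x2]

A = [[2.0, 1.0], [1.0, 3.0]]      # det(A) = 5, so A is nonsingular
b = [5.0, 10.0]
x = solve_2x2(A, b)

# Check the solution by substituting back into Ax = b.
assert [A[0][0] * x[0] + A[0][1] * x[1],
        A[1][0] * x[0] + A[1][1] * x[1]] == b
```

(Cramer's rule is fine for illustrating uniqueness, but Gaussian elimination, the subject of this chapter, is the practical method for larger systems.)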

Several examples of problems in which systems of linear equations arise are given on
pages 13-19. Note that in some of these examples, the coefficient matrix A is sparse (see
page 21). Sparse matrices are discussed in Sections 1.6, 1.7 and 1.9.
