
Chapter 2 – SOLUTION OF LINEAR EQUATIONS

Matrix Displacement Equations

- Standard form [k] {} = {F}

Where, [k] = stiffness Matrix

{} = displacement vector

{F} = Force vector in co-ordinate directions

Kij = force developed in co-ordinate i due to unit displacement in co-ordinate j.

Positive Definite Matrix

A symmetric matrix is said to be positive definite if all its eigenvalues are strictly positive (> 0).

i.e., a matrix A of size (n x n) is positive definite if, for any non-zero vector x = {x1, x2, …, xn}^T,

\[ x^{T} A x > 0 \]
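As a quick illustration (not part of the original notes), the eigenvalue criterion can be checked numerically; the 2 x 2 matrix below is an arbitrary example:

```python
import numpy as np

# Arbitrary symmetric test matrix (illustrative only)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# For a symmetric matrix, positive definiteness is equivalent to
# all eigenvalues being strictly positive.
eigenvalues = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix
print(eigenvalues)                    # approx. [2.382, 4.618] -> both > 0

# Quadratic-form check for one particular non-zero vector
x = np.array([1.0, -2.0])
print(x @ A @ x > 0)                  # True for a positive definite matrix
```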

(Figure: two-node frame element with six local co-ordinates, numbered 1–6 as listed below.)

A well-known example is that of a frame element having 2 nodes and 3 degrees of freedom per node.

1,4 – Axial co-ordinates

2,5 - Shear Co-ordinates

3,6 – Moment Co-ordinates

Special features of Matrix Displacement Equations

 The matrix has diagonal dominance and is positive definite, so there is no need to re-arrange the equations
to obtain diagonal dominance.
 The matrix is symmetric (obvious from Maxwell’s reciprocal theorem), so only the upper or lower triangular
elements need be formed; the rest can be obtained using symmetry.
 The matrices are banded in nature, i.e., the non-zero elements of the stiffness matrix are concentrated
near the diagonal of the matrix; elements away from the diagonal are zero (see the sketch below).
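A minimal sketch of these features, assuming a simple chain of axial springs as the model problem (the spring stiffness values below are arbitrary): the assembled global stiffness matrix comes out symmetric and banded (tridiagonal), with the largest entries on the diagonal.

```python
import numpy as np

def assemble_spring_chain(stiffnesses):
    """Assemble the global stiffness matrix of axial springs connected in a chain.

    Element e connects nodes e and e+1; its 2x2 stiffness k*[[1,-1],[-1,1]]
    is added into the global matrix.
    """
    n = len(stiffnesses) + 1                  # number of nodes / DOFs
    K = np.zeros((n, n))
    for e, k in enumerate(stiffnesses):
        K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0],
                                         [-1.0, 1.0]])
    return K

K = assemble_spring_chain([100.0, 200.0, 150.0, 120.0])
print(np.allclose(K, K.T))   # True: symmetric
print(K)                     # tridiagonal: non-zeros only next to the diagonal
```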
2.1 System of Linear Equations
Matrix displacement equations are linear simultaneous equations, normally of the form [A] {x} = {b}:

a11x1 + a12x2 + a13x3 +……..+ a1nxn = b1

a21x1 + a22x2 + a23x3 +……..+ a2nxn = b2

….

….

an1x1 + an2x2 + an3x3 +……..+ annxn = bn

which can be written as:


\[
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{Bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{Bmatrix}
=
\begin{Bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{Bmatrix}
\]

Algorithm to Solve:

A. Gauss Elimination Method

To solve [A] {x} = {b}, we reduce it to an equivalent system [U] {x} = {g}, in which U is upper
triangular. This system can be easily solved by a process of backward substitution.
We first carry out the pivotal operation with a11 as the pivot, eliminating x1 from rows 2 to n.
The first line/equation is maintained as it is.
For the equations below it:

\[ a_{ij}^{(1)} = a_{ij} - \frac{a_{i1}}{a_{11}}\, a_{1j} \qquad \text{and} \qquad b_{i}^{(1)} = b_{i} - \frac{a_{i1}}{a_{11}}\, b_{1}, \qquad \text{for } i, j = 2, \dots, n \]

\[
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} \\
\vdots & \vdots & \ddots & \vdots \\
0 & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)}
\end{bmatrix}
\begin{Bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{Bmatrix}
=
\begin{Bmatrix} b_{1} \\ b_{2}^{(1)} \\ \vdots \\ b_{n}^{(1)} \end{Bmatrix}
\]

For the pivotal operation on a_kk, no changes are made in the kth row; for the rows below the kth row,

\[ a_{ij}^{(k)} = a_{ij}^{(k-1)} - \frac{a_{ik}^{(k-1)}}{a_{kk}^{(k-1)}}\, a_{kj}^{(k-1)} \qquad \text{and} \qquad b_{i}^{(k)} = b_{i}^{(k-1)} - \frac{a_{ik}^{(k-1)}}{a_{kk}^{(k-1)}}\, b_{k}^{(k-1)}, \qquad \text{for } i, j = k+1, \dots, n \]

After (n-1) pivotal operations, the matrix equation is of the form

\[
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}^{(n-1)}
\end{bmatrix}
\begin{Bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{Bmatrix}
=
\begin{Bmatrix} b_{1} \\ b_{2}^{(1)} \\ \vdots \\ b_{n}^{(n-1)} \end{Bmatrix}
\]
This is an upper triangular system, and back substitution can be done to calculate the unknowns:

\[ x_{n} = \frac{b_{n}^{(n-1)}}{a_{nn}^{(n-1)}}, \qquad \text{and then} \qquad x_{i} = \frac{b_{i}^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_{j}}{a_{ii}^{(i-1)}} \qquad \text{for } i = n-1, n-2, \dots, 1 \]

This completes the Gauss elimination method.
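A compact Python sketch of this procedure (naive Gaussian elimination without row interchanges, which is adequate here because the stiffness matrix is positive definite and diagonally dominant); the function name and the small test system are illustrative only:

```python
import numpy as np

def gauss_elimination(A, b):
    """Solve A x = b by forward elimination followed by back substitution.

    Assumes the pivots a_kk stay non-zero (no row interchanges).
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: after step k, column k is zero below the diagonal.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back substitution on the upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_elimination(A, b))    # should match np.linalg.solve(A, b)
```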

B. Gauss-Jordan Elimination Method

Gauss-Jordan elimination is an algorithm that can be used to solve systems of linear equations and
to find the inverse of any invertible matrix. It relies upon the three elementary row operations one can
use on a matrix: interchanging two rows, multiplying a row by a non-zero scalar, and adding a multiple of one row to another row.

For any invertible matrix, A^-1 A = I, where I is the identity matrix, e.g. \( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \).

For a set of equations, [A] {x} = {b},
{x} = [A]^-1 {b}
 {x} = {b} once [A] has been reduced to I.
For this, we operate on the [A] matrix (applying the same row operations to {b}) until [A] becomes the
identity matrix; the transformed right-hand side is then the solution {x}.
\[
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} \\
\vdots & \vdots & \ddots & \vdots \\
0 & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)}
\end{bmatrix}
\begin{Bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{Bmatrix}
=
\begin{Bmatrix} b_{1} \\ b_{2}^{(1)} \\ \vdots \\ b_{n}^{(1)} \end{Bmatrix}
\]

is converted to

\[
\begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}
\begin{Bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{Bmatrix}
=
\begin{Bmatrix} b_{1}^{*} \\ b_{2}^{*} \\ \vdots \\ b_{n}^{*} \end{Bmatrix}
\qquad \text{and hence} \qquad
\begin{Bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{Bmatrix}
=
\begin{Bmatrix} b_{1}^{*} \\ b_{2}^{*} \\ \vdots \\ b_{n}^{*} \end{Bmatrix}
\]
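A minimal Python sketch of Gauss-Jordan elimination applied to the augmented matrix [A | b] (again assuming non-zero pivots; the test data are illustrative):

```python
import numpy as np

def gauss_jordan(A, b):
    """Solve A x = b by reducing the augmented matrix [A | b] to [I | x]."""
    n = len(b)
    aug = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])

    for k in range(n):
        aug[k] /= aug[k, k]                   # scale pivot row so the pivot becomes 1
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]  # eliminate column k above and below the pivot
    return aug[:, -1]                         # the last column now holds the solution

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_jordan(A, b))
```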

C. Gauss-Seidel Iteration Method


For a set of linear simultaneous equations normally of form [A] { x } = {b} which can be written
as
a11x1 + a12x2 + a13x3 +……..+ a1nxn = b1
a21x1 + a22x2 + a23x3 +……..+ a2nxn = b2
….
….
an1x1 + an2x2 + an3x3 +……..+ annxn = bn

can also be written as

\[ x_{1} = \frac{1}{a_{11}} \Big\{ b_{1} - \sum_{j \neq 1} a_{1j} x_{j} \Big\} \qquad \dots (1) \]

\[ x_{2} = \frac{1}{a_{22}} \Big\{ b_{2} - \sum_{j \neq 2} a_{2j} x_{j} \Big\} \qquad \dots (2) \]

and similarly,

\[ x_{k} = \frac{1}{a_{kk}} \Big\{ b_{k} - \sum_{j \neq k} a_{kj} x_{j} \Big\} \]

Steps:

1. Assign initial values to x1, x2, x3, …, xn.
2. Substitute the initial values into the R.H.S. of equation (1) to get a new value of x1.
3. Substitute the new value of x1 and the other old values of x to get a new x2.
4. Repeat the steps all over again.
5. Continue iterations until the new and old values no longer differ significantly.
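The steps above can be sketched in Python as follows; the tolerance, iteration limit and test system are assumptions made for illustration, and convergence relies on the diagonal dominance noted earlier:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Iteratively solve A x = b, updating each x_i in place with the newest values."""
    n = len(b)
    x = np.zeros(n)                            # step 1: initial values
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # steps 2-3: use already-updated x[:i] and old x[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        # step 5: stop when successive iterates no longer change significantly
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, b))
```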

2.2 Banded Matrices and Sparse Matrices


A matrix most of whose elements are zero is called a sparse matrix. Such matrices occur in many
applications when solving PDEs by numerical methods like FEM and FDM. Storing only the non-zero
entries and their locations saves a huge amount of computing memory.
In a banded matrix, all of the non-zero elements occur close to the main diagonal and are contained
within a band of small bandwidth, all remaining elements being zero. The stiffness matrices that we
come across in most FDM and FEM applications are symmetric and banded.
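As a brief illustration of storing only the non-zero entries and their locations, one possible choice (not prescribed by these notes) is SciPy's compressed sparse row format:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Small banded stiffness-like matrix with many zeros
A = np.array([[ 4.0, -1.0,  0.0,  0.0],
              [-1.0,  4.0, -1.0,  0.0],
              [ 0.0, -1.0,  4.0, -1.0],
              [ 0.0,  0.0, -1.0,  4.0]])

S = csr_matrix(A)              # compressed sparse row storage
print(S.nnz)                   # number of stored non-zero entries (10 instead of 16)
print(S.data)                  # the non-zero values
print(S.indices)               # their column locations
print(S.indptr)                # row pointers into data/indices
```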

2.3 Data Storage and Memory Optimization


The banded matrix has its elements defined on the main diagonal and on kl sub-diagonals adjacent to the main
diagonal. In solving problems containing such matrices, only the non-zero (band) elements are stored.
Various storage formats are available. The most common method is to store an n x n band matrix in a
2-dimensional array with kl + 1 columns and n rows.
For example, consider the matrix
\[
[A] = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} & 0 & 0 & 0 & 0 \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & 0 & 0 & 0 \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} & a_{36} & 0 & 0 \\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} & a_{47} & 0 \\
0 & a_{52} & a_{53} & a_{54} & a_{55} & a_{56} & a_{57} & a_{58} \\
0 & 0 & a_{63} & a_{64} & a_{65} & a_{66} & a_{67} & a_{68} \\
0 & 0 & 0 & a_{74} & a_{75} & a_{76} & a_{77} & a_{78} \\
0 & 0 & 0 & 0 & a_{85} & a_{86} & a_{87} & a_{88}
\end{bmatrix}_{n \times n}
\]

Here, number of sub-diagonals, kl = 3


The band width of the new matrix, say, nbw = 1 + kl = 4.
Here n x nbw is the order of the new (band-storage) matrix.

\[
[B] = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{22} & a_{23} & a_{24} & a_{25} \\
a_{33} & a_{34} & a_{35} & a_{36} \\
a_{44} & a_{45} & a_{46} & a_{47} \\
a_{55} & a_{56} & a_{57} & a_{58} \\
a_{66} & a_{67} & a_{68} & 0 \\
a_{77} & a_{78} & 0 & 0 \\
a_{88} & 0 & 0 & 0
\end{bmatrix}_{n \times nbw}
\]

In general, the pth diagonal of the original matrix is stored as the pth column of the new matrix, i.e., the
principal (1st) diagonal is stored as the 1st column.
The correspondence between the original matrix and the new matrix is given by

\[ a(i,\; j \geq i) = b(i,\; j - i + 1) \]

For example, consider the entry a24 of the matrix above. The new location of a24 is given by:

\[ b(2,\; 4 - 2 + 1) = b(2, 3) \]

The algorithm for the Gaussian elimination method can be re-written for a symmetric banded matrix
considering that
1. For the original matrix, a(i, j) = a(j, i) (symmetry), so only the upper half-band needs to be stored.
2. For the new matrix, the number of elements in the kth row is min(n - k + 1, nbw).
For example, the number of elements in the 6th row is min(8 - 6 + 1, 4) = 3.
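A short Python sketch of this band-storage scheme and index mapping (the matrix values and the helper function name are illustrative; only the main diagonal and the upper half-band of a symmetric matrix are kept):

```python
import numpy as np

def to_band_storage(A, nbw):
    """Store the upper half-band of a symmetric n x n matrix in an n x nbw array.

    Column p of the band array holds the p-th diagonal of A, i.e.
    band[i, j - i] = A[i, j] for i <= j < i + nbw (0-based indexing, which is
    equivalent to b(i, j - i + 1) = a(i, j) in the 1-based notation above).
    """
    n = A.shape[0]
    band = np.zeros((n, nbw))
    for i in range(n):
        for j in range(i, min(i + nbw, n)):
            band[i, j - i] = A[i, j]
    return band

# Symmetric banded example with kl = 1 sub-diagonal, so nbw = 2
A = np.array([[ 4.0, -1.0,  0.0,  0.0],
              [-1.0,  4.0, -1.0,  0.0],
              [ 0.0, -1.0,  4.0, -1.0],
              [ 0.0,  0.0, -1.0,  4.0]])
B = to_band_storage(A, nbw=2)
print(B)
# a(2, 3) of the full matrix (1-based) is found at b(2, 3 - 2 + 1) = b(2, 2),
# i.e. B[1, 1] in 0-based NumPy indexing.
print(A[1, 2], B[1, 1])
```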

2.4 Conjugate Gradient Method


The conjugate gradient method is an iterative technique for solving large systems of linear
equations [A] {x} = {b}, where the coefficient matrix [A] of size n x n is symmetric (A^T = A) and
positive definite (x^T A x > 0).
The idea is to minimise the residual by searching along a set of mutually conjugate (A-orthogonal)
directions. The conjugate set is generated from the gradient vectors: each search direction is constructed
by conjugation of the residuals, and each residual is orthogonal to the previous search directions as well
as to the previous residuals.

i. Starting at point x0: assume initial values.
ii. Gradient, g0 = A x0 - b
iii. Initial search direction (negative of the gradient, i.e. the residual), d0 = b - A x0 = -g0
iv. Coefficient, \( \alpha_{k} = \dfrac{g_{k}^{T} g_{k}}{d_{k}^{T} A\, d_{k}} \), k = 0, 1, 2, …
v. Calculate, x_{k+1} = x_k + \alpha_k d_k
vi. Calculate, g_{k+1} = g_k + \alpha_k A d_k
vii. Coefficient, \( \beta_{k} = \dfrac{g_{k+1}^{T} g_{k+1}}{g_{k}^{T} g_{k}} \)
viii. Calculate, d_{k+1} = -g_{k+1} + \beta_k d_k
ix. Repeat step (iv) to calculate \alpha_{k+1}
x. Repeat step (v) to calculate x_{k+2}

The solution converges when g_{k+1}^T g_{k+1} reaches a small enough value to be neglected. This method is
robust and normally converges within n iterations for an n x n matrix.
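Steps (i)-(x) can be sketched in Python as below, keeping the same sign convention g_k = A x_k - b and d_0 = -g_0 used above; the tolerance and the small test system are assumptions for illustration:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A by the conjugate gradient method."""
    n = len(b)
    if max_iter is None:
        max_iter = n                       # converges in at most n steps in exact arithmetic
    x = np.zeros(n)                        # (i)   starting point x0
    g = A @ x - b                          # (ii)  gradient g0
    d = -g                                 # (iii) first search direction d0
    for _ in range(max_iter):
        gg = g @ g
        if gg < tol:                       # stop once g^T g is negligible
            break
        alpha = gg / (d @ A @ d)           # (iv)   step length
        x = x + alpha * d                  # (v)    update solution
        g = g + alpha * (A @ d)            # (vi)   update gradient
        beta = (g @ g) / gg                # (vii)  conjugation coefficient
        d = -g + beta * d                  # (viii) new search direction
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(conjugate_gradient(A, b))
```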
