
Mathematics for Physicist

Adhi Harmoko Saputro


Numeric Linear Algebra
Linear Systems: Gauss Elimination
Linear Systems
 A linear system of n equations in n unknowns x1, …, xn has the form
a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
…
an1 x1 + an2 x2 + … + ann xn = bn
Linear Systems
 Using matrix multiplication, the system can be written as
Ax = b
 where A = [ajk] is the n × n coefficient matrix and x and b are the column vectors of the unknowns xj and of the right-hand sides bj.
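As a quick illustration (not from the slides), such a system can be entered and solved in MATLAB with the backslash operator, which itself applies Gaussian elimination with partial pivoting; the data here are illustrative.

% Solve a small system A*x = b with the backslash (mldivide) operator
A = [3 4 -2; 4 9 -3; -2 -3 7];   % illustrative 3 x 3 coefficient matrix
b = [2; 8; 10];                  % illustrative right-hand side
x = A\b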
Gauss Elimination
 The standard method for solving linear systems is a systematic process of elimination that reduces the system to (upper) triangular form; the triangular system is then solved easily by back substitution, starting from the last equation and working upward.
Partial Pivoting
 At each elimination step the pivot a(j,j) must be nonzero; in partial pivoting, the row whose entry in column j has the largest absolute value (among rows j, …, n) is swapped into the pivot position. This avoids division by zero and reduces the growth of round-off errors. A sketch of the row-selection step is shown below.
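A minimal sketch of that row-selection step, assuming the same variable names as the Gauss elimination code below (augmented matrix a with m rows, elimination stage j):

% Partial pivoting at stage j: bring the largest pivot candidate into row j
[~, r] = max(abs(a(j:m, j)));    % largest absolute entry in column j, rows j..m
r = r + j - 1;                   % convert to an index into the full matrix
if r ~= j
    a([j r], :) = a([r j], :);   % swap rows j and r
end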
Gauss Elimination
Matlab Code
% Gauss elimination with row interchange on zero pivots, for the augmented matrix a = [A b]
[m,n]=size(a);                  % n = m+1 for an augmented system
for j=1:m-1
    if a(j,j)==0                % zero pivot: swap with a later row
        for z=j+1:m
            if a(z,j)~=0
                t=a(j,:); a(j,:)=a(z,:); a(z,:)=t;
                break
            end
        end
    end
    for i=j+1:m                 % eliminate column j below the pivot
        a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
    end
end
x=zeros(1,m);
for s=m:-1:1                    % back substitution
    c=0;
    for k=s+1:m
        c=c+a(s,k)*x(k);
    end
    x(s)=(a(s,n)-c)/a(s,s);
end
Matlab Code
Matrix Input
3 4 -2 2 2
4 9 -3 5 8
-2 -3 7 6 10
1 4 6 7 2
Triangular Matrix
3.0000 4.0000 -2.0000 2.0000 2.0000
0 3.6667 -0.3333 2.3333 5.3333
0 0 5.6364 7.5455 11.8182
0 0 0 -4.6129 -17.0323
Solution
-2.1538
-1.1538
-2.8462
3.6923
LU-Factorization
 An LU-factorization of a given square matrix A is of the form

A = LU

 where L is lower triangular and U is upper triangular
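As a cross-check (not part of the slides), MATLAB's built-in lu computes such a factorization; because it pivots rows for stability, it also returns a permutation matrix P, so that P*A = L*U.

% Built-in LU factorization with partial pivoting: P*A = L*U
A = [3 5 2; 0 8 2; 6 2 8];       % the matrix decomposed in the Doolittle example below
[L, U, P] = lu(A);
norm(P*A - L*U)                  % close to zero (exact up to round-off)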


Doolittle’s Method
 In Doolittle’s method the factor L is unit lower triangular (ones on its main diagonal) and U is upper triangular; the entries of L and U are obtained by equating the entries of A and LU row by row.
 Solve the system using Doolittle’s method.
Matlab Code
% Doolittle factorization A = L*U with unit diagonal in L
[m,n]=size(A);
U=zeros(m);
L=zeros(m);
for j=1:m
    L(j,j)=1;              % L has ones on its main diagonal
end
for j=1:m
    U(1,j)=A(1,j);         % first row of U equals first row of A
end
for i=2:m
    for k=1:i-1            % sub-diagonal entries of row i of L
        s1=0;
        for p=1:k-1
            s1=s1+L(i,p)*U(p,k);
        end
        L(i,k)=(A(i,k)-s1)/U(k,k);
    end
    for k=i:m              % entries of row i of U
        s2=0;
        for p=1:i-1
            s2=s2+L(i,p)*U(p,k);
        end
        U(i,k)=A(i,k)-s2;
    end
end
Matlab Code
The matrix to be decomposed is
A =
3 5 2
0 8 2
6 2 8

The Upper Triangular Matrix is
U =
3 5 2
0 8 2
0 0 6

The Lower Triangular Matrix is
L =
1 0 0
0 1 0
2 -1 1
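A quick check of the factors shown above (a sketch, not from the slides): the product L*U should reproduce A exactly.

% Verify the Doolittle factors printed above
A = [3 5 2; 0 8 2; 6 2 8];
L = [1 0 0; 0 1 0; 2 -1 1];
U = [3 5 2; 0 8 2; 0 0 6];
norm(L*U - A)     % returns 0: L*U reproduces A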
Cholesky’s Method
 For a symmetric, positive definite matrix A, the system Ax = b can be solved using the factorization
A = L Lᵀ
 where L is lower triangular (Cholesky’s factorization); the system is then solved by one forward and one back substitution.
Matlab Code
T = chol(A);
Cholesky factorization.
chol(A) uses only the diagonal and upper triangle of A.
The lower triangle is assumed to be the (complex conjugate)
transpose of the upper triangle. If A is positive definite, then
R = chol(A) produces an upper triangular R so that R'*R = A.
If A is not positive definite, an error message is printed.
Matlab Code
The matrix to be decomposed is
A=
4 2 14
2 17 -5
14 -5 83
The Cholesky factor T (upper triangular, with T'*T = A) is
T=
2 1 7
0 4 -3
0 0 5
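A minimal sketch (right-hand side b assumed for illustration) of how this factor is used to solve Ax = b: one forward substitution with T' and one back substitution with T.

% Solve A*x = b via the Cholesky factor T (T'*T = A)
A = [4 2 14; 2 17 -5; 14 -5 83];
b = [20; 14; 92];        % assumed right-hand side, chosen so that x = [1; 1; 1]
T = chol(A);
y = T' \ b;              % forward substitution with the lower triangular T'
x = T  \ y;              % back substitution with the upper triangular T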
Gauss–Jordan Elimination
 Instead of stopping at triangular form, Gauss–Jordan elimination also eliminates the entries above the pivots, reducing the augmented matrix to diagonal (reduced row echelon) form so that the solution can be read off directly, without back substitution. A MATLAB implementation and a built-in alternative are shown below.
Matlab Code
% Gauss-Jordan elimination on the augmented matrix a = [A b]
[m,n]=size(a);
for j=1:m-1
    if a(j,j)==0                % zero pivot: swap with a later row
        for z=j+1:m
            if a(z,j)~=0
                t=a(j,:); a(j,:)=a(z,:); a(z,:)=t;
                break
            end
        end
    end
    for i=j+1:m                 % eliminate below the pivot
        a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
    end
end
for j=m:-1:2                    % eliminate above the pivots
    for i=j-1:-1:1
        a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
    end
end
x=zeros(1,m);
for s=1:m                       % normalize rows and read off the solution
    a(s,:)=a(s,:)/a(s,s);
    x(s)=a(s,n);
end
Matlab Code
Matrix Input
3 4 -2 2 2
4 9 -3 5 8
-2 -3 7 6 10
1 4 6 7 2
Reduced Matrix
1.0000 0 0 0 -2.1538
0 1.0000 0 0 -1.1538
0 0 1.0000 0 -2.8462
0 0 0 1.0000 3.6923
Solution
-2.1538
-1.1538
-2.8462
3.6923
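For comparison (not part of the slides), MATLAB's built-in rref reduces the augmented matrix to the same reduced row echelon form, so the solution can be read from its last column:

% Gauss-Jordan via the built-in reduced row echelon form
Ab = [3 4 -2 2 2; 4 9 -3 5 8; -2 -3 7 6 10; 1 4 6 7 2];   % augmented matrix from the slide
R = rref(Ab);
x = R(:, end)            % matches the solution above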
Least Squares Method
 In curve fitting we are given n points (pairs of numbers) (x1, y1), …, (xn, yn) and we want to determine a function f(x) such that
f(x1) ≈ y1, …, f(xn) ≈ yn
 approximately.
 The type of function (for example, polynomials, exponential
functions, sine and cosine functions) may be suggested by the
nature of the problem (the underlying physical law, for instance),
and in many cases a polynomial of a certain degree will be
appropriate.
Approximate Fitting
 The four points
(-1.3, 0.103), (-0.1, 1.099), (0.2, 0.808), (1.3, 1.897)
 correspond to the interpolation polynomial
f(x) = x^3 - x + 1
Method of Least Squares
 The straight line
y = a + bx
 should be fitted through the given points (x1, y1), …, (xn, yn) so that the sum of the squares of the vertical distances of the points from the line is a minimum.
 Setting the partial derivatives of this sum with respect to a and b equal to zero gives the normal equations (summation from 1 to n):
a n + b Σ xj = Σ yj
a Σ xj + b Σ xj^2 = Σ xj yj
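A minimal sketch (using the four data points from the cubic example above, signs included) of assembling and solving these 2 × 2 normal equations in MATLAB:

% Normal equations for the straight line y = a + b*x
x = [-1.3 -0.1 0.2 1.3]';
y = [0.103 1.099 0.808 1.897]';
n = numel(x);
N = [n      sum(x);
     sum(x) sum(x.^2)];
r = [sum(y); sum(x.*y)];
ab = N \ r               % ab(1) = a ~ 0.9601, ab(2) = b ~ 0.6670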
Example
Matlab Code
fitobject = fit(x,y,fitType)

fitobject = fit(x,y,fitType) creates the fit to the data in x and y with the model
specified by fitType.

Library Model Name Description


'poly1' Linear polynomial curve
'poly11' Linear polynomial surface
'poly2' Quadratic polynomial curve
'linearinterp' Piecewise linear interpolation
'cubicinterp' Piecewise cubic interpolation
'smoothingspline' Smoothing spline (curve)
'lowess' Local linear regression (surface)
Matlab Code
x = [-1.3 -0.1 0.2 1.3]';
y = [0.103 1.099 1.099 1.897]';

fitobject = fit(x,y,'poly1');

Result
Linear model Poly1:
fitobject(x) = p1*x + p2
Coefficients (with 95% confidence bounds):
p1 = 0.6819 (0.4202, 0.9436)
p2 = 1.032 (0.7901, 1.275)
Curve Fitting by Polynomials of Degree m
 Our method of curve fitting can be generalized from a polynomial y = a + bx to a polynomial of degree m,
p(x) = b0 + b1 x + … + bm x^m
 where m ≤ n - 1.
 Then q takes the form
q = Σ (yj - p(xj))^2    (summation over j from 1 to n)
 and depends on the m + 1 parameters b0, …, bm.


Curve Fitting by Polynomials of Degree m
 We then have the m + 1 conditions
∂q/∂b0 = 0, …, ∂q/∂bm = 0
 which give a system of m + 1 normal equations.
 In the case of a quadratic polynomial
p(x) = b0 + b1 x + b2 x^2
Curve Fitting by Polynomials of Degree m
 the normal equations are (summation from 1 to n)
b0 n       + b1 Σ xj    + b2 Σ xj^2  = Σ yj
b0 Σ xj    + b1 Σ xj^2  + b2 Σ xj^3  = Σ xj yj
b0 Σ xj^2  + b1 Σ xj^3  + b2 Σ xj^4  = Σ xj^2 yj
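A minimal sketch (using the data of the example that follows) of assembling and solving these 3 × 3 normal equations; the result agrees with the fit coefficients shown on the later slide (0.2143, -1.414, 5.114):

% Quadratic least squares via the normal equations
x = [0 2 4 6 8]';
y = [5 4 1 6 7]';
S = @(k) sum(x.^k);      % power sums of the x data
N = [numel(x) S(1) S(2);
     S(1)     S(2) S(3);
     S(2)     S(3) S(4)];
r = [sum(y); sum(x.*y); sum(x.^2 .* y)];
b = N \ r                % b = [b0; b1; b2] ~ [5.114; -1.414; 0.2143]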
Example
Matlab Code
x = [0 2 4 6 8]';
y = [5 4 1 6 7]';

fitobject = fit(x,y,'poly2');

Result
Linear model Poly2:
fitobject(x) = p1*x^2 + p2*x + p3
Coefficients (with 95% confidence bounds):
p1 = 0.2143 (-0.3355, 0.7641)
p2 = -1.414 (-6.001, 3.172)
p3 = 5.114 (-2.63, 12.86)
Matrix Eigenvalue
 An eigenvalue of an n × n matrix A is a scalar λ such that Ax = λx for some nonzero vector x (an eigenvector of A).
 The eigenvalues are the roots of the characteristic polynomial det(A - λI) = 0, which has degree n.
Numerics for Matrix Eigenvalues
 Eigenvalues cannot, in general, be determined exactly by a finite process, because they are the roots of the characteristic polynomial, which has degree n.
 Numerical methods must therefore mainly use iteration.
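A minimal sketch (not from the slides) of the simplest such iteration, the power method, which converges to the eigenvalue of largest magnitude when the matrix has a dominant eigenvalue; the test matrix is the one used in the MATLAB example below.

% Power method: repeatedly apply A and normalize to approximate the dominant eigenpair
A = [1.0000 0.5000 0.3333 0.2500
     0.5000 1.0000 0.6667 0.5000
     0.3333 0.6667 1.0000 0.7500
     0.2500 0.5000 0.7500 1.0000];
x = ones(4,1);           % arbitrary nonzero starting vector
for k = 1:100
    x = A*x;
    x = x / norm(x);     % normalize to keep the iterates bounded
end
lambda = x' * A * x      % Rayleigh quotient, approximately the largest eigenvalue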
Matlab Code
e = eig(A)
returns a column vector containing the eigenvalues of square matrix A.

[V,D] = eig(A)
returns diagonal matrix D of eigenvalues and matrix V whose columns are
the corresponding right eigenvectors, so that A*V = V*D
Matlab Example
A = [1.0000 0.5000 0.3333 0.2500
0.5000 1.0000 0.6667 0.5000
0.3333 0.6667 1.0000 0.7500
0.2500 0.5000 0.7500 1.0000];
e = eig(A);
[V,D] = eig(A);
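As a quick sanity check (a sketch, not part of the slides), the factors returned above should satisfy the defining relation A*V = V*D up to round-off:

% Residual of the eigen-decomposition; should be of the order of machine precision
norm(A*V - V*D)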
Thank You
