Methods of Applied Mathematics, Francis B.
APPLIED MATHEMATICS
Dr Jad Matta
“I do hereby attest that I am the sole author of this report and that its contents are only
the result of my reading of the above mentioned textbook.”
Table of Contents
o Gauss-Jordan reduction
o Linear combination
o Matrix Multiplication
o Properties of inverse matrices
o Determinants of small and Large matrices
o The definition of determinant
o Equivalent form of invertibility
o Vectors in Euclidean n space
o The Cauchy-Schwarz inequality
o Cross Product of Vectors in R^3
o Equation of a plane in R^3
o Linear independence, spanning and bases in R^n
o Subspaces in R^n
o Eigenvectors and Eigenvalues
Calculus of Variations
Integral equations
Gauss-Jordan reduction
The matrix above gives an idea of what we want. Notice that the staircase line drawn
through the matrix has all entries below it equal to zero. The entries marked with
a ∗ can take on any value.
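As an illustrative sketch only (the function name rref, the partial-pivoting strategy, and the example matrix are my own, not from the textbook), Gauss-Jordan reduction to reduced row echelon form can be carried out in Python/NumPy roughly as follows:

import numpy as np

def rref(A, tol=1e-12):
    """Reduce a matrix to reduced row echelon form by Gauss-Jordan elimination."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # choose the largest entry in this column as the pivot (partial pivoting)
        p = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[p, col]) < tol:
            continue                               # no pivot in this column
        A[[pivot_row, p]] = A[[p, pivot_row]]      # swap rows
        A[pivot_row] /= A[pivot_row, col]          # scale the pivot row so the pivot is 1
        for r in range(rows):                      # eliminate the column everywhere else
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

# example: augmented matrix of a small linear system
M = np.array([[1., 2., 1., 4.],
              [2., 1., 1., 3.],
              [1., 1., 2., 5.]])
print(rref(M))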
Matrix multiplication is not commutative.
The most important difference between the multiplication of matrices and the
multiplication of real numbers is that real numbers x and y always commute (that
is, xy = yx), but the same is not true for matrices.
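For instance (the two matrices are an illustration of mine), a quick NumPy check shows that AB and BA can differ:

import numpy as np

A = np.array([[1, 2],
              [0, 1]])
B = np.array([[1, 0],
              [3, 1]])

print(A @ B)   # [[7, 2], [3, 1]]
print(B @ A)   # [[1, 2], [3, 7]]
# the two products differ, so matrix multiplication is not commutative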
Theorem 3.4.6.
Left distributive law.
Let A, B and C be matrices of the right size for matrix multiplication.
Then A(B+C)=AB+AC.
Right distributive law.
Let A, B and C be matrices of the right size for matrix multiplication.
Then (B+C)A=BA+CA.
Associativity of matrix multiplication.
Let A, B and C be matrices of the right size for matrix multiplication.
Then A(BC)=(AB)C.
Definition: The inverse of a matrix. Let A be a square matrix. If there exists a matrix B
such that AB = BA = I, then B is called the inverse of A and it is written A−1.
Definition: Matrix invertibility. A matrix A is invertible if it has an inverse, that is, if the
matrix A−1 exists.
Properties of the inverse of a Matrix
Uniqueness of Inverse.
A square matrix A can have no more than one inverse.
Definition 3.7.5. Matrix singularity.
A square matrix is nonsingular if its reduced row echelon form is I. Otherwise it
is singular.
Theorem 3.7.9. A Right Inverse is an Inverse.
Suppose A and B are square matrices with AB=I. Then B=A−1.
Inverse and Transpose.
If A is a square matrix with inverse A−1, then (AT)−1=(A−1)T.
Inverse of Product of Matrices.
If A and B are invertible matrices of the same size, then AB is also invertible
and (AB)−1=B−1A−1
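A quick NumPy spot-check of this rule (the matrices are my own example):

import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 3.],
              [0., 1.]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))   # True: (AB)^-1 equals B^-1 A^-1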
Definition 3.8.6. Symmetric matrix.
A matrix A=[ai,j] is symmetric if ai,j=aj,i for all i,j=1,2,…,n. Alternatively, we
may write this as A=AT.
Triangular matrix.
(A−1)−1=A
(An)−1=(A−1)n
(rA)−1=(1/r)A−1, for any nonzero scalar r
The determinant
Let A be any square matrix, M be its matrix of minors and P satisfy pi,j=(−1)^(i+j). Then the
row sums and column sums of A∘P∘M are identical.
Definition 4.3.5. The determinant of a square matrix.
Let A be a square matrix with M as its matrix of minors and C as its cofactor matrix. Then
the determinant of A is the common row and column sum of A∘P∘M=A∘C.
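Each row sum of A∘C is a cofactor expansion along that row. The following Python sketch (an illustration of mine, not from the textbook) computes the determinant by expanding along the first row and compares it with NumPy's built-in routine:

import numpy as np

def det_by_cofactors(A):
    """Determinant via cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # minor of entry (0, j)
        total += (-1) ** j * A[0, j] * det_by_cofactors(minor)  # cofactor = sign * minor
    return total

A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [1., 0., 6.]])
print(det_by_cofactors(A), np.linalg.det(A))   # both give 22.0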
We proved that detE detA=det(EA) for any elementary matrix E. In other words, in this
case the determinant of the product is the product of the determinants. We can now
show that this is true for any pair of matrices.
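A quick numerical spot-check of the product rule (random matrices, my own illustration):

import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True: det(AB) = det(A) det(B)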
Definition: General equation of a line in R^2. A line L is the set of points (x,y)
satisfying the equation ax + by + c = 0 where a and b are not both 0.
Equations of lines in R3
Theorem 5.4.6.
The point (x,y,z) is on the line through u=(x0,y0,z0) and v=(x1,y1,z1) if
(x,y,z)=(1−t)u+tv
for some real number t. In addition, the direction vector for that line
is n=(x1,y1,z1)−(x0,y0,z0)=v−u and
(x,y,z)=u+tn
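A small NumPy illustration of the parametric form (the points u and v are my own example):

import numpy as np

u = np.array([1.0, 0.0, 2.0])    # (x0, y0, z0)
v = np.array([3.0, 4.0, 6.0])    # (x1, y1, z1)
n = v - u                        # direction vector

for t in (0.0, 0.5, 1.0):
    point = (1 - t) * u + t * v  # same as u + t * n
    print(t, point)              # t = 0 gives u, t = 1 gives v, t = 0.5 the midpoint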
Cross Product of Vectors in R3
Theorem 5.5.3. x⊥x×y and y⊥x×y
x×y is perpendicular to both x and y.
Theorem 5.5.4. x×rx=0
If x and y are collinear, then x×y=0.
Theorem 5.5.5. ∥x×y∥=∥x∥∥y∥sinθ
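A short NumPy check of these three statements (the vectors are my own example):

import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 3.0])
c = np.cross(x, y)

print(np.dot(c, x), np.dot(c, y))        # both 0: x×y is perpendicular to x and to y

cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
sin_theta = np.sqrt(1 - cos_theta**2)
print(np.isclose(np.linalg.norm(c),
                 np.linalg.norm(x) * np.linalg.norm(y) * sin_theta))   # True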
If x1,x2,…,xm are vectors in Rn, then, for any choice of scalars r1,r2,…,rm, the
vector r1x1+r2x2+⋯+rmxm is called a linear combination of x1,x2,…,xm.
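For example (the vectors and scalars below are my own illustration):

import numpy as np

x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = np.array([1.0, 1.0, 0.0])

r1, r2, r3 = 2.0, -1.0, 3.0
combo = r1 * x1 + r2 * x2 + r3 * x3   # a linear combination of x1, x2, x3
print(combo)                          # [5. 2. 3.]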
Suppose that A is a square matrix of order n. Then for any real number λ, we define
the eigenspace Eλ by Eλ={x∈Rn∣Ax=λx}.
Clearly 0 is in Eλ for any value of λ, and λ is an eigenvalue if and only if there is
some x≠0 in Eλ.
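A small NumPy illustration (the matrix is my own example); np.linalg.eig returns eigenvalues together with eigenvectors satisfying Ax = λx:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                      # e.g. [3. 1.]
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))  # True for each eigenpair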
Hermitian Matrix
A Hermitian matrix has a property similar to that of a symmetric matrix and is named after the
mathematician Charles Hermite. A Hermitian matrix may have complex numbers as its
elements, and it is equal to its conjugate transpose matrix.
Let us learn more about the Hermitian matrix and its properties along with examples.
A Hermitian matrix is a square matrix which is equal to its conjugate transpose matrix.
The non-diagonal elements of a Hermitian matrix may be complex numbers. The complex
numbers in a Hermitian matrix are such that the element of the ith row and jth column is
the complex conjugate of the element of the jth row and ith column.
Here the non-diagonal elements are complex numbers. Only the first element of the first row and
the second element of the second row are real numbers. Also, the element in the first row,
second column is the complex conjugate of the element in the second row, first
column.
Here also the non-diagonal elements are all complex numbers. The elements
on the diagonal from the first-row first element to the third-row third element are
all real numbers. Also, notice that the element in position (i, j) is the complex
conjugate of the element in position (j, i). For example, if 2 + i is
present in the first row and the second column, then its conjugate 2 - i is present in
the second row and first column. The same is the case with the other complex entries as
well.
From the above two matrices, it is clear that the diagonal elements of a Hermitian matrix
are always real. Also, the element in position (i, j) is the complex conjugate of the
element in position (j, i). Hence, a 2 × 2 Hermitian matrix is of the form
[ x       y + zi ]
[ y − zi  w      ]
where x, y, z, and w are real numbers. Similarly, we can construct a 3 × 3 Hermitian matrix of the form
[ a       b + ci   c + di ]
[ b − ci  e        g + hi ]
[ c − di  g − hi   k      ]
where a, b, c, d, e, g, h, and k are real numbers.
The elements of the principal diagonal of a Hermitian matrix are all real numbers.
The non-diagonal elements of a Hermitian matrix may be complex numbers.
Every Hermitian matrix is a normal matrix, and it satisfies AH = A.
The sum of any two Hermitian matrices is Hermitian.
The inverse of an invertible Hermitian matrix is also Hermitian.
The product of two Hermitian matrices is Hermitian only when the two matrices commute.
The determinant of a Hermitian matrix is real.
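A brief NumPy sketch checking a few of these properties on a 2 × 2 example of my own:

import numpy as np

A = np.array([[2.0 + 0j, 2 + 1j],
              [2 - 1j,   3.0 + 0j]])

print(np.allclose(A, A.conj().T))            # True: A equals its conjugate transpose
print(np.linalg.eigvalsh(A))                 # eigenvalues of a Hermitian matrix are real
print(np.isclose(np.linalg.det(A).imag, 0))  # determinant is real (imaginary part ~ 0)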
Terms Related to Hermitian Matrix
The following terms are helpful in understanding and learning more about the hermitian
matrix.
Principal Diagonal: In a square matrix, the set of elements on the diagonal
connecting the first element of the first row to the last element of the last row
forms the principal diagonal.
Symmetric Matrix: A matrix is said to be a symmetric matrix if its transpose is
equal to the given matrix: AT = A.
Conjugate Matrix: The conjugate matrix of a given matrix is obtained by replacing
the corresponding elements of the given matrix, with their corresponding conjugates.
Transpose Matrix: The transpose of a matrix A is represented as AT, and the
transpose of a matrix is obtained by changing the rows into columns or columns into
rows of a given matrix.
Writing Matrix as Hermitian and Skew-Hermitian
A square matrix A can be written as the sum of a Hermitian matrix P and a skew-
Hermitian matrix Q where P = (1/2) (A + AH) and Q = (1/2) (A - AH). i.e.,
A = P + Q where
P = (1/2) (A + AH) and
Q = (1/2) (A - AH)
For any matrix A, one can easily see that (A + AH) is Hermitian and (A - AH) is skew-
Hermitian.
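A minimal NumPy sketch of this decomposition (the matrix A is my own example):

import numpy as np

A = np.array([[1 + 2j, 3 + 0j],
              [0 + 1j, 4 - 1j]])

AH = A.conj().T
P = 0.5 * (A + AH)                    # Hermitian part:      P = P^H
Q = 0.5 * (A - AH)                    # skew-Hermitian part: Q = -Q^H
print(np.allclose(P, P.conj().T))     # True
print(np.allclose(Q, -Q.conj().T))    # True
print(np.allclose(A, P + Q))          # True: A = P + Q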
Calculus of Variations
A typical problem in the calculus of variations involves finding a particular function y(x) that
maximizes or minimizes an integral I(y), typically of the form I(y) = ∫_a^b F(x, y, y′) dx, subject
to boundary conditions y(a) = A and y(b) = B.
The integral I(y) is an example of a functional, which (more generally) is a mapping from
a set of allowable functions to the reals.
We say that I(y) has an extremum when I(y) takes its maximum or minimum value.
To find the x corresponding to a local minimum (xmin):
find f′(x), or dy/dx, and set it to zero. Solving f′(x) = 0 gives the stationary points; further
testing is needed to determine their nature.
Thus, to find:
- Stationary points of f(x), solve df/dx = 0 for x.
- Stationary functions of a functional I[f] (a function of a function), solve the analogous
  stationarity condition on I[f] (the Euler-Lagrange equation).
Example:
Find the path y = y(x) such that the distance from A to B is minimized.
I = ∫_A^B ds, where ds = √(dx² + dy²) = √(1 + (dy/dx)²) dx,
so
I = ∫_{x1}^{x2} √(1 + (dy/dx)²) dx
Problem: find y = f(x) between points A and B such that the integral
∫_{x1}^{x2} √(1 + (dy/dx)²) dx is minimized.
Given the speed v(x, y) of a particle, find the path y = f(x) such that the time taken
by the particle is minimized.
Since dt = ds / v(x, y),
T = ∫_{x1}^{x2} √(1 + (dy/dx)²) / v(x, y) dx
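As a purely numerical illustration (the trial paths, the endpoints, and the trapezoidal-rule helper are my own, not from the text), one can compare the value of the arc-length functional for different trial functions y = f(x) with the same endpoints; the straight line gives the smallest value:

import numpy as np

def arc_length(f, x1=0.0, x2=1.0, n=2000):
    """Approximate I[f] = integral of sqrt(1 + (dy/dx)^2) dx by the trapezoidal rule."""
    x = np.linspace(x1, x2, n)
    dydx = np.gradient(f(x), x)
    return np.trapz(np.sqrt(1 + dydx**2), x)

straight = lambda x: x        # straight line from (0, 0) to (1, 1)
curved   = lambda x: x**2     # a curved trial path with the same endpoints

print(arc_length(straight))   # ~1.4142 (the minimum, sqrt(2))
print(arc_length(curved))     # ~1.4789 (larger, as expected)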
Integral equations
1) Linearity
The linearity of an integral equation is defined with respect to the linearity of the
unknown function y(x). However, a non-linear integral equation arises if y(x) is replaced
by a non-linear function F(y(x)).
2) Kind (1st or 2nd kind)
An integral equation is said to be of the first kind if the unknown function occurs only
under the integral sign. Otherwise, if it occurs both under the integral sign and
elsewhere, then it is of the second kind.
3) Homogeneity (Homogeneous / Non-homogeneous integral equation)
If the function f(x) = 0, we have a homogeneous equation, which always has the trivial
zero solution y(x) = 0.
Otherwise, the equation is non-homogeneous.
4) Type (Fredholm’s integral equation and Volterra’s integral equation)
Fredholm’s integral equation:
When the limits a and b are constants, the integral equation is called a Fredholm
integral equation. (When the upper limit of integration is instead the variable x, the
equation is a Volterra integral equation.)
Equations (1) and (2) are Fredholm’s integral equations of the first kind and second kind,
respectively.
f(x) = λ ∫_a^b k(x, t) y(t) dt …………… (1)
y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt …………… (2)
C = ∫_a^b h(t) f(t) dt + ∫_a^b λ h(t) g(t) C dt
C = ∫_a^b h(t) f(t) dt + λC ∫_a^b h(t) g(t) dt
C [1 − λ ∫_a^b h(t) g(t) dt] = ∫_a^b h(t) f(t) dt
C = ∫_a^b h(t) f(t) dt / [1 − λ ∫_a^b h(t) g(t) dt] …………… (6)
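The derivation above appears to assume a separable kernel k(x, t) = g(x)h(t), so that y(x) = f(x) + λCg(x) with C = ∫_a^b h(t)y(t) dt. Under that assumption, here is a hedged Python sketch of equation (6); the particular f, g, h, λ and limits are illustrative choices of mine:

import numpy as np
from scipy.integrate import quad

a, b, lam = 0.0, 1.0, 0.5
f = lambda x: x            # free term f(x)
g = lambda x: x            # kernel factor depending on x
h = lambda t: t            # kernel factor depending on t

num = quad(lambda t: h(t) * f(t), a, b)[0]               # integral of h(t) f(t)
den = 1.0 - lam * quad(lambda t: h(t) * g(t), a, b)[0]   # 1 - lambda * integral of h(t) g(t)
C = num / den                                            # equation (6)

y = lambda x: f(x) + lam * C * g(x)                      # solution of the 2nd-kind equation

# check: y(x) should satisfy y(x) = f(x) + lambda * g(x) * integral of h(t) y(t)
x0 = 0.3
residual = y(x0) - (f(x0) + lam * g(x0) * quad(lambda t: h(t) * y(t), a, b)[0])
print(C, residual)   # residual ~ 0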
Green’s Theorem
Let C be a positively oriented, piecewise smooth, simple, closed curve and let D be the
region enclosed by the curve. If P and Q have continuous first order partial derivatives
on D then,
∮_C P dx + Q dy = ∬_D (∂Q/∂x − ∂P/∂y) dA
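A small symbolic spot-check of Green's theorem (the fields P = xy, Q = x² and the unit-square region are my own example; SymPy is assumed to be available):

import sympy as sp

x, y = sp.symbols('x y')
P, Q = x * y, x**2          # example fields on the unit square [0,1] x [0,1]

# double integral of (dQ/dx - dP/dy) over the square
area_integral = sp.integrate(sp.diff(Q, x) - sp.diff(P, y), (x, 0, 1), (y, 0, 1))

# line integral of P dx + Q dy around the boundary, traversed counterclockwise
bottom = sp.integrate(P.subs(y, 0), (x, 0, 1))   # along y = 0, dy = 0
right  = sp.integrate(Q.subs(x, 1), (y, 0, 1))   # along x = 1, dx = 0
top    = sp.integrate(P.subs(y, 1), (x, 1, 0))   # along y = 1, dy = 0
left   = sp.integrate(Q.subs(x, 0), (y, 1, 0))   # along x = 0, dx = 0
line_integral = bottom + right + top + left

print(area_integral, line_integral)   # both equal 1/2, as the theorem predicts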