
APPLIED MATHEMATICS

Dr Jad Matta

Doctor of Philosophy in Applied Math


Applied Mathematics, Fourth Edition

“I do hereby attest that I am the sole author of this report and that its contents are only
the result of my reading of the above mentioned textbook.”


Table of Contents

Matrices and Linear Equations

o Gauss-Jordan reduction
o Linear combination
o Matrix Multiplication
o Properties of inverse matrices
o Determinants of small and large matrices
o The definition of determinant
o Equivalent form of invertibility
o Vectors in Euclidean n space
o The Cauchy-Schwarz inequality
o Cross Product of Vectors in R^3
o Equation of a plane in R^3
o Linear independence, spanning and bases in R^n
o Subspaces in R^n
o Eigenvectors and Eigenvalues

Calculus of Variations

o What is the calculus of variations?


o Analogy to Calculus
o Deriving the Euler-Lagrange equations
o Rayleigh Ritz method

Integral equations

o General form of Integral Equation


o Classification of Integral Equation
o Solving a Fredholm integral equation if the kernel is separable
o Solution of an integral equation
o Degenerate Kernel
o Converting Initial Value Problems to Volterra integral equations
o Green's Theorem


Matrices and Linear Equations


Representing Systems of Linear Equations using Matrices
A system of linear equations can be represented in matrix form using a coefficient
matrix, a variable matrix, and a constant matrix.

Row echelon form

We now want to define a general class of matrices whose corresponding systems of linear equations have solutions that are easy to find. These matrices have a special pattern of zeros and ones, and are said to be in row echelon form.

Figure 2.5.1. A matrix in row echelon form


The matrix above gives an idea of what we want. Notice the staircase line drawn through the matrix has all entries below it equal to zero. The entries marked with a ∗ can take on any value.
The first nonzero entry in a row (if there is one) is called the leading entry. If it equals 1, then it is called a leading one.
Definition 2.5.2. Row echelon form.

A matrix is in row echelon form if


 Every leading entry is a leading one.
 Every entry below a leading one is 0.
 As you go down the matrix, the leading ones move to the right.
 Any all zero rows are at the bottom.

Gauss-Jordan reduction

Gauss-Jordan reduction is an extension of the Gaussian elimination algorithm. It produces a matrix, called the reduced row echelon form, in the following way: after carrying out Gaussian elimination, continue by changing all nonzero entries above the leading ones to zero. The resulting matrix looks something like:


The matrix above gives an idea of what we want. Notice the staircase line drawn through the matrix has all entries below it equal to zero. The entries marked with a ∗ can take on any value.
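As a concrete illustration, here is a minimal NumPy sketch of Gauss-Jordan reduction (my own example, not code from the textbook, and assuming NumPy is available): it selects a pivot in each column, scales it to a leading one, and clears the entries both below and above it.

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to reduced row echelon form by Gauss-Jordan elimination."""
    M = A.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        candidates = np.where(np.abs(M[pivot_row:, col]) > tol)[0]
        if candidates.size == 0:
            continue                              # no pivot in this column
        r = pivot_row + candidates[0]
        M[[pivot_row, r]] = M[[r, pivot_row]]     # swap the pivot row into place
        M[pivot_row] /= M[pivot_row, col]         # scale pivot to a leading one
        for i in range(rows):                     # clear entries above AND below
            if i != pivot_row:
                M[i] -= M[i, col] * M[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

A = np.array([[1, 2, -1, 3], [2, 4, 0, 2], [1, 2, 1, -1]])
print(rref(A))   # [[1, 2, 0, 1], [0, 0, 1, -2], [0, 0, 0, 0]]
```

Note that the rank of the example matrix can be read off directly from the output: two leading ones, so the rank is 2, which matches the definition below.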

The rank of a matrix


Definition 2.6.7.
The rank of a matrix is the number of leading ones in the reduced row echelon form.
Some properties of addition of matrices
Theorem: Suppose A, B and C are matrices of the same size, then
 If A and B are m x n matrices, then so is A + B
 A + B = B + A (commutativity of addition)
 (A + B) + C = A + (B + C) (associativity of addition)
Theorem: Properties of scalar multiplication. Suppose that A and B are matrices of the
same size, and r and s are scalars, then
 If A is an m x n matrix, then rA is also m x n
 r(A+B) = rA + rB
 (r + s)A = rA + sA
 (rs)A = r (sA)
 1A = A
Linear combinations. Matrix addition and scalar multiplication are both used to
compute linear combinations
Definition: Linear combination of matrices. If A and B are matrices conformable for
addition, and r and s are scalars, then the matrix of the form rA + sB is called a linear
combination of A and B.
Linear combination of matrices. If A1, A2, …, An are matrices conformable for
addition, then, for any choice of scalars r1, r2, …, rn, the matrix
r1A1 + r2A2 + ⋯ + rnAn
is called a linear combination of A1, A2, …, An


Matrix multiplication is not commutative.

The most important difference between the multiplication of matrices and the
multiplication of real numbers is that real numbers x and y always commute (that
is, xy = yx), but the same is not true for matrices.
Theorem 3.4.6.
Left distributive law.
Let A, B and C be matrices of the right size for matrix multiplication.
Then A(B+C) = AB + AC.
Right distributive law.
Let A, B and C be matrices of the right size for matrix multiplication.
Then (B+C)A = BA + CA.
Associativity of matrix multiplication.
Let A, B and C be matrices of the right size for matrix multiplication.
Then A(BC) = (AB)C.
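The sketch below (my own illustration, assuming NumPy) checks these laws numerically on random integer matrices, and shows that commutativity fails:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 6, (3, 3)) for _ in range(3))

# Distributive and associative laws hold ...
print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True
print(np.array_equal((B + C) @ A, B @ A + C @ A))   # True
print(np.array_equal(A @ (B @ C), (A @ B) @ C))     # True
# ... but commutativity generally fails.
print(np.array_equal(A @ B, B @ A))                 # almost surely False
```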

Definition: The inverse of a matrix. Let A be a square matrix. If there exists a matrix B
such that AB = BA = I, then B is called the inverse of A, and it is written A^-1.


Definition: Matrix invertibility. A matrix A is invertible if it has an inverse, that is, if the
matrix A^-1 exists.
Properties of the inverse of a Matrix
Uniqueness of Inverse.
A square matrix A can have no more than one inverse.
Definition 3.7.5. Matrix singularity.
A square matrix is nonsingular if its reduced row echelon form is I. Otherwise it
is singular.
Theorem 3.7.9. A Right Inverse is an Inverse.
Suppose A and B are square matrices with AB = I. Then B = A^-1.
Inverse of the Transpose.
If A is a square matrix with inverse A^-1, then (A^T)^-1 = (A^-1)^T.
Inverse of a Product of Matrices.
If A and B are invertible matrices of the same size, then AB is also invertible
and (AB)^-1 = B^-1 A^-1.
Definition 3.8.6. Symmetric matrix.
A matrix A = [a_i,j] is symmetric if a_i,j = a_j,i for all i, j = 1, 2, …, n. Alternatively, we
may write this as A = A^T.
Triangular matrix.

A matrix is triangular if it is either upper triangular or lower triangular


Theorem 3.8.10. Product of upper triangular matrices is upper triangular.
 If A and B are upper triangular matrices, then AB is also upper triangular.
 If A and B are lower triangular matrices, then AB is also lower triangular.
Theorem 3.8.11. An upper triangular matrix is invertible if and only if all the diagonal
entries are nonzero.
An upper triangular matrix A is invertible if and only if every diagonal entry of A is
nonzero.
Theorem 3.8.12. The inverse of an upper triangular matrix is upper triangular.
If A is upper triangular and invertible, then A^-1 is also upper triangular.
Definition 3.8.13. Permutation matrix.
A permutation matrix is a square matrix with two properties:
1. Each entry of the matrix is either 0 or 1.
2. Every row and every column contains exactly one 1.
Laws of Exponents.

For any invertible square matrix A and integers m and n,

 A^m A^n = A^(m+n)
 (A^m)^n = A^(mn)


 (A^-1)^-1 = A
 (A^n)^-1 = (A^-1)^n
 (rA)^-1 = (1/r) A^-1
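These identities are easy to confirm numerically. A short NumPy check (my own example matrices, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # an invertible matrix
B = np.array([[1.0, 3.0], [0.0, 2.0]])
Ainv = np.linalg.inv(A)
r = 5.0

print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv))   # (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(Ainv), A))                          # (A^-1)^-1 = A
print(np.allclose(np.linalg.inv(A.T), Ainv.T))                      # (A^T)^-1 = (A^-1)^T
print(np.allclose(np.linalg.inv(r * A), (1 / r) * Ainv))            # (rA)^-1 = (1/r) A^-1
print(np.allclose(np.linalg.inv(np.linalg.matrix_power(A, 3)),
                  np.linalg.matrix_power(Ainv, 3)))                 # (A^n)^-1 = (A^-1)^n
```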

Theorem 3.11.2. Equivalent Forms of Invertibility.


Suppose that A is an n×n square matrix. Then the following statements are equivalent:
1. A is invertible
2. Ax=0 if and only if x=0
3. The reduced row echelon form of A is In
4. A is a product of elementary matrices
5. Ax=b is consistent for any b
6. Ax=b has exactly one solution for any b

The determinant

Minors and cofactors


The determinant of large matrices


Definition: Hadamard product of two matrices. If A = [a_i,j] and B = [b_i,j] are both m x n
matrices, then the Hadamard product A∘B is the m×n matrix defined by (A∘B)_i,j = a_i,j b_i,j

In other words, multiplication is done element-wise.


Theorem 4.3.4. A∘C has constant row and column sums.

Let A be any square matrix, M be its matrix of minors and P satisfy p_i,j = (−1)^(i+j). Then the
row sums and column sums of A∘P∘M are identical.
Definition 4.3.5. The determinant of a square matrix.
Let A be a square matrix with M as its matrix of minors and C as its cofactor matrix. Then
the determinant of A is the common row and column sum of A∘P∘M = A∘C.
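The sketch below (my own illustration, assuming NumPy) builds the cofactor matrix C, forms the Hadamard product A∘C, and confirms that every row sum and column sum equals det(A), exactly as the definition states:

```python
import numpy as np

def cofactor_matrix(A):
    """Cofactor matrix C with c_ij = (-1)^(i+j) * M_ij, where M_ij is the minor."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 2.0]])
H = A * cofactor_matrix(A)   # Hadamard product A∘C
print(H.sum(axis=1))         # each row sum is det(A); this is Laplace expansion by rows
print(H.sum(axis=0))         # each column sum is det(A): expansion by columns
print(np.linalg.det(A))      # -43.0 for this example
```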


Properties derived from cofactor expansion


The Laplace expansion theorem turns out to be a powerful tool, both for computation
and for the derivation of theoretical results.
Theorem: An all zero row or column implies det A = 0.
If A has an all zero row or an all zero column, then det(A) = 0.
Theorem 4.4.2. The determinant of a triangular matrix.
If A is triangular, then det(A) = a_1,1 a_2,2 ⋯ a_n,n.
The determinant of a diagonal matrix is the product of its diagonal entries.
If A is a diagonal matrix, then det(A) = a_1,1 a_2,2 ⋯ a_n,n.
Row additivity theorem.
Theorem: If A, B and C are as above, then det C = det A + det B.

We proved that det E det A = det(EA) for any elementary matrix E. In other words, in this
case the determinant of the product is the product of the determinants. We can now
show that this is true for any pair of matrices.

For any square matrices A and B of the same size,

det(AB) = det A det B.
The adjoint of a matrix.
If a matrix A has C as its cofactor matrix, then the adjoint of A is C^T. We write this
as adj(A) = C^T.
Theorem:
Let A be an invertible matrix. Then
A^-1 = (1/det A) adj(A)
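A self-contained NumPy check of the adjoint formula (my own example matrix, assuming NumPy):

```python
import numpy as np

def adjugate(A):
    """adj(A) = C^T, the transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(adjugate(A) / np.linalg.det(A))   # (1/det A) adj(A)
print(np.linalg.inv(A))                 # agrees with A^-1
```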


Vectors in Euclidean n space

The Dot Product


The Cauchy-Schwarz inequality


Theorem: x . y = ||x|| ||y|| cos ϕ
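The inequality and the angle formula can be verified directly. A short NumPy check (my own example vectors, assuming NumPy):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

dot = x @ y
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
print(abs(dot) <= nx * ny)                      # Cauchy-Schwarz: |x.y| <= ||x|| ||y||
phi = np.arccos(dot / (nx * ny))                # angle between x and y
print(np.isclose(dot, nx * ny * np.cos(phi)))   # x.y = ||x|| ||y|| cos(phi)
```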

Definition: General equation of a line in R^2. A line L is the set of points (x,y)
satisfying the equation ax + by + c = 0 where a and b are not both 0.
Equations of lines in R^3

Theorem 5.4.6.
The point (x,y,z) is on the line through u = (x0,y0,z0) and v = (x1,y1,z1) if
(x,y,z) = (1−t)u + tv
for some real number t. In addition, the direction vector for that line
is n = (x1,y1,z1) − (x0,y0,z0) = v − u and
(x,y,z) = u + tn
Cross Product of Vectors in R^3
Theorem 5.5.3. x ⊥ x×y and y ⊥ x×y.
x×y is perpendicular to both x and y.
Theorem 5.5.4. x × rx = 0.
If x and y are collinear, then x×y = 0.
Theorem 5.5.5. ∥x×y∥ = ∥x∥∥y∥ sin θ.

For any vectors x and y in R^3, ∥x×y∥ = ∥x∥∥y∥ sin θ.
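All three theorems can be checked numerically. A short NumPy sketch (my own example vectors, assuming NumPy):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, -1.0])
c = np.cross(x, y)

print(np.isclose(c @ x, 0.0), np.isclose(c @ y, 0.0))   # x×y is perpendicular to x and y
print(np.allclose(np.cross(x, 3 * x), 0.0))             # collinear vectors: x×(rx) = 0
# ||x×y|| = ||x|| ||y|| sin(theta), with theta from the dot product
cos_t = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
sin_t = np.sqrt(1 - cos_t ** 2)
print(np.isclose(np.linalg.norm(c),
                 np.linalg.norm(x) * np.linalg.norm(y) * sin_t))
```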


Theorem 5.5.6. Additional cross product properties.

Let u, v and w be vectors in R^3, and let r be a real number. Then
 u×v = −(v×u)
 (ru)×v = r(u×v)
 u×(v+w) = (u×v) + (u×w)

Equations of Planes in R^3

Theorem: Point-normal form through 0. The equation of a plane orthogonal to n = (a,b,c)
through 0 is ax + by + cz = 0.
Point-normal form.
The equation of the plane through (x0,y0,z0) perpendicular to the vector n = (a,b,c) is
a(x−x0) + b(y−y0) + c(z−z0) = 0.
Theorem: General equation of a plane.
The general equation of a plane is
ax + by + cz + d = 0, where n = (a,b,c) is orthogonal to the plane.

The area of a triangle and parallelogram determined by three points


Suppose we have three points A, B and C in R^3. We want to compute the area of the
triangle determined by these points. If the points are collinear, then the triangle
collapses to a line and the area is zero, so we may suppose that the points are
noncollinear.


Projections in R^2 and R^n


Linear independence, spanning and bases in R^n

Definition 5.9.1. Linear combination of vectors.

If x1, x2, …, xm are vectors in R^n, then, for any choice of scalars r1, r2, …, rm, the
vector r1x1 + r2x2 + ⋯ + rmxm is called a linear combination of x1, x2, …, xm


Eigenvalues and eigenvectors:


Definition 6.1.1. The eigenvalue of a matrix.
A number λ is an eigenvalue of a square matrix A if Ax=λx for some x≠0.
Definition 6.1.2. The eigenvector of a matrix.
If Ax = λx
for some x ≠ 0, then x is called an eigenvector of A corresponding to the eigenvalue λ.
Notice that if x = 0, then Ax = λx is simply the equation 0 = 0 for any value of λ. This is not
too interesting, and so we always have the restriction x ≠ 0.
Definition 6.1.7. Eigenspaces.

Suppose that A is a square matrix of order n. Then for any real number λ, we define
the eigenspace E_λ by E_λ = { x ∈ R^n | Ax = λx }.
Clearly 0 is in E_λ for any value of λ, and λ is an eigenvalue if and only if there is
some x ≠ 0 in E_λ.
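A quick NumPy illustration of the defining equation Ax = λx (my own example matrix, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)      # eigenvalues 3 and 1 for this matrix

for lam, v in zip(eigvals, eigvecs.T):   # columns of eigvecs are the eigenvectors
    print(lam, np.allclose(A @ v, lam * v))   # verifies Ax = λx with x ≠ 0
```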


Similarity and diagonalization


Hermitian Matrix

A Hermitian matrix has a property similar to that of a symmetric matrix and is named after
the mathematician Charles Hermite. A Hermitian matrix has complex numbers as its
elements, and it is equal to its conjugate transpose matrix.

Let us learn more about the Hermitian matrix and its properties along with examples.

What is a Hermitian Matrix?

A Hermitian matrix is a square matrix which is equal to its conjugate transpose matrix.
The off-diagonal elements of a Hermitian matrix are in general complex numbers. The complex
numbers in a Hermitian matrix are such that the element of the ith row and jth column is
the complex conjugate of the element of the jth row and ith column.


A matrix A is Hermitian if A = A^H. A Hermitian matrix is similar to a symmetric matrix
but has complex numbers as the elements of its non-principal diagonal.

Hermitian Matrix of Order 2 x 2

Here the non-diagonal elements are complex numbers. Only the first element of the first row and
the second element of the second row are real numbers. Also, the complex number in the
first-row second position is the complex conjugate of the number in the second-row first
position.

Hermitian Matrix of Order 3 x 3

Here also the non-diagonal elements are all complex numbers. The elements on the
diagonal from the first-row first element to the third-row third element are
all real numbers. Also, notice that an element in the position (i, j) is the complex
conjugate of the element in the position (j, i). For example, if 2 + i is
present in the first row and the second column, then its conjugate 2 − i is present in
the second row and first column. The same is the case with the other complex numbers as
well.

Hermitian Matrix Formula

From the above two matrices, it is clear that the diagonal elements of a Hermitian matrix
are always real. Also, the element in the position (i, j) is the complex conjugate of the
element in the position (j, i). Hence, a 2 × 2 Hermitian matrix is of the form

[ x        y + zi ]
[ y − zi   w      ]

where x, y, z, and w are real numbers. Similarly, we can construct a 3 × 3 Hermitian
matrix using the formula

[ a        b + ci   c + di ]
[ b − ci   e        g + hi ]
[ c − di   g − hi   k      ]

Properties of Hermitian Matrix

The following properties of the Hermitian matrix help in a better understanding of
Hermitian matrices.

 The elements of the principal diagonal of a Hermitian matrix are all real numbers.
 The off-diagonal elements of a Hermitian matrix may be complex numbers.
 Every Hermitian matrix is a normal matrix: since A^H = A, we have A A^H = A^H A.
 The sum of any two Hermitian matrices is Hermitian.
 The inverse of an invertible Hermitian matrix is Hermitian.
 The product of two Hermitian matrices is Hermitian if and only if the matrices commute.
 The determinant of a Hermitian matrix is real.
Terms Related to Hermitian Matrix


The following terms are helpful in understanding and learning more about the Hermitian
matrix.

 Principal Diagonal: In a square matrix, the set of elements on the diagonal
connecting the first element of the first row to the last element of the last row
is the principal diagonal.
 Symmetric Matrix: A matrix is said to be symmetric if the transpose of the
matrix is equal to the given matrix: A^T = A.
 Conjugate Matrix: The conjugate matrix of a given matrix is obtained by replacing
the elements of the given matrix with their complex conjugates.
 Transpose Matrix: The transpose of a matrix A is represented as A^T, and is
obtained by changing the rows of the given matrix into columns and the columns into
rows.
Writing Matrix as Hermitian and Skew-Hermitian

A square matrix A can be written as the sum of a Hermitian matrix P and a skew-
Hermitian matrix Q, where P = (1/2)(A + A^H) and Q = (1/2)(A − A^H). i.e.,

 A = P + Q, where
 P = (1/2)(A + A^H) and
 Q = (1/2)(A − A^H)

For any matrix A, one can easily check that (A + A^H) is Hermitian and (A − A^H) is skew-
Hermitian.

Part 2 Calculus of Variations


What is the calculus of variations?
Many problems involve finding a function that maximizes or minimizes an integral
expression. One example is finding the curve giving the shortest distance between two
points.
Functionals and Extrema
Typical problem: Consider a definite integral that depends on an unknown function y(x), as
well as its derivative y′(x) = dy/dx,

I(y) = ∫_a^b F(x, y, y′) dx

A typical problem in the calculus of variations involves finding a particular function y(x) to
maximize or minimize the integral I(y) subject to the boundary conditions y(a) = A and y(b) =
B.
The integral I(y) is an example of a functional, which (more generally) is a mapping from
a set of allowable functions to the reals.


We say that I(y) has an extremum when I(y) takes its maximum or minimum value.
Recall how to find an x corresponding to a local minimum of an ordinary function f(x):
compute f′(x) = df/dx and set it to zero. Solving f′(x) = 0 gives the stationary points;
further testing is needed to determine their nature.
Thus, we look for:
- Stationary points of a function f(x), by solving df/dx = 0 for x
- Stationary functions of a functional I[f] (a function of a function)

• Dirichlet Principle – one stationary ground state for energy.
• Solutions to many physical problems require maximizing or minimizing some parameter I, for example:
• Distance
• Time
• Surface area
• The parameter I depends on the selected path u and the domain of interest D.
• Terminology:
• Functional – the parameter I to be maximized or minimized
• Extremal – the solution path u that maximizes or minimizes I

Example:
Find the path such that the distance from A to B is minimized:

I = ∫_A^B dS = ∫ √(dx^2 + dy^2) = ∫_x1^x2 √(1 + (dy/dx)^2) dx


Problem: find y = f(x) between points A and B such that the integral ∫_x1^x2 √(1 + (dy/dx)^2) dx
is minimized.
Given the speed v(x,y) of a particle, find the y = f(x) such that the time taken
by the particle is minimized.

Since dt = ds / v(x,y),

T = ∫_x1^x2 √(1 + (dy/dx)^2) / v(x,y) dx
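Before deriving the general theory, the shortest-distance functional can be minimized numerically by discretizing the path. The sketch below (my own illustration, assuming NumPy and SciPy; the endpoints are arbitrary choices) recovers the expected straight line:

```python
import numpy as np
from scipy.optimize import minimize

# Fixed endpoints A = (0, 0) and B = (1, 2); the interior y-values are the unknowns.
n = 21
x = np.linspace(0.0, 1.0, n)

def arc_length(y_interior):
    y = np.concatenate(([0.0], y_interior, [2.0]))
    return np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2))

y0 = np.random.default_rng(1).uniform(0.0, 2.0, n - 2)   # a random initial path
res = minimize(arc_length, y0)
print(res.fun)                                      # ≈ sqrt(5) ≈ 2.236, the straight-line distance
print(np.allclose(res.x, 2.0 * x[1:-1], atol=1e-2)) # the minimizer is the line y = 2x
```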


Deriving the Euler-Lagrange equations

The Euler-Lagrange Equation
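The derivation perturbs the candidate function y(x) to y(x) + εη(x), with η(a) = η(b) = 0, and requires dI/dε = 0 at ε = 0. For reference, the resulting necessary condition for I(y) = ∫_a^b F(x, y, y′) dx to be stationary is the Euler-Lagrange equation

∂F/∂y − d/dx ( ∂F/∂y′ ) = 0

subject to the boundary conditions y(a) = A and y(b) = B.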


Functions of Two Variables


• Analogy to multivariable calculus:
• Functions still take extreme values on a bounded domain.
• Necessary condition for an extremum at x0, if f is differentiable: ∇f(x0) = 0.

The calculus of variations method is similar:


Rayleigh-Ritz method

The Rayleigh-Ritz method utilizes the principle of minimizing the total potential energy in a
system together with the calculus of variations. It employs trial functions that satisfy
specific conditions, including the boundary conditions, to solve boundary value problems.
Definition. Boundary conditions: the solution of a differential equation should fulfill these
conditions at the boundaries of the domain.
Some considerations to bear in mind when selecting trial functions include (see the sketch
after this list):
- The trial functions need to be linearly independent
- The trial functions should ideally be smooth and continuous
- The trial functions should satisfy the boundary conditions
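A minimal Rayleigh-Ritz sketch (my own illustration, assuming NumPy and SciPy; the model problem −u″ = 1 on (0,1) with u(0) = u(1) = 0 and the sine trial functions are illustrative choices, not from the textbook). Minimizing the potential energy Π(c) = ½∫(u′)² dx − ∫ u dx over u = Σ c_i φ_i leads to the linear system K c = F:

```python
import numpy as np
from scipy.integrate import quad

# Trial functions phi_i(x) = sin(i*pi*x): smooth, linearly independent,
# and they satisfy the boundary conditions u(0) = u(1) = 0.
n = 5
phi  = [lambda x, i=i: np.sin(i * np.pi * x)             for i in range(1, n + 1)]
dphi = [lambda x, i=i: i * np.pi * np.cos(i * np.pi * x) for i in range(1, n + 1)]

# Stiffness matrix K_ij = int phi_i' phi_j' dx and load vector F_i = int phi_i dx.
K = np.array([[quad(lambda x: dphi[i](x) * dphi[j](x), 0, 1)[0]
               for j in range(n)] for i in range(n)])
F = np.array([quad(phi[i], 0, 1)[0] for i in range(n)])
c = np.linalg.solve(K, F)                 # minimizes the potential energy

u = lambda x: sum(ci * p(x) for ci, p in zip(c, phi))
print(u(0.5), 0.5 * (1 - 0.5) * 0.5)      # Ritz value vs exact u(0.5) = 0.125
```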

Part 3 Integral Equations


Definition: An integral equation (IE) is an equation in which an unknown function
appears under one or more signs of integration.
Example: an integral equation arising from the Laplace transform,

f(x) = ∫_0^∞ e^(−xu) Φ(u) du

General form of an Integral Equation

y(x) = λ ∫_a^b K(x,t) y(t) dt + f(x)

y(x) = unknown function
K(x,t) = kernel
a, b = limits of integration
f(x) = free term, forcing term
λ = investigative parameter
*Kernel: the function K(x,t) is referred to as the kernel. It is a known
continuous function defined in the square R: a ≤ x ≤ b, a ≤ t ≤ b of the (x,t)-plane.
*Force term: f(x) is the force or forcing term in the equation. It is also known and
continuous.
*Unknown function: this is the function y(x) we want to solve for.


*Limits: a and b (which may in general depend on x) are the limits of integration.

Investigative Parameter
 λ is introduced to determine variations of the problem/solution as λ is allowed to
vary
 A solution may not exist for a fixed λ
 Therefore λ is allowed to vary in order that a solution may exist
Classification of integral equations
i) Linearity (linear and non-linear equations)
ii) Kind (first kind, second kind)
iii) Homogeneity (homogeneous and non-homogeneous)
iv) Type (Fredholm integral equations and Volterra integral equations)
v) Other forms of classification

1) Linearity
The linearity of an integral equation is defined with respect to the linearity of the
unknown function y(x). A non-linear integral equation arises if y(x) is replaced
by a non-linear function F(y(x)).
2) Kind (1st or 2nd kind)
An integral equation is said to be of the first kind if the unknown function occurs only
under the integral sign. Otherwise, if it occurs both under the integral sign and
elsewhere, it is of the second kind.
3) Homogeneity (homogeneous / non-homogeneous integral equation)
If the function f(x) = 0, we have a homogeneous equation, which always has the trivial
zero solution y(x) = 0.
Otherwise, the equation is non-homogeneous.
4) Type (Fredholm integral equation and Volterra integral equation)
Fredholm integral equation:
When the limits a and b are constants, the integral equation is called a Fredholm
integral equation.
Equations (1) and (2) are Fredholm integral equations of the first kind and second kind
respectively.

F(x) = λ ∫_a^b k(x,t) y(t) dt …………… (1)

y(x) = f(x) + λ ∫_a^b k(x,t) y(t) dt …………… (2)


** Volterra Integral Equation

If, in a Fredholm integral equation, the constant upper limit b is changed to the variable x,
then we have the corresponding Volterra integral equation.
- Volterra integral equation of the first kind:

F(x) = ∫_a^x k(x,t) y(t) dt

- Volterra integral equation of the second kind:

y(x) = f(x) + λ ∫_a^x k(x,t) y(t) dt   (a ≤ t ≤ x ≤ b)

 Integro-differential Equations

These are equations in which an unknown function y(x) appears on one side as an
ordinary derivative and on the other side under the integral sign.
Ex: y″(x) = −x + ∫_0^x (x − t) y(t) dt,  y(0) = 0, y′(0) = 1

Solving a Fredholm integral equation if the kernel is separable

Consider the integral equation below:

y(x) = f(x) + λ ∫_a^b k(x,t) y(t) dt ………. (1)

Since it is assumed k(x,t) is separable,

k(x,t) = g(x) h(t) ……………………….. (2)

y(x) = f(x) + λ ∫_a^b g(x) h(t) y(t) dt

y(x) = f(x) + λ g(x) ∫_a^b h(t) y(t) dt …………… (3)

Let c = ∫_a^b h(t) y(t) dt ………………………. (4)

Putting (4) in (3):

y(x) = f(x) + λ g(x) c ……………………. (4a)

Changing the variable (x → t):

y(t) = f(t) + λ g(t) c ……………………(5)

Putting (5) in (4):

c = ∫_a^b h(t) [ f(t) + λ g(t) c ] dt


c = ∫_a^b h(t) f(t) dt + λ c ∫_a^b h(t) g(t) dt

c [ 1 − λ ∫_a^b h(t) g(t) dt ] = ∫_a^b h(t) f(t) dt

c = ∫_a^b h(t) f(t) dt / [ 1 − λ ∫_a^b h(t) g(t) dt ] …………………………(6)
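The procedure above is easy to automate. A sketch (my own illustration, assuming NumPy and SciPy; the particular kernel k(x,t) = x·t, f(x) = x, λ = 1 on [0,1] are example choices) computes c by formula (6) and then checks that the resulting y(x) really satisfies the integral equation:

```python
from scipy.integrate import quad

# Separable kernel k(x,t) = g(x) h(t); illustrative choices:
g = lambda x: x
h = lambda t: t
f = lambda x: x
lam, a, b = 1.0, 0.0, 1.0

# Formula (6): c = int h f dt / (1 - lam * int h g dt)
hf = quad(lambda t: h(t) * f(t), a, b)[0]
hg = quad(lambda t: h(t) * g(t), a, b)[0]
c = hf / (1.0 - lam * hg)                  # here c = (1/3) / (1 - 1/3) = 1/2
y = lambda x: f(x) + lam * g(x) * c        # equation (4a); here y(x) = 1.5 x

# Check: y satisfies y(x) = f(x) + lam * g(x) * int_a^b h(t) y(t) dt
x0 = 0.7
rhs = f(x0) + lam * g(x0) * quad(lambda t: h(t) * y(t), a, b)[0]
print(y(x0), rhs)                          # both 1.05 for this example
```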

Solution of an integral equation

Definition: A function y(x) is said to be a solution of an integral equation if, when
substituted in place of the unknown function, it yields an identity.
Degenerate Kernel
The kernel k(x,t) of an integral equation is termed degenerate if it can be represented in
the form of a finite sum of pairwise products of functions, one of which is a function of
x only and the other a function of t only. Mathematically,
k(x,t) = a1(x)b1(t) + a2(x)b2(t) + ⋯ + an(x)bn(t) = Σ_{i=1}^n ai(x) bi(t), where the ai(x), bi(t) are
linearly independent functions.
 Consider the integral equation below:

y(x) = f(x) + λ ∫_a^b k(x,t) y(t) dt ……….. (1)

If k(x,t) is degenerate, then

k(x,t) = Σ_{i=1}^n ai(x) bi(t)
k(x,t) = a1(x) b1(t) + a2(x) b2(t) + ⋯ + an(x) bn(t) ……(2)

Putting (2) into (1):

y(x) = f(x) + λ ∫_a^b [ a1(x)b1(t) + a2(x)b2(t) + ⋯ + an(x)bn(t) ] y(t) dt

y(x) = f(x) + λ a1(x) ∫_a^b b1(t) y(t) dt + ⋯ + λ an(x) ∫_a^b bn(t) y(t) dt

The compact form of the general solution is

y(x) = f(x) + λ Σ_{i=1}^n ai(x) ci, where ci = ∫_a^b bi(t) y(t) dt

Converting Initial Value Problems to Volterra integral equations


Consider the second order ordinary differential equation

y″ + p(x) y′ + q(x) y = g(x),  y(0) = α,  y′(0) = β

To convert this initial value problem to a Volterra integral equation,
we let y″(x) = u(x).
Integrate both sides from 0 to x (writing the variable of integration as t):

∫_0^x y″(t) dt = ∫_0^x u(t) dt

y′(x) − y′(0) = ∫_0^x u(t) dt

y′(x) = β + ∫_0^x u(t) dt

Integrate both sides again from 0 to x:

∫_0^x y′(t) dt = ∫_0^x β dt + ∫_0^x [ ∫_0^s u(t) dt ] ds

y(x) − y(0) = βx + ∫_0^x ∫_0^s u(t) dt ds

Changing the multiple integral into a single integral:

y(x) = α + βx + ∫_0^x (x − t) u(t) dt
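The final identity can be verified symbolically. A SymPy sketch (my own illustration, assuming SymPy; the values α = 2, β = 3 and the choice u(t) = e^t are arbitrary examples) confirms that y(x) = α + βx + ∫_0^x (x − t) u(t) dt satisfies y″ = u with the stated initial conditions:

```python
import sympy as sp

x, t = sp.symbols('x t')
alpha, beta = 2, 3                      # illustrative initial values y(0), y'(0)
u = sp.exp(t)                           # an arbitrary choice for u(t) = y''(t)

y = alpha + beta * x + sp.integrate((x - t) * u, (t, 0, x))
print(sp.simplify(sp.diff(y, x, 2) - sp.exp(x)))   # 0: y'' = u(x)
print(y.subs(x, 0), sp.diff(y, x).subs(x, 0))      # 2, 3: initial conditions hold
```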

Green's Theorem
Let C be a positively oriented, piecewise smooth, simple, closed curve and let D be the
region enclosed by the curve. If P and Q have continuous first order partial derivatives
on D, then

∮_C P dx + Q dy = ∬_D ( ∂Q/∂x − ∂P/∂y ) dA
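A numerical check of the theorem (my own illustration, assuming NumPy and SciPy; the choices P = −y, Q = x on the unit disk are a standard example, where ∂Q/∂x − ∂P/∂y = 2 and both sides equal 2π):

```python
import numpy as np
from scipy.integrate import quad, dblquad

P = lambda x, y: -y
Q = lambda x, y: x

# Line integral over the positively oriented unit circle x = cos s, y = sin s.
integrand = lambda s: (P(np.cos(s), np.sin(s)) * -np.sin(s)
                       + Q(np.cos(s), np.sin(s)) * np.cos(s))
line = quad(integrand, 0, 2 * np.pi)[0]

# Double integral of dQ/dx - dP/dy = 2 over the unit disk.
area = dblquad(lambda y, x: 2.0,
               -1, 1,
               lambda x: -np.sqrt(1 - x ** 2),
               lambda x: np.sqrt(1 - x ** 2))[0]
print(line, area)    # both 2*pi ≈ 6.2832
```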
