
Systems of ODE’s - II

1. Systems of ODE's


Many processes in physics, chemistry, and biology are characterized by several
functions of one variable simultaneously. The relationships between these
functions are described by differential equations that contain the functions
and their derivatives. In this case we have to study systems of differential
equations of the form


\[
\begin{cases}
x_1'(t) = f_1(x_1(t), x_2(t), \dots, x_n(t)),\\
x_2'(t) = f_2(x_1(t), x_2(t), \dots, x_n(t)),\\
\qquad\vdots\\
x_n'(t) = f_n(x_1(t), x_2(t), \dots, x_n(t)).
\end{cases}
\tag{1.1}
\]
We will consider mainly systems of linear equations of first order
and discuss some important systems of nonlinear equations.
It is important to note that we can reduce each n-th order equation of the form

\[
y^{(n)} = F(t, y, y', \dots, y^{(n-1)})
\]

into a system of first-order equations: setting y_1 = y, y_2 = y', ...,
y_n = y^{(n-1)}, we obtain

\[
\begin{cases}
y_1' = y_2,\\
y_2' = y_3,\\
\qquad\vdots\\
y_{n-1}' = y_n,\\
y_n' = F(t, y_1, y_2, \dots, y_n).
\end{cases}
\]
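As a quick numerical illustration of this reduction (not part of the lecture), take the second-order equation y'' = -y, i.e. F(t, y_1, y_2) = -y_1, and solve the resulting first-order system with SciPy; the equation and initial data below are chosen only for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative example: reduce y'' = -y, i.e. F(t, y1, y2) = -y1,
# to the first-order system  y1' = y2,  y2' = F(t, y1, y2) = -y1.
def rhs(t, y):
    y1, y2 = y
    return [y2, -y1]

# Initial data y(0) = 1, y'(0) = 0 gives the classical solution y(t) = cos(t).
sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [1.0, 0.0],
                rtol=1e-10, atol=1e-10, dense_output=True)

# Compare the y1-component of the numerical solution with cos(t).
ts = np.linspace(0.0, 2.0 * np.pi, 100)
err = np.max(np.abs(sol.sol(ts)[0] - np.cos(ts)))
print(err)  # small numerical error
```

The same pattern works for any n-th order equation: the state vector collects y and its first n-1 derivatives, and only the last component uses F.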

2. Systems of Linear ODE’s


A linear system of ODE's is a system of equations in which the highest
derivative of the unknown functions is the first derivative, and all unknown
functions and their derivatives occur only to the first power and are not
multiplied by other unknown functions; i.e., a system of n linear first-order
equations is a system of ODE's of the form


\[
\begin{cases}
x_1'(t) = p_{11}(t)x_1(t) + p_{12}(t)x_2(t) + \dots + p_{1n}(t)x_n(t) + g_1(t),\\
x_2'(t) = p_{21}(t)x_1(t) + p_{22}(t)x_2(t) + \dots + p_{2n}(t)x_n(t) + g_2(t),\\
\qquad\vdots\\
x_n'(t) = p_{n1}(t)x_1(t) + p_{n2}(t)x_2(t) + \dots + p_{nn}(t)x_n(t) + g_n(t),
\end{cases}
\tag{2.1}
\]

where x_1(t), ..., x_n(t) are unknown functions and p_{ij}(t), g_i(t),
i, j = 1, ..., n, are given functions.
It is convenient to write the system (2.1) in the form

\[
x'(t) = P(t)x(t) + g(t), \tag{2.2}
\]

where

\[
x(t) = \begin{pmatrix} x_1(t)\\ x_2(t)\\ \vdots\\ x_n(t) \end{pmatrix}
\ \text{is the unknown vector-function,}\quad
x'(t) = \begin{pmatrix} x_1'(t)\\ x_2'(t)\\ \vdots\\ x_n'(t) \end{pmatrix},
\qquad
g(t) = \begin{pmatrix} g_1(t)\\ g_2(t)\\ \vdots\\ g_n(t) \end{pmatrix}
\]

is a given source function, and

\[
P(t) = \begin{pmatrix}
p_{11}(t) & p_{12}(t) & \dots & p_{1n}(t)\\
p_{21}(t) & p_{22}(t) & \dots & p_{2n}(t)\\
\vdots & \vdots & \ddots & \vdots\\
p_{n1}(t) & p_{n2}(t) & \dots & p_{nn}(t)
\end{pmatrix}
\]

is a given matrix-function. The system of equations

\[
x'(t) = P(t)x(t) \tag{2.3}
\]

is called the homogeneous system corresponding to the system (2.2).
Theorem 2.1. If the functions p_{ij}(t), g_i(t), i, j = 1, ..., n, are
continuous on some interval I, t_0 ∈ I, and x_0 ∈ R^n is a given vector, then
the problem

\[
\begin{cases}
x'(t) = P(t)x(t) + g(t),\\
x(t_0) = x_0
\end{cases}
\tag{2.4}
\]

has a unique solution on I.
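A numerical sketch of the Cauchy problem (2.4) for a small system (the matrix P, the source g, and the initial data below are made up for the illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical instance of the Cauchy problem (2.4):
#   x'(t) = P(t) x(t) + g(t),  x(t0) = x0,
# with assumed data P(t) = [[0, 1], [-1, 0]] and g(t) = (0, t)^T on I = [0, 5].
def P(t):
    return np.array([[0.0, 1.0], [-1.0, 0.0]])

def g(t):
    return np.array([0.0, t])

def rhs(t, x):
    return P(t) @ x + g(t)

t0, x0 = 0.0, np.array([1.0, 0.0])
sol = solve_ivp(rhs, (t0, 5.0), x0, rtol=1e-10, atol=1e-10)

# Continuity of the coefficients guarantees a unique solution on I;
# the solver returns one trajectory through (t0, x0).
print(sol.y[:, -1])
```

For this particular choice one can check by hand that x_1(t) = cos t - sin t + t and x_2(t) = 1 - sin t - cos t solve the problem, which matches the numerical output.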
Definition 2.2. Given vector-functions

\[
f^{(1)}(t) = \begin{pmatrix} f_{11}(t)\\ f_{21}(t)\\ \vdots\\ f_{n1}(t) \end{pmatrix},\quad
f^{(2)}(t) = \begin{pmatrix} f_{12}(t)\\ f_{22}(t)\\ \vdots\\ f_{n2}(t) \end{pmatrix},\quad\dots,\quad
f^{(n)}(t) = \begin{pmatrix} f_{1n}(t)\\ f_{2n}(t)\\ \vdots\\ f_{nn}(t) \end{pmatrix},
\]

the determinant

\[
W[f^{(1)}, \dots, f^{(n)}](t) =
\begin{vmatrix}
f_{11} & f_{12} & \dots & f_{1n}\\
f_{21} & f_{22} & \dots & f_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
f_{n1} & f_{n2} & \dots & f_{nn}
\end{vmatrix}
\]

is called the Wronski determinant, or the Wronskian, of the vector-functions
f^{(1)}(t), ..., f^{(n)}(t).
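In code, the Wronskian at a point t is simply the determinant of the matrix whose columns are the vector-functions evaluated at t. A minimal sketch, with made-up vector-functions:

```python
import numpy as np

# Wronskian of n vector-functions evaluated at time t:
# stack f^(1)(t), ..., f^(n)(t) as columns and take the determinant.
def wronskian(vectors_at_t):
    # vectors_at_t: list of 1-D arrays f^(k)(t), each of length n
    return np.linalg.det(np.column_stack(vectors_at_t))

# Illustrative check (assumed data): f^(1)(t) = (e^t, e^t)^T and
# f^(2)(t) = (e^t, -e^t)^T give W(t) = -2 e^{2t}.
t = 0.7
f1 = np.array([np.exp(t), np.exp(t)])
f2 = np.array([np.exp(t), -np.exp(t)])
print(wronskian([f1, f2]))  # ≈ -2 * exp(1.4)
```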

Theorem 2.3. If x_1(t), ..., x_n(t) are solutions of the equation (2.3) on
(α, β), and

\[
W[x_1, \dots, x_n](t_0) \neq 0
\]

for some t_0 ∈ (α, β), then the general solution of the system has the form

\[
x(t) = C_1 x_1(t) + \dots + C_n x_n(t),
\]

where C_1, ..., C_n are arbitrary constants.
Proof. Let φ(t) be an arbitrary solution of the equation (2.3) and consider
the system of equations

\[
\begin{cases}
C_1 x_{11}(t_0) + C_2 x_{12}(t_0) + \dots + C_n x_{1n}(t_0) = \varphi_1^0,\\
C_1 x_{21}(t_0) + C_2 x_{22}(t_0) + \dots + C_n x_{2n}(t_0) = \varphi_2^0,\\
\qquad\vdots\\
C_1 x_{n1}(t_0) + C_2 x_{n2}(t_0) + \dots + C_n x_{nn}(t_0) = \varphi_n^0,
\end{cases}
\tag{2.5}
\]

where

\[
\begin{pmatrix} \varphi_1^0\\ \varphi_2^0\\ \vdots\\ \varphi_n^0 \end{pmatrix} = \varphi(t_0),\quad
\begin{pmatrix} x_{11}(t_0)\\ x_{21}(t_0)\\ \vdots\\ x_{n1}(t_0) \end{pmatrix} = x_1(t_0),\ \dots,\
\begin{pmatrix} x_{1n}(t_0)\\ x_{2n}(t_0)\\ \vdots\\ x_{nn}(t_0) \end{pmatrix} = x_n(t_0).
\]

Since the determinant of the system (2.5) equals W[x_1, ..., x_n](t_0) ≠ 0,
the system has a unique solution (C_1^*, C_2^*, ..., C_n^*). Thus the
vector-function

\[
x^*(t) = C_1^* x_1(t) + C_2^* x_2(t) + \dots + C_n^* x_n(t)
\]

is a solution of the system (2.3) that satisfies the initial condition
x^*(t_0) = φ(t_0). So the vector-functions x^*(t) and φ(t) satisfy the same
system and the same initial condition. Therefore, by Theorem 2.1,

\[
\varphi(t) = C_1^* x_1(t) + C_2^* x_2(t) + \dots + C_n^* x_n(t),
\]

i.e. an arbitrary solution φ(t) of the system (2.3) is a linear combination of
the vector-functions x_1(t), x_2(t), ..., x_n(t), provided
W[x_1, ..., x_n](t_0) ≠ 0. □
Definition 2.4. If x_1(t), ..., x_n(t) are solutions of the equation (2.3) on
(α, β) and

\[
W[x_1, \dots, x_n](t_0) \neq 0
\]

for some t_0 ∈ (α, β), then x_1(t), ..., x_n(t) is called a fundamental set of
solutions of the system (2.3).
Theorem 2.5. A general solution of the system (2.2) has the form

\[
x(t) = x_h(t) + v(t),
\]

where x_h(t) is a general solution of the homogeneous system (2.3) and v(t) is
some particular solution of the nonhomogeneous system (2.2).
The proof of this theorem follows from the fact that if x_1(t) and x_2(t) are
solutions of the nonhomogeneous system (2.2), then their difference is a
solution of the corresponding homogeneous equation

\[
x'(t) = P(t)x(t). \tag{2.6}
\]
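This fact is easy to observe numerically. A minimal sketch, with an assumed constant-coefficient system: the difference of two solutions of the nonhomogeneous system coincides with the solution of the homogeneous system started from the difference of the initial values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed data for the illustration: x' = P x + g with
# P = [[0, 1], [-2, -3]] and constant source g = (1, 0)^T.
P = np.array([[0.0, 1.0], [-2.0, -3.0]])
g = np.array([1.0, 0.0])

def rhs(t, x):
    return P @ x + g

# Two different solutions of the nonhomogeneous system ...
sol_a = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0],
                  rtol=1e-10, atol=1e-10, dense_output=True)
sol_b = solve_ivp(rhs, (0.0, 1.0), [0.0, 2.0],
                  rtol=1e-10, atol=1e-10, dense_output=True)

# ... and the solution of the homogeneous system x' = P x started from
# the difference of the initial values, (1, 0) - (0, 2) = (1, -2).
sol_h = solve_ivp(lambda t, x: P @ x, (0.0, 1.0), [1.0, -2.0],
                  rtol=1e-10, atol=1e-10, dense_output=True)

ts = np.linspace(0.0, 1.0, 50)
diff = sol_a.sol(ts) - sol_b.sol(ts)
max_dev = np.max(np.abs(diff - sol_h.sol(ts)))
print(max_dev)  # close to zero
```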

There is an analog of Abel's Theorem for the homogeneous equation (2.6), i.e.
for the system

\[
\begin{cases}
x_1'(t) = p_{11}(t)x_1(t) + p_{12}(t)x_2(t) + \dots + p_{1n}(t)x_n(t),\\
x_2'(t) = p_{21}(t)x_1(t) + p_{22}(t)x_2(t) + \dots + p_{2n}(t)x_n(t),\\
\qquad\vdots\\
x_n'(t) = p_{n1}(t)x_1(t) + p_{n2}(t)x_2(t) + \dots + p_{nn}(t)x_n(t).
\end{cases}
\tag{2.7}
\]

Theorem 2.6 (Abel's Theorem). If the coefficients of the system (2.6) are
continuous on some interval (α, β) and the vector-functions

\[
x_1(t) = \begin{pmatrix} x_{11}(t)\\ x_{21}(t)\\ \vdots\\ x_{n1}(t) \end{pmatrix},\ \dots,\
x_n(t) = \begin{pmatrix} x_{1n}(t)\\ x_{2n}(t)\\ \vdots\\ x_{nn}(t) \end{pmatrix}
\]

are solutions of the system (2.6), then the following formula for the
Wronskian W(t) of these vector-functions holds true:

\[
W(t) = C e^{\int [p_{11}(t) + \dots + p_{nn}(t)]\,dt}. \tag{2.8}
\]
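Abel's formula can be checked numerically in the constant-coefficient case, where the columns of the matrix exponential e^{tP} form n solutions with W(0) = det(I) = 1, so (2.8) predicts W(t) = e^{t·tr P}. The matrix below is made up for the check.

```python
import numpy as np
from scipy.linalg import expm

# Assumed constant-coefficient system x' = P x.
P = np.array([[1.0, 2.0], [0.5, -0.5]])

# Phi(t) = expm(t P) satisfies Phi' = P Phi, Phi(0) = I, so its columns are
# n independent solutions and W(t) = det(Phi(t)).
t = 0.8
W_t = np.linalg.det(expm(t * P))

# Abel's formula (2.8) with W(0) = 1: W(t) = exp(trace(P) * t).
abel = np.exp(np.trace(P) * t)
print(W_t, abel)  # the two values agree
```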

Proof. We will prove this theorem for a system of two equations. Suppose that

\[
x_1(t) = \begin{pmatrix} x_{11}(t)\\ x_{21}(t) \end{pmatrix}
\quad\text{and}\quad
x_2(t) = \begin{pmatrix} x_{12}(t)\\ x_{22}(t) \end{pmatrix}
\]

are solutions of the system of equations

\[
\begin{cases}
x_1'(t) = p_{11}(t)x_1(t) + p_{12}(t)x_2(t),\\
x_2'(t) = p_{21}(t)x_1(t) + p_{22}(t)x_2(t),
\end{cases}
\tag{2.9}
\]

i.e.

\[
\begin{cases}
x_{11}'(t) = p_{11}(t)x_{11}(t) + p_{12}(t)x_{21}(t),\\
x_{21}'(t) = p_{21}(t)x_{11}(t) + p_{22}(t)x_{21}(t),
\end{cases}
\tag{2.10}
\]

\[
\begin{cases}
x_{12}'(t) = p_{11}(t)x_{12}(t) + p_{12}(t)x_{22}(t),\\
x_{22}'(t) = p_{21}(t)x_{12}(t) + p_{22}(t)x_{22}(t).
\end{cases}
\tag{2.11}
\]

According to the formula for the derivative of a determinant (see Lecture 18)
we have

\[
\frac{d}{dt}W[x_1, x_2](t)
= \frac{d}{dt}\begin{vmatrix} x_{11}(t) & x_{12}(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= \begin{vmatrix} x_{11}'(t) & x_{12}'(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
+ \begin{vmatrix} x_{11}(t) & x_{12}(t)\\ x_{21}'(t) & x_{22}'(t) \end{vmatrix}. \tag{2.12}
\]

Employing the first equation in (2.10) and the first equation in (2.11), we
obtain

\[
\begin{vmatrix} x_{11}'(t) & x_{12}'(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= \begin{vmatrix} p_{11}(t)x_{11}(t) + p_{12}(t)x_{21}(t) & p_{11}(t)x_{12}(t) + p_{12}(t)x_{22}(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}.
\]

Multiplying the second row of the determinant on the right-hand side by
−p_{12}(t) and adding it to the first row, we obtain

\[
\begin{vmatrix} x_{11}'(t) & x_{12}'(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= \begin{vmatrix} p_{11}(t)x_{11}(t) & p_{11}(t)x_{12}(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= p_{11}(t)\begin{vmatrix} x_{11}(t) & x_{12}(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= p_{11}(t)W[x_1, x_2](t). \tag{2.13}
\]

Similarly we get

\[
\begin{vmatrix} x_{11}(t) & x_{12}(t)\\ x_{21}'(t) & x_{22}'(t) \end{vmatrix}
= p_{22}(t)\begin{vmatrix} x_{11}(t) & x_{12}(t)\\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= p_{22}(t)W[x_1, x_2](t). \tag{2.14}
\]

Employing the equalities (2.13) and (2.14) in (2.12), we obtain

\[
\frac{d}{dt}W[x_1, x_2](t) = [p_{11}(t) + p_{22}(t)]W[x_1, x_2](t). \tag{2.15}
\]

For a system of n ODE's we similarly get

\[
\frac{d}{dt}W[x_1, \dots, x_n](t)
= [p_{11}(t) + \dots + p_{nn}(t)]W[x_1, \dots, x_n](t). \tag{2.16}
\]

Integrating this equation we obtain the desired Abel's formula (2.8). The
equation (2.16) also implies that

\[
W[x_1, \dots, x_n](t)
= W[x_1, \dots, x_n](t_0)\, e^{\int_{t_0}^{t}[p_{11}(\tau) + \dots + p_{nn}(\tau)]\,d\tau}. \qquad\square
\]

It follows from this formula that if the Wronskian of the solutions
x_1(t), ..., x_n(t) is zero at some point, then it is zero at each point t.
It is clear that if n vector-functions x_1(t), ..., x_n(t) are linearly
dependent on some interval (α, β), then their Wronskian is zero for each
t ∈ (α, β).
The converse of this statement is not true. In fact, the vector-functions

\[
y_1(t) = \begin{pmatrix} t^3\\ t^2 \end{pmatrix}
\quad\text{and}\quad
y_2(t) = \begin{pmatrix} t^2|t|\\ t|t| \end{pmatrix}
\]

are linearly independent on \(\mathbb{R}\), but

\[
W[y_1, y_2](t) = 0, \quad \forall t \in \mathbb{R}.
\]
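A quick computational check of this counterexample: the Wronskian vanishes everywhere, yet no single constant c gives y_2(t) = c·y_1(t) on all of R, since y_2 = y_1 for t > 0 but y_2 = -y_1 for t < 0.

```python
import numpy as np

# The counterexample: y1(t) = (t^3, t^2)^T, y2(t) = (t^2 |t|, t |t|)^T.
def y1(t):
    return np.array([t**3, t**2])

def y2(t):
    return np.array([t**2 * abs(t), t * abs(t)])

def W(t):
    # Wronskian: determinant of the matrix with columns y1(t), y2(t).
    return np.linalg.det(np.column_stack([y1(t), y2(t)]))

# The Wronskian vanishes identically (up to rounding) ...
ts = np.linspace(-2.0, 2.0, 9)
print(max(abs(W(t)) for t in ts))

# ... yet the functions are linearly independent on R:
# y2 = y1 on t > 0 but y2 = -y1 on t < 0.
print(y2(1.0) - y1(1.0))    # zero vector
print(y2(-1.0) + y1(-1.0))  # zero vector
```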
