CONTENTS

CHAPTER NO    TITLE

              INTRODUCTION
I             BASIC DEFINITIONS
II            UPPER AND LOWER SOLUTIONS
III           BIHARI'S INEQUALITY AND VARIATION OF PARAMETERS
              CONCLUSION
              BIBLIOGRAPHY

ANALYSIS AND METHODS OF NON-LINEAR DIFFERENTIAL EQUATIONS

INTRODUCTION

This chapter is a study of nonlinear differential equations, wherein we aim at establishing some existence theorems. At this stage the existence theory of differential equations broadens its scope; the price paid for this expansion is the loss of the uniqueness property of solutions.

Since there may be more than one solution, we define the maximal and the minimal solutions of an IVP and establish some of their properties. Upper and lower solutions of differential equations play an important role and lead to the well-known "comparison principle". The monotone iterative technique is yet another constructive method for establishing the existence of solutions, in particular the extremal solutions. The integral inequality due to Bihari has several applications in the study of a class of nonlinear differential equations.

CHAPTER - I

BASIC DEFINITIONS

1. Differential equation:-

An equation involving one dependent variable and its derivatives with respect to one (or) more independent variables is called a differential equation.

Example:-

d²y/dx² = ky

2. Independent variable:-

If two variables x and y are such that y can be expressed in terms of x, so that the value of y changes as the value of x changes, then x is called the independent variable.

3. Dependent variable:-

A variable whose values are determined by one (or) more (independent) variables is called a dependent variable.

4. Linear differential equation:-

A differential equation of the first degree in y and its derivatives, in which the coefficients of y and its derivatives are functions of x only, is called a linear differential equation.

5. Non-linear differential equation:-

F(t, x, x', x'', …, x^(n)) = 0, t Є I → (1)

is called a non-linear differential equation of order n, where I = [a,b] ⊂ R and F is a real (or) complex valued function defined on I × R^(n+1), where R^(n+1) is the real (n+1)-dimensional space consisting of (n+1)-tuples of the form

(x(t), x'(t), x''(t), …, x^(n)(t)),

where t varies over I and x is an unknown function of t having the derivatives x', x'', x''', …, x^(n) up to order n.

6. Connected set:-

A set A in R^n is connected if it is not contained in the union of two disjoint open sets each of which meets A.

(1.1) EXISTENCE THEOREM:-

We begin with an initial value problem

x' = f(t,x), x(t0) = x0, → (1.1)

where f Є C[D,R], D is an open connected set in R² and (t0,x0) Є D. In the subsequent discussions we choose D appropriately. In order to prove the classical Peano existence theorem, we have to introduce the notion of an equi-continuous family of functions.

(1.2) Initial value problem:-

The problem of finding a solution Ф of L(y) = 0 which satisfies Ф(x0) = α, Ф'(x0) = β, where x0 is some fixed point and α, β are constants, is called an initial value problem. The IVP is denoted by L(y) = 0, y(x0) = α, y'(x0) = β.

(1.3) Equi-continuous:-

A family of functions F = {fα(t)}, α Є A, defined on a real interval I is said to be equi-continuous on I if for any given ε > 0 there exists a δ = δ(ε) > 0, independent of fα Є F and also of t1, t2 in I, such that |fα(t1) − fα(t2)| < ε whenever |t1 − t2| < δ.

(1.4) Ascoli's Lemma:-

Let F = {fα(t)} be a family of functions which is uniformly bounded and equi-continuous on an interval I. Then every sequence of functions {fn(t)} in F contains a subsequence {fnk(t)}, k = 0,1,2,…, which is uniformly convergent on every compact sub-interval of I.

Theorem:- (1.4)

Let the function f(t,x) be continuous and bounded on the infinite strip

S = { (t,x) : t0 ≤ t ≤ t0+h, |x| < ∞ } (h > 0) in D.

Then the initial value problem (1.1) has at least one solution x(t) existing on the interval I = [t0, t0+h].

Proof:-

Define a sequence {xn} by

xn(t) = x0, t Є [t0, t0+h/n];

xn(t) = x0 + ∫_{t0}^{t−h/n} f(s, xn(s)) ds, t Є [t0+kh/n, t0+(k+1)h/n], (k = 1,2,3,…,n−1). → (1.2)

It is interesting to look into the definition of xn closely. xn(t) = x0 is defined first on the interval [t0, t0+h/n]. This definition is used to define xn on the interval [t0+h/n, t0+2h/n]. Extend the definition of xn to the next interval t0+2h/n ≤ t ≤ t0+3h/n. Continue this process till the interval t0 ≤ t ≤ t0+h is covered.

Since f is bounded on S, there exists a constant M > 0 such that

|f(t,x)| ≤ M, (t,x) Є S.

Hence, for t1, t2 Є I, from (1.2) we obtain

|xn(t1) − xn(t2)| ≤ M |t1 − t2|,

implying that the sequence {xn(t)} is equi-continuous on I. Further, in view of the definition of xn, we have

|xn(t)| ≤ |x0| + Mh,

implying that {xn(t)} is uniformly bounded on I. Recalling Ascoli's lemma, we claim that there exists a subsequence {xnk}, k = 0,1,2,…, which converges uniformly on I to a continuous function x(t).
Clearly, xnk(t) = x0 + ∫_{t0}^{t} f(s, xnk(s)) ds − ∫_{t−h/nk}^{t} f(s, xnk(s)) ds.

Let k → ∞. We can take the limit inside the first integral since f is continuous on S and the convergence is uniform. The second integral tends to zero. Hence

x(t) = x0 + ∫_{t0}^{t} f(s, x(s)) ds, t Є I,

and x is a solution of the IVP (1.1) on I.
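The construction used in this proof can be imitated numerically. The sketch below is an added illustration (not part of the original text); it assumes NumPy is available, and the helper name peano_approx is ours. It builds the n-th delayed-integral approximation xn of (1.2) on a grid and applies it to x' = x, x(0) = 1, whose exact solution is e^t.

import numpy as np

def peano_approx(f, t0, x0, h, n, m=20):
    # n-th approximation from (1.2): xn(t) = x0 on [t0, t0+h/n], and
    # xn(t) = x0 + integral of f(s, xn(s)) over [t0, t - h/n] afterwards.
    d = h / n                        # the delay h/n
    dt = d / m                       # grid step: m points per delay interval
    t = t0 + dt * np.arange(n * m + 1)
    x = np.full(t.shape, float(x0))
    for j in range(m + 1, len(t)):
        s, fs = t[:j - m + 1], f(t[:j - m + 1], x[:j - m + 1])
        # trapezoidal rule for the integral up to t[j] - h/n = t[j - m]
        x[j] = x0 + np.sum(0.5 * (fs[1:] + fs[:-1]) * np.diff(s))
    return t, x

t, x = peano_approx(lambda s, u: u, 0.0, 1.0, 1.0, n=50)
print(x[-1], np.exp(1.0))   # the approximation approaches e as n grows

As n grows the delay h/n shrinks and the computed approximations settle down to the limit function x(t) whose existence the theorem asserts.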
Extremal solutions:-

It has been pointed out earlier that the IVP (1.1) may possess more than one solution. The uniqueness property holds only when f satisfies some constraints. In the absence of this property, we obtain the notion of extremal solutions, i.e., maximal and minimal solutions.

Maximal and minimal solution:-

(i) Let r(t) be a solution of the equation (1.1) on [t0, t0+h]. Then r(t) is said to be a maximal solution of (1.1) if, for every solution x(t) of (1.1) existing on [t0, t0+h], the inequality

x(t) ≤ r(t), t Є [t0, t0+h], holds.

(ii) Let ρ(t) be a solution of the equation (1.1) on [t0, t0+h]. Then ρ(t) is said to be a minimal solution of (1.1) if, for every solution x(t) of (1.1) existing on [t0, t0+h], the inequality

ρ(t) ≤ x(t), t Є [t0, t0+h], holds.

Example:-

For the IVP

x' = 3x^(2/3), x(0) = 0,

f(t,x) = 3x^(2/3) is continuous on −∞ < t < ∞, −∞ < x < ∞. It is seen that for each constant c > 0, the function Фc defined by

Фc(t) = 0, −∞ < t ≤ c;
Фc(t) = (t−c)³, c < t < ∞,

is a solution of the IVP; there are an infinite number of solutions. We also note that

r(t) = 0, −∞ < t ≤ 0;
r(t) = t³, 0 < t < ∞,

and ρ(t) = 0, −∞ < t < ∞,

are also solutions of the IVP. Here r(t) and ρ(t) are the maximal and the minimal solutions of the IVP respectively.
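A quick numerical check of this non-uniqueness, added here as an illustration (it assumes NumPy), verifies that both r(t) = t³ and ρ(t) = 0 satisfy x' = 3x^(2/3) with x(0) = 0:

import numpy as np

t = np.linspace(0.0, 2.0, 201)
r, rho = t**3, np.zeros_like(t)
# r' = 3t^2 and rho' = 0 should equal 3 x^(2/3) along each candidate solution
print(np.allclose(3 * t**2, 3 * r**(2.0 / 3.0)))
print(np.allclose(np.zeros_like(t), 3 * rho**(2.0 / 3.0)))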

CHAPTER - II

(2.1) Upper and lower solutions:-

(i) A function w Є C'[(t0,t0+h), R] is said to be an upper solution of (1.1)

if w' ≥ f(t,w), w(t0) ≥ x0, t Є (t0,t0+h).

(ii) A function v Є C'[(t0,t0+h), R] is said to be a lower solution of (1.1)

if v' ≤ f(t,v), v(t0) ≤ x0, t Є (t0,t0+h).

(2.2)Example:-

Let f(t,x) = x², x0 = −1 in (1.1).

Consider v(t) = −2/(t+1), w(t) = −1/(2(t+1)).

Then v'(t) = 2/(t+1)², w'(t) = 1/(2(t+1)²), f(t,v) = 4/(t+1)², f(t,w) = 1/(4(t+1)²), v(0) = −2, w(0) = −1/2. Clearly v and w are lower and upper solutions respectively.
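The inequalities claimed in this example can be checked numerically. The short script below is an added illustration (it assumes NumPy) and evaluates them on [0, 5]:

import numpy as np

t = np.linspace(0.0, 5.0, 501)
v, w = -2.0 / (t + 1), -1.0 / (2 * (t + 1))
vp, wp = 2.0 / (t + 1)**2, 1.0 / (2 * (t + 1)**2)   # v'(t) and w'(t)
f = lambda t, x: x**2
print(np.all(vp <= f(t, v)))     # v' <= f(t, v): v is a lower solution
print(np.all(wp >= f(t, w)))     # w' >= f(t, w): w is an upper solution
print(v[0] <= -1.0 <= w[0])      # v(0) <= x0 = -1 <= w(0)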

A fundamental result on differential inequalities is the following theorem.

Theorem:- (2.3)

Let v, w Є C'[(t0,t0+h), R] be lower and upper solutions of (1.1) respectively. Suppose that, for x ≥ y, f satisfies the inequality

f(t,x) – f(t,y) ≤ L(x−y), → (1.3)

where L is a constant. Then v(t0) ≤ w(t0) implies that v(t) ≤ w(t), t Є [t0,t0+h].

Proof:-

Let us prove the theorem for strict inequalities. To this end, let us suppose that

w' > f(t,w), v' ≤ f(t,v), t Є (t0,t0+h),

and v(t0) < w(t0).

We shall prove that v(t) < w(t), t Є (t0,t0+h).

If this is false, there exists a t1 Є (t0,t0+h) such that

v(t1) = w(t1) and v(t) < w(t), t Є (t0,t1).

Hence, for small δ > 0 we have

v(t1−δ) − v(t1) < w(t1−δ) − w(t1),

which implies that v'(t1) ≥ w'(t1).

This yields the inequality

f(t1,v(t1)) ≥ v'(t1) ≥ w'(t1) > f(t1,w(t1)) = f(t1,v(t1)),

which is a contradiction.

Hence, v(t) < w(t) for t Є (t0,t0+h).

Let us now define W(t) = w(t) + εe^(2Lt),

where ε > 0 is sufficiently small and L is given in (1.3). Then W(t) > w(t) and, by (1.3), we get

W' = w'(t) + 2Lεe^(2Lt) ≥ f(t,w(t)) − f(t,W(t)) + f(t,W(t)) + 2Lεe^(2Lt)

≥ −L(W(t) − w(t)) + f(t,W(t)) + 2Lεe^(2Lt)

= f(t,W(t)) + Lεe^(2Lt) > f(t,W(t)).

Here we have employed the inequality (1.3). Also v(t0) ≤ w(t0) < W(t0). It therefore follows from the previous argument that v(t) < W(t) for t0 ≤ t < t0+h. Letting ε → 0, it follows that

v(t) ≤ w(t), t0 ≤ t ≤ t0+h.

The proof is complete

Theorem:-( 2.4)

Let m Є C[(t0,t0+h), R], f Є C[(t0,t0+h) × R, R] and

D+ m(t) = lim sup_{h→0+} (1/h)[m(t+h) − m(t)] ≤ f(t,m(t)), t Є (t0,t0+h).

Then m(t0) ≤ x0 implies m(t) ≤ r(t), t Є [t0,t0+h),

where r(t) is the maximal solution of (1.1) existing on [t0,t0+h).

Proof:-

Let F be such that

F(t,x) = f(t,x), x ≥ m(t);
F(t,x) = f(t,m(t)), x < m(t).

Let x(t) be a solution of x' = F(t,x), x(t0) = x0.

Suppose that x(t) < m(t) for some t. Then there exists a t1 > t0 such that x(t1) ≤ m(t1) and x'(t1) < D+ m(t1).

Hence, D+ m(t1) ≤ f(t1,m(t1)) = F(t1,x(t1)) = x'(t1),

which contradicts D+ m(t1) > x'(t1). It therefore follows that x(t) ≥ m(t), which implies that x(t) is a solution of (1.1), in view of the definition of F.

Since r(t) is the maximal solution of (1.1), we have x(t) ≤ r(t), from which it follows that

m(t) ≤ r(t), t0 ≤ t < t0+h.
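As a concrete illustration of this comparison theorem (added here, not from the original text), take f(t,u) = u and x0 = 1, so that the maximal solution of (1.1) is r(t) = e^t. Any function m with D+ m(t) ≤ m(t) and m(t0) ≤ 1, for instance m(t) = e^(t/2), must then stay below r. A short check assuming NumPy:

import numpy as np

t = np.linspace(0.0, 3.0, 301)
m, r = np.exp(t / 2), np.exp(t)    # m' = m/2 <= f(t, m) = m and m(0) = 1 <= x0
print(np.all(m <= r))              # m(t) <= r(t) on [0, 3], as the theorem asserts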

Theorem:-(2.5)

Assume that,

(i) g(t,u) is continuous and non-negative for t Є [t0,t0+h), 0 < u < 2b, and for every t1 Є (t0,t0+h), u(t) ≡ 0, t Є [t0,t1), is the only solution of the IVP

u' = g(t,u), u(t0) = 0, t Є [t0,t1); → (1.4)

(ii) f Є C[R0,R], where

R0 = { (t,x) : t0 ≤ t ≤ t0+a, |x−x0| ≤ b, a > 0, b > 0 },

and for (t,x), (t,y) Є R0,

|f(t,x) – f(t,y)| ≤ g(t,|x−y|). → (1.5)

Then the IVP (1.1) has at most one solution on [t0,t0+h].

Proof:-
Suppose that there are two solutions x1(t) and x2(t) of (1.1) on [t0,t0+h].

Define m(t) = |x1(t) − x2(t)|.

Then D+ m(t) ≤ |x1'(t) – x2'(t)| = |f(t,x1(t)) − f(t,x2(t))| ≤ g(t,m(t)).

Here we have used the condition (1.5). Also m(t0) = 0. We now apply the comparison Theorem (2.4) and get

m(t) ≤ r(t), t0 ≤ t < t1, t1 Є (t0,t0+h),

where r(t) is the maximal solution of the IVP (1.4). The assumption (i) ensures that r(t) = 0 on [t0,t1). Hence x1(t) = x2(t) for t0 ≤ t ≤ t0+h.

The proof is complete.

(2.6) Example:-

Consider the case when 0 ≤ f(t,x) ≤ λ(t) for t Є [0,T] and 0 ≤ x ≤ A, for some A > 0. Here λ(t) = max{ f(t,x) : 0 ≤ x ≤ A }.

Choose f1 = 0, f2 = λ(t). Then we see that v(t) = 0 and w(t) = A/2 + ∫_{0}^{t} λ(s) ds serve as lower and upper solutions for 0 ≤ t ≤ T, provided ∫_{0}^{T} λ(s) ds ≤ A/2.
Theorem:- (2.7)

Let f Є C[[t0,t0+h) × R, R] and |f(t,x)| ≤ M. Then there exists a solution of the IVP (1.1) on [t0,t0+h).

Proof:-

Let x(t) be a solution of (1.1) with |x0| ≤ r0 for some r0 > 0, which exists on [t0,β) for t0 < β < t0+h and such that β cannot be increased. Define m(t) = |x(t)| for t0 ≤ t < β.

Then we have

D+ m(t) ≤ |x'(t)| = |f(t,x(t))| ≤ M, t0 ≤ t < β, and m(t0) = |x(t0)| = |x0| ≤ r0.

We now use Theorem (2.4). Observe that the comparison equation is

u' = M, u(t0) = r0,

having the solution u(t) = r0 + M(t−t0). This shows that |x(t)| ≤ r0 + M(t−t0), t0 ≤ t < β.

We now show that lim_{t→β⁻} x(t) exists. For any t1, t2 such that t0 < t1 < t2 < β, we have

|x(t1) − x(t2)| ≤ ∫_{t1}^{t2} |f(s,x(s))| ds ≤ M ∫_{t1}^{t2} ds = M(t2 − t1).

Taking the limit as t1, t2 → β⁻ and using the Cauchy criterion for convergence, it follows that lim_{t→β⁻} x(t) exists.

Define x(β) = lim_{t→β⁻} x(t) and consider the IVP x' = f(t,x) with x(β) as the initial condition at t = β.

Then the solution x(t) can be continued beyond β. This conclusion contradicts the assumption. Hence, there exists a solution x(t) of (1.1) on [t0,t0+h).

The proof is complete.

(2.8) Existence via upper and lower solutions:-

If we know the existence of upper and lower solutions w, v such that v ≤ w, we can prove the existence of solutions in the closed set

Ω = { (t,x) : v(t) ≤ x ≤ w(t), t0 ≤ t < t0+h }.

Theorem:-(2.9)
Let I = [t0,t0+h), and let v, w Є C'[I,R] be lower and upper solutions of (1.1) such that v(t) ≤ w(t) on I, and f Є C[Ω,R]. Then there exists a solution x(t) of (1.1) such that v(t) ≤ x(t) ≤ w(t) on I.

Proof:-

Let p : I × R → R be defined by

p(t,x) = max[v(t), min(x,w(t))].

Then f(t,p(t,x)) defines a continuous extension of f to I × R which is also bounded, since f is bounded on Ω. Therefore, by Theorem (2.7), x' = f(t,p(t,x)), x(t0) = x0 has a solution x(t) on I. For ε > 0 consider

wε(t) = w(t) + ε(1+t),

vε(t) = v(t) − ε(1+t). Clearly vε(t0) < x0 < wε(t0).

We wish to show that vε(t) < x(t) < wε(t) on I.

Suppose that t1 Є (t0,t0+h) is such that vε(t) < x(t) < wε(t) on [t0,t1) and x(t1) = wε(t1). Then x(t1) > w(t1), and so p(t1,x(t1)) = w(t1). Also, v(t1) ≤ p(t1,x(t1)) ≤ w(t1).

Hence, w'(t1) ≥ f(t1,p(t1,x(t1))) = x'(t1). Since wε'(t1) > w'(t1), we have wε'(t1) > x'(t1). This contradicts x(t) < wε(t) for t Є [t0,t1) together with x(t1) = wε(t1). A similar argument applies at vε. Letting ε → 0, we get v(t) ≤ x(t) ≤ w(t) on I.

The proof is complete
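The truncation device p(t,x) = max[v(t), min(x,w(t))] is easy to implement. The sketch below is an added illustration (it assumes SciPy) and reuses f, v, w from Example 2.2; it solves the modified equation x' = f(t,p(t,x)) and confirms that the computed solution stays between v and w:

import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: x**2
v = lambda t: -2.0 / (t + 1)               # lower solution from Example 2.2
w = lambda t: -1.0 / (2 * (t + 1))         # upper solution from Example 2.2
p = lambda t, x: max(v(t), min(x, w(t)))   # the truncation p(t, x)

sol = solve_ivp(lambda t, x: f(t, p(t, x[0])), (0.0, 5.0), [-1.0], dense_output=True)
ts = np.linspace(0.0, 5.0, 200)
xs = sol.sol(ts)[0]
print(np.all((v(ts) <= xs + 1e-6) & (xs <= w(ts) + 1e-6)))   # v <= x <= w on I

Because the computed solution stays inside Ω, the truncation is never active along it, which is why a solution of the modified problem is also a solution of (1.1).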

Monotone iterative method and the method of quasilinearization:-

Theorem :- (2.10)

Let f Є C[I×R,R], and let v0, w0 be lower and upper solutions of (1.1) such that v0 ≤ w0 on I = [t0,t0+h]. Suppose further that

f(t,x) − f(t,y) ≥ −M(x−y) → (1.6)

for v0 ≤ y ≤ x ≤ w0 and M ≥ 0. Then there exist monotone sequences {vn}, {wn} such that vn → v and wn → w as n → ∞ uniformly and monotonically on I, and v, w are the minimal and maximal solutions of (1.1) respectively.

Proof:-

For any η Є C[I,R] such that v0 ≤ η ≤ w0, we consider the linear differential equation

x' = f(t,η) − M(x−η), x(t0) = x0. → (1.7)

It is clear that for every such η there exists a unique solution of (1.7) on I.

Define a mapping A by Aη = x. This mapping will be used to define the sequences {vn}, {wn}. Let us prove that

(a) v0 ≤ Av0, w0 ≥ Aw0;

(b) A is a monotone operator on the segment

[v0,w0] = { x Є C[I,R] : v0(t) ≤ x(t) ≤ w0(t), t Є I }.

To prove (a), set Av0 = v1, where v1 is the unique solution of (1.7) with η = v0. Setting Ф = v1 − v0, we see that

Ф' = v1' − v0' ≥ f(t,v0) − M(v1−v0) − f(t,v0) = −MФ; Ф(t0) ≥ 0.

This shows that Ф(t) ≥ Ф(t0)e^(−M(t−t0)) ≥ 0.

Hence v0 ≤ v1 on I, or equivalently v0 ≤ Av0. Similarly we can prove that w0 ≥ Aw0.

To prove (b), let η1, η2 Є [v0,w0] be such that η1 ≤ η2. Suppose that x1 = Aη1, x2 = Aη2. Set Ф = x2 − x1, so that

Ф' = f(t,η2) − M(x2−η2) − f(t,η1) + M(x1−η1)

≥ −M(η2−η1) − M(x2−η2) + M(x1−η1)

= −MФ, and Ф(t0) = 0.

As before this implies that Aη1 ≤ Aη2, proving (b). We can now define the sequences

vn = Avn−1, wn = Awn−1

and conclude from the previous arguments that

v0 ≤ v1 ≤ v2 ≤ ……. ≤ vn ≤ ……. ≤ wn ≤ ……. ≤ w2 ≤ w1 ≤ w0 on I.

It then follows that lim vn = v and lim wn = w as n → ∞ on I. It is easy to show that v and w are solutions of (1.1), in view of the fact that vn, wn satisfy

vn' = f(t,vn−1) – M(vn − vn−1), vn(t0) = x0;

wn' = f(t,wn−1) – M(wn − wn−1), wn(t0) = x0.

To prove that v, w are respectively the minimal and maximal solutions of (1.1), we have to show that if x is any solution of (1.1) such that v0 ≤ x ≤ w0 on I, then v0 ≤ v ≤ x ≤ w ≤ w0 on I. To do this, suppose that for some n, vn ≤ x ≤ wn on I, and set Ф = x − vn+1, so that

Ф' = f(t,x) − f(t,vn) + M(vn+1 − vn)

≥ −M(x − vn) + M(vn+1 − vn) = −MФ; Ф(t0) = 0.

Hence it follows that vn+1 ≤ x on I. Similarly x ≤ wn+1 on I.

Therefore, vn+1 ≤ x ≤ wn+1 on I.

Since v0 ≤ x ≤ w0 on I, this proves by induction that vn ≤ x ≤ wn on I for all n. Taking the limit as n → ∞, we conclude that v ≤ x ≤ w on I.

The proof is complete
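The iteration (1.7) can be carried out numerically. The sketch below is an added illustration (it assumes SciPy; the variable names are ours). It takes f(t,x) = x², x0 = -1 on [0,1], starts from the lower solution v0(t) = -2/(t+1) of Example 2.2, uses M = 4 (which satisfies (1.6) between the v0 and w0 of that example), and prints the distance of each iterate from the exact solution -1/(t+1):

import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: x**2
M = 4.0                                   # f(t,x) - f(t,y) >= -M(x - y) on [v0, w0]
ts = np.linspace(0.0, 1.0, 101)
eta = -2.0 / (ts + 1)                     # v0, the lower solution of Example 2.2

for n in range(6):
    # v_{n+1} solves x' = f(t, eta) - M (x - eta), x(0) = -1, i.e. equation (1.7)
    rhs = lambda t, x: f(t, np.interp(t, ts, eta)) - M * (x[0] - np.interp(t, ts, eta))
    eta = solve_ivp(rhs, (ts[0], ts[-1]), [-1.0], t_eval=ts, rtol=1e-8).y[0]
    print(n, np.max(np.abs(eta + 1.0 / (ts + 1))))   # distance to the exact -1/(t+1)

The printed distances shrink, mirroring the monotone convergence of the vn established in the theorem.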

Theorem:- (2.11)

Assume that,

(i) α0, β0 Є C'[J,R] are such that, for t Є J,

α0' ≤ f(t,α0), β0' ≥ f(t,β0) and α0(t) ≤ β0(t);

(ii) f Є C^(0,2)[Ω,R], fxx(t,x) ≥ 0 on Ω, where Ω = { (t,x) : α0(t) ≤ x ≤ β0(t), t Є J }.

Then there exist monotone sequences {αn(t)}, {βn(t)} which converge uniformly to the solution of the given IVP (1.1), and the convergence is quadratic.

Proof:-

In view of (ii) above, we see that whenever

α0(t) ≤ x2 ≤ x1 ≤ β0(t), f(t,x1) − f(t,x2) ≤ L(x1 − x2)

for some L ≥ 0, and further

f(t,x1) ≥ f(t,x2) + fx(t,x2)(x1 − x2).

To obtain the above relation, use Taylor's series to express f(t,x1) and use the fact that fxx(t,x) ≥ 0. Consider the related linear differential equations of order one, namely

α1' = f(t,α0) + fx(t,α0)(α1 − α0), α1(0) = x0;

β1' = f(t,β0) + fx(t,β0)(β1 − β0), β1(0) = x0, where α0(0) ≤ x0 ≤ β0(0).

Now set p = α0 − α1, so that

p' = α0' − α1' ≤ f(t,α0) − [f(t,α0) + fx(t,α0)(α1 − α0)]

= −fx(t,α0)(α1 − α0) = fx(t,α0)p.

Further, p(0) = α0(0) − α1(0) ≤ x0 − x0 = 0. Thus we have the differential inequality

p' ≤ fx(t,α0)p, p(0) ≤ 0,

which after integration yields p(t) ≤ 0 for t Є J.

Hence we conclude that α0(t) ≤ α1(t), t Є J.

Next, let p = α1 − β0. Note that p(0) ≤ 0. Also,

p' = α1' − β0' ≤ f(t,α0) + fx(t,α0)(α1−α0) − f(t,β0).

Observe that, since β0 ≥ α0, we have, in view of the assumption (ii) above,

f(t,β0) ≥ f(t,α0) + fx(t,α0)(β0−α0).

Substitute for f(t,β0) to get finally p' ≤ fx(t,α0)p, which together with p(0) ≤ 0 again implies

p(t) ≤ 0, t Є J, or α1(t) ≤ β0(t), t Є J.

Hence α0(t) ≤ α1(t) ≤ β0(t), t Є J.

In a similar manner we can prove that α0(t) ≤ β1(t) ≤ β0(t), t Є J. We now need to show that α1(t) ≤ β1(t), t Є J, so that we get

α0(t) ≤ α1(t) ≤ β1(t) ≤ β0(t), t Є J.

To prove this inequality, we get from α1(t) ≥ α0(t) that

f(t,α1) ≥ f(t,α0) + fx(t,α0)(α1−α0).

Also we have

α1' = f(t,α0) + fx(t,α0)(α1−α0), α1(0) = x0.

Hence it follows that α1' ≤ f(t,α1).

Similarly, we can prove that β1' ≥ f(t,β1), β1(0) = x0.

Hence, using Theorem (2.3), we conclude that

α1(t) ≤ β1(t), t Є J.

To employ the method of mathematical induction, assume that for some k > 1,

αk' ≤ f(t,αk), βk' ≥ f(t,βk), αk(t) ≤ βk(t), t Є J, and αk(0) = x0 = βk(0).

We then show that αk(t) ≤ αk+1(t) ≤ βk+1(t) ≤ βk(t), t Є J, where αk+1 and βk+1 are the solutions of the linear IVPs

αk+1' = f(t,αk) + fx(t,αk)(αk+1 − αk), αk+1(0) = x0,

βk+1' = f(t,βk) + fx(t,βk)(βk+1 − βk), βk+1(0) = x0.

As before, set p = αk − αk+1, so that p' ≤ fx(t,αk)p and p(0) = 0, which implies that p(t) ≤ 0, proving thereby αk(t) ≤ αk+1(t), t Є J.

We can similarly prove the inequalities αk+1(t) ≤ βk(t), βk+1(t) ≤ βk(t) and αk+1(t) ≤ βk+1(t) for t Є J. These details are omitted here. Hence by induction we have, for all n,

α0 ≤ α1 ≤ α2 ≤ …… ≤ αn ≤ βn ≤ βn−1 ≤ …… ≤ β1 ≤ β0 on J.

We now integrate the related equations for αk+1 and βk+1 and take the limit as k → ∞. Following the procedure given in Theorem (2.10), we conclude that the sequences {αn(t)}, {βn(t)} converge uniformly and monotonically to the unique solution x(t) of the given IVP on J. To complete the proof we need to show that the sequences {αn(t)}, {βn(t)} converge quadratically. By such convergence, we mean that if x(t) is the solution of the given IVP on J and |x(t) − αn+1(t)| and |x(t) − αn(t)| are the successive error functions, then there exists a constant λ > 0 such that

|x(t) − αn+1(t)| ≤ λ |x(t) − αn(t)|², n = 0,1,2,…

For this purpose, consider

pn+1(t) = x(t) − αn+1(t), qn+1(t) = βn+1(t) − x(t), n = 0,1,2,…

Note that pn+1(t) ≥ 0, qn+1(t) ≥ 0, and pn+1(0) = qn+1(0) = 0, n = 0,1,2,3,…

Now we have

pn+1' = x'(t) − αn+1'(t)

= f(t,x) − [f(t,αn) + fx(t,αn)(αn+1 − αn)]

= fx(t,η)(x − αn) − fx(t,αn)(αn+1 − x + x − αn)

= fx(t,η)pn − fx(t,αn)pn + fx(t,αn)pn+1

= fxx(t,ξ)(η − αn)pn + fx(t,αn)pn+1

≤ fxx(t,ξ)pn² + fx(t,αn)pn+1,

where αn < η < x and αn < ξ < η. Note that fxx ≥ 0.

Let |fxx(t,x)| ≤ M1, |fx(t,x)| ≤ L, (t,x) Є Ω.

Hence pn+1' ≤ M1pn² + Lpn+1, pn+1(0) = 0.

Solving the differential inequality, we have

(e^(−Lt) pn+1(t))' ≤ e^(−Lt) M1 pn²(t),

which after integration yields

e^(−Lt) pn+1(t) ≤ M1 ∫_{0}^{t} e^(−Ls) pn²(s) ds,

and finally

pn+1(t) ≤ (M1/L) max_{0≤s≤t} pn²(s) (e^(Lt) − 1)

≤ (M1/L) max_{0≤s≤t} pn²(s) e^(LT),

or max_{tЄJ} |x − αn+1| ≤ λ max_{tЄJ} |x − αn|², where λ = (M1/L) e^(LT).

Similarly, we can prove that

max_{tЄJ} |βn+1 − x| ≤ λ max_{tЄJ} |βn − x|².

The proof is complete.
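The quasilinearization scheme can also be run numerically. The sketch below is an added illustration (it assumes SciPy); it applies the iteration to x' = x², x(0) = -1 (so fxx = 2 ≥ 0), starting from α0(t) = -2/(t+1), and prints the error against the exact solution -1/(t+1), which shrinks roughly quadratically until the discretization error of the crude grid is reached:

import numpy as np
from scipy.integrate import solve_ivp

f, fx = (lambda x: x**2), (lambda x: 2 * x)
ts = np.linspace(0.0, 1.0, 201)
alpha = -2.0 / (ts + 1)                    # alpha_0, a lower solution

for k in range(5):
    a = lambda t: np.interp(t, ts, alpha)
    # alpha_{k+1}' = f(alpha_k) + f_x(alpha_k)(alpha_{k+1} - alpha_k), alpha_{k+1}(0) = -1
    rhs = lambda t, z: f(a(t)) + fx(a(t)) * (z[0] - a(t))
    alpha = solve_ivp(rhs, (ts[0], ts[-1]), [-1.0], t_eval=ts, rtol=1e-10).y[0]
    print(k, np.max(np.abs(alpha + 1.0 / (ts + 1))))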

CHAPTER - III

(3.1) Bihari's inequality:-

Bihari's integral inequality is applied in the study of the properties of non-linear differential equations. It generalizes the integral inequality of Gronwall. We present it in the following theorem.

Theorem:- (3.2)

Let f, v Є C[R+,R+], g Є C[(0,∞),(0,∞)] and let g(x) be non-decreasing in x. For c > 0, let

f(t) ≤ c + ∫_{t0}^{t} v(s) g(f(s)) ds, t ≥ t0 ≥ 0.

Then

f(t) ≤ G⁻¹[G(c) + ∫_{t0}^{t} v(s) ds], t0 ≤ t < T,

holds, where G(u) − G(u0) = ∫_{u0}^{u} ds/g(s), G⁻¹(u) is the inverse function of G(u), and

T = sup{ t ≥ t0 : G(c) + ∫_{t0}^{t} v(s) ds Є dom(G⁻¹) }.
t0
Proof:-

Let p(t) = c + ∫_{t0}^{t} v(s) g(f(s)) ds, so that p(t0) = c.

Further, p'(t) = v(t) g(f(t)) ≤ v(t) g(p(t)),

since g is non-decreasing and f(t) ≤ p(t).

Hence ∫_{t0}^{t} p'(s) ds / g(p(s)) ≤ ∫_{t0}^{t} v(s) ds.

Substitute z = p(s). Then dz = p'(s) ds; when s = t, z = p(t), and when s = t0, z = p(t0) = c.

Hence we obtain

G(p(t)) − G(c) = ∫_{c}^{p(t)} dz/g(z) ≤ ∫_{t0}^{t} v(s) ds,

which yields the inequality

f(t) ≤ p(t) ≤ G⁻¹[G(c) + ∫_{t0}^{t} v(s) ds], t0 ≤ t ≤ T.

The proof is complete.
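A small numerical sanity check of the bound (added here; it assumes SciPy and NumPy) takes g(u) = u², v(s) ≡ 1 and c = 0.5, for which G(u) = 1/c − 1/u (taking u0 = c) and G⁻¹(w) = 1/(1/c − w). The worst case p' = v(t)g(p), p(t0) = c then coincides with the Bihari bound:

import numpy as np
from scipy.integrate import solve_ivp

c, t0, T = 0.5, 0.0, 1.5                 # keep T < 1/c so that G^(-1) stays defined
g = lambda u: u**2
ts = np.linspace(t0, T, 151)
p = solve_ivp(lambda t, u: g(u[0]), (t0, T), [c], t_eval=ts, rtol=1e-9).y[0]
bound = 1.0 / (1.0 / c - (ts - t0))      # G^(-1)[G(c) + int_{t0}^{t} v(s) ds]
print(np.max(np.abs(p - bound)))         # nearly zero: equality holds in the worst case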

Theorem:- (3.3)

Let f, v Є C[R+,R+], ω Є C[R+ × R+,R+] and c > 0 satisfy

f(t) ≤ c + ∫_{t0}^{t} [v(s)f(s) + ω(s,f(s))] ds, t ≥ t0.

Suppose further that

ω(t, z exp(∫_{t0}^{t} v(s) ds)) ≤ λ(t) g(z) exp(∫_{t0}^{t} v(s) ds),

where λ Є C[R+,R+], g Є C[(0,∞),(0,∞)] and g(u) is non-decreasing in u. Then

f(t) ≤ G⁻¹[G(c) + ∫_{t0}^{t} λ(s) ds] exp(∫_{t0}^{t} v(s) ds), t0 ≤ t ≤ T.

(G, G⁻¹ and T are the same as in Theorem 3.2.)

Proof:-

Set p(t) exp(∫_{t0}^{t} v(s) ds) = c + ∫_{t0}^{t} [v(s)f(s) + ω(s,f(s))] ds.

This implies that f(t) ≤ p(t) exp(∫_{t0}^{t} v(s) ds).

Hence we get

[p'(t) + v(t)p(t)] exp(∫_{t0}^{t} v(s) ds) = v(t)f(t) + ω(t,f(t))

≤ v(t)p(t) exp(∫_{t0}^{t} v(s) ds) + ω(t, p(t) exp(∫_{t0}^{t} v(s) ds))

≤ v(t)p(t) exp(∫_{t0}^{t} v(s) ds) + λ(t) g(p(t)) exp(∫_{t0}^{t} v(s) ds).

Hence it follows that

p'(t) ≤ λ(t) g(p(t)), p(t0) = c.

Following the argument of Theorem 3.2, we obtain

p(t) ≤ G⁻¹[G(c) + ∫_{t0}^{t} λ(s) ds], t0 ≤ t ≤ T,

and so

f(t) ≤ p(t) exp(∫_{t0}^{t} v(s) ds)

≤ G⁻¹[G(c) + ∫_{t0}^{t} λ(s) ds] exp(∫_{t0}^{t} v(s) ds).
t0 t0
The proof is complete.

(3.4) Application of Bihari's integral inequality:-

Let the function f(t,x) be defined and continuous on the rectangle R,

R : |t−t0| < a, |x−x0| < h, a > 0, h > 0,

and satisfy the condition

|f(t,x2) − f(t,x1)| ≤ g(|x2 − x1|),

where (t,x1), (t,x2) are points in R, and g is a continuous function for u ≥ 0 with g(u) > 0 for u > 0 and g(0) = 0, g non-decreasing in u for u ≥ 0.

Assume that ∫_{0}^{u} ds/g(s) is divergent for u > 0. Then the equation

x' = f(t,x) has at most one solution Ф(t) in R with Ф(t0) = x0.

To prove this, the following reasoning is helpful. Suppose that there are two different solutions Ф and Ψ such that Ф(t0) = x0, Ψ(t0) = x0. Then there exists a point t1 ≥ t0 such that

Ф(t1) = Ψ(t1) and Ф(t) ≠ Ψ(t) for t1 < t < t1 + α

for some α > 0. In case there is more than one such point t, choose the one which is nearest to t0. We have

Ф'(t) − Ψ'(t) = f(t,Ф(t)) − f(t,Ψ(t)), so |Ф'(t) − Ψ'(t)| ≤ g(|Ф(t) − Ψ(t)|).

Hence |Ф(t) − Ψ(t)| ≤ ∫_{t1}^{t} g(|Ф(s) − Ψ(s)|) ds.

Let p(t) = |Ф(t) − Ψ(t)| and ∫_{t1}^{t} g(p(s)) ds = v(t); note that v(t1) = 0.

Then g(p(t)) ≤ g(v(t)) implies that

v'(t) ≤ g(v(t)), i.e., v'/g(v) ≤ 1.
Define ∫_{u0}^{u} ds/g(s) = G(u) for u0 > 0, u > 0.

We now have

dG(v)/dt ≤ 1,

which by integration between t1 + δ and t yields

G(v(t)) ≤ G(v(t1+δ)) + t − (t1+δ), δ > 0, t1+δ < t.

If δ → 0⁺, then G(v(t1+δ)) → ∫_{u0}^{0} ds/g(s) = −∞, u0 > 0.

Note that G(v(t)) is finite for t > t1. The above inequality therefore leads to a contradiction.

Similarly we can also obtain a contradiction for t < t0. Therefore, Ф(t) = Ψ(t) in the interval of existence.

The proof is complete.

(3.5) Variation of parameters (A Nonlinear Variation):-

The variation of parameters (constants) formula plays an important role in the study of perturbed differential equations. Such a formula is well known for first order, higher order and systems of linear differential equations. The present section includes a non-linear variation of parameters formula. We prove this formula for a system of non-linear equations, so that the formulas for linear equations become particular cases of this more general version. Consider an IVP

x' = f(t,x), x(t0) = x0, → (1.8)

where f Є C[I×Rⁿ,Rⁿ] and possesses continuous partial derivatives ∂f/∂x on I×Rⁿ. We assume I to be the interval t0 ≤ t < ∞, t0 ≥ 0. We prove the following result.
Theorem:- (3.6)

Assume that f Є C[I×Rⁿ,Rⁿ] and possesses continuous partial derivatives ∂f/∂x, i.e., ∂fi/∂xj, i,j = 1,2,…,n, on I×Rⁿ.

Let the solution x(t) = x(t,t0,x0) of the IVP x' = f(t,x), x(t0) = x0, exist for t ≥ t0, and let

H(t,t0,x0) = ∂f/∂x (t,x(t,t0,x0)). → (1.9)

Then

(i) Ф(t,t0,x0) = ∂x(t,t0,x0)/∂x0 → (1.10)

exists and is a solution of the IVP for the linear system

Z' = H(t,t0,x0)Z, → (1.11)

Z(t0) = En, → (1.12)

where En is the identity n×n matrix;

(ii) ∂x(t,t0,x0)/∂t0 exists, is a solution of Z' = H(t,t0,x0)Z, and satisfies the relation

∂x(t,t0,x0)/∂t0 = −Ф(t,t0,x0).f(t0,x0), t ≥ t0. → (1.13)

The equation (1.11) is called the variational equation.

Example:- (3.7)

To verify the result in Theorem (3.6), let us consider the IVP

x' = −(1/2)x³, x(t0) = x0, t ≥ t0. → (1.14)

The IVP (1.14) has a solution x given by

x(t,t0,x0) = x0[x0²(t−t0)+1]^(−1/2), t, t0 Є R+. → (1.15)

Hence

∂x(t,t0,x0)/∂x0 = [x0²(t−t0)+1]^(−3/2) = Ф(t,t0,x0), t ≥ t0. → (1.16)

The IVP (1.11),(1.12) in this example becomes

Z' = −(3/2)[x(t,t0,x0)]²Z, Z(t0) = 1.

We solve the above linear equation to see that

Z(t,t0,1) = [x0²(t−t0)+1]^(−3/2) = Ф(t,t0,x0), t ≥ t0.

Further, observe that f(t0,x0) = −(1/2)x0³.

Differentiate (1.15) with respect to t0 to get

∂x(t,t0,x0)/∂t0 = −(1/2)x0[x0²(t−t0)+1]^(−3/2)(−x0²)

= −[x0²(t−t0)+1]^(−3/2).(−(1/2)x0³)

= −Ф(t,t0,x0).f(t0,x0),

which is (1.13).
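The identities used in this example can also be verified symbolically. The added check below assumes SymPy; both printed expressions should simplify to 0:

import sympy as sp

t, t0, x0 = sp.symbols('t t0 x0', positive=True)
x = x0 / sp.sqrt(x0**2 * (t - t0) + 1)         # the solution (1.15)
Phi = sp.diff(x, x0)                            # Phi(t,t0,x0) = dx/dx0, cf. (1.16)
f = lambda s, u: -sp.Rational(1, 2) * u**3      # right-hand side of (1.14)
print(sp.simplify(sp.diff(x, t) - f(t, x)))             # x satisfies x' = f(t,x)
print(sp.simplify(sp.diff(x, t0) + Phi * f(t0, x0)))    # dx/dt0 = -Phi.f(t0,x0), i.e. (1.13)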

We shall now consider the non-linear differential system (1.8). The following theorem gives an analogue of the variation of parameters formula for the solution y(t,t0,x0) of the perturbed system

y' = f(t,y) + F(t,y), y(t0) = x0, t ≥ t0. → (1.17)

Theorem:- (3.8)

Alekseev's formula:-

Let f, F Є C[I×Rⁿ,Rⁿ] and let ∂f/∂x exist and be continuous on I×Rⁿ. If x(t,t0,x0) is the solution of (1.8) existing for t ≥ t0, then any solution y(t,t0,x0) of (1.17) satisfies the integral equation

y(t,t0,x0) = x(t,t0,x0) + ∫_{t0}^{t} Ф(t,s,y(s,t0,x0)).F(s,y(s,t0,x0)) ds → (1.18)

for t ≥ t0, where Ф(t,t0,x0) = ∂x(t,t0,x0)/∂x0.

Proof:-

Write y(t) = y(t,t0,x0). Then

d/ds x(t,s,y(s)) = ∂x(t,s,y(s))/∂s + ∂x(t,s,y(s))/∂y . y'(s).

From the relations (1.13) and (1.10) it is clear that

∂x(t,s,y(s))/∂s = −Ф(t,s,y(s)).f(s,y(s))

and ∂x(t,s,y(s))/∂y = Ф(t,s,y(s)).

Hence, in view of (1.17), we have

d/ds x(t,s,y(s)) = −Ф(t,s,y(s)).f(s,y(s)) + Ф(t,s,y(s)).[f(s,y(s)) + F(s,y(s))]

= Ф(t,s,y(s)).F(s,y(s)).

Integrate both sides between t0 and t to get

L.H.S. = ∫_{t0}^{t} d/ds x(t,s,y(s)) ds = [x(t,s,y(s,t0,x0))]_{s=t0}^{s=t} = x(t,t,y(t,t0,x0)) − x(t,t0,y(t0,t0,x0))

= y(t,t0,x0) – x(t,t0,x0),

R.H.S. = ∫_{t0}^{t} Ф(t,s,y(s)).F(s,y(s)) ds.

The conclusion (1.18) now follows.

The proof is complete.
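Alekseev's formula can be checked numerically on the system of Example 3.7 with a perturbation added for illustration (the choice F(t,y) = 0.1 cos t below is ours; the script assumes SciPy). The solution of the perturbed equation and the right-hand side of (1.18) should agree to several decimals:

import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: -0.5 * x**3                                      # as in (1.14)
F = lambda t, y: 0.1 * np.cos(t)                                  # illustrative perturbation
x_unpert = lambda t, t0, x0: x0 / np.sqrt(x0**2 * (t - t0) + 1)   # (1.15)
Phi = lambda t, t0, x0: (x0**2 * (t - t0) + 1) ** (-1.5)          # (1.16)

t0, x0, T = 0.0, 1.0, 3.0
ts = np.linspace(t0, T, 601)
y = solve_ivp(lambda t, y: f(t, y[0]) + F(t, y[0]), (t0, T), [x0],
              t_eval=ts, rtol=1e-10, atol=1e-12).y[0]

integrand = Phi(T, ts, y) * F(ts, y)               # Phi(t,s,y(s)) F(s,y(s)) at t = T
rhs = x_unpert(T, t0, x0) + np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))
print(y[-1], rhs)                                   # the two sides of (1.18) at t = T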

Theorem:- (3.9)

Let f Є C[I×Rⁿ,Rⁿ] and let ∂f/∂x exist and be continuous on I×Rⁿ. Assume that x(t,t0,x0) and x(t,t0,y0) are solutions of x' = f(t,x) through (t0,x0) and (t0,y0) respectively, existing for t ≥ t0, such that x0, y0 belong to a convex subset of Rⁿ. Then, for t ≥ t0,

x(t,t0,y0) − x(t,t0,x0) = [∫_{0}^{1} Ф(t,t0,x0+s(y0−x0)) ds].(y0−x0). → (1.19)

Proof:-

Since x0, y0 belong to a convex subset of Rⁿ, x(t,t0,x0+s(y0−x0)) is defined for 0 ≤ s ≤ 1. Thus

d/ds x(t,t0,x0+s(y0−x0)) = Ф(t,t0,x0+s(y0−x0)).(y0−x0).

Integrate both sides between 0 and 1 to obtain (1.19).

The proof is complete.

Remark:-

Let f(t,0) = 0, t Є I, and x0 = 0. Then x(t,t0,0) = 0, t Є I. For y0 ≠ 0, we obtain from (1.19)

x(t,t0,y0) = [∫_{0}^{1} Ф(t,t0,sy0) ds].y0. → (1.20)

In case f(t,x) = a(t)x (with n = 1), where a is continuous on I, it is known that

Ф(t,t0,sy0) = exp(∫_{t0}^{t} a(s) ds).

Hence, the relation (1.20) yields

x(t,t0,y0) = exp(∫_{t0}^{t} a(s) ds).y0.

The similarity in the representation of x(t,t0,y0) in the case of linear as well as nonlinear differential equations is worth noting. The variation of parameters formula stated in Theorem (3.8) has a further useful generalization; we present it in the subsequent theorem.

Theorem:- (3.10)

Let the solutions of the equation (1.8) for n = 1 be such that

|x(t,t0,x0)| ≤ c, t Є I, → (1.21)

where c > 0 is a constant. Further, suppose that

|Ф(t,s,y)F(s,y)| ≤ p(s)|y|, t, s Є I, y Є R, → (1.22)

where p Є C[I,R+] and p is integrable on I. Then the solutions of the perturbed equation (1.17) (for n = 1) are bounded on their interval of existence.

Proof:-

Let y(t,t0,x0) be a solution of the equation (1.17) (for n = 1). Then

y(t,t0,x0) = x(t,t0,x0) + ∫_{t0}^{t} Ф(t,s,y(s,t0,x0)) F(s,y(s,t0,x0)) ds.

Employing the conditions (1.21) and (1.22), we obtain

|y(t,t0,x0)| ≤ c + ∫_{t0}^{t} p(s)|y(s,t0,x0)| ds.

Applying Gronwall's integral inequality, we have

|y(t,t0,x0)| ≤ c exp(∫_{t0}^{t} p(s) ds).

Since the function p is integrable, the conclusion of the theorem follows.
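A numerical illustration of this boundedness argument (added here; it assumes SciPy): with p(s) = 1/(1+s)², which is integrable on [0,∞), the worst case |y|' = p(t)|y|, |y(t0)| = c exactly attains the Gronwall bound c exp(∫ p), so every admissible y stays below c·e:

import numpy as np
from scipy.integrate import solve_ivp

c, t0, T = 2.0, 0.0, 10.0
p = lambda s: 1.0 / (1.0 + s) ** 2                 # integrable weight p(s)
ts = np.linspace(t0, T, 401)
y = solve_ivp(lambda t, u: p(t) * u[0], (t0, T), [c], t_eval=ts, rtol=1e-9).y[0]
bound = c * np.exp(1.0 / (1.0 + t0) - 1.0 / (1.0 + ts))   # c exp(int_{t0}^{t} p(s) ds)
print(np.max(np.abs(y - bound)))                   # close to zero; both stay below c*e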

CONCLUSION

This project deals with "ANALYSIS AND METHODS OF NON-LINEAR DIFFERENTIAL EQUATIONS" in the setting of ordinary differential equations.

Chapter I deals with the basic definitions; these concepts are used in the later chapters.

Chapter II deals with definitions, examples and theorems on upper and lower solutions.

Chapter III deals with definitions, Bihari's inequality and the variation of parameters theorems.

All the chapters deal with definitions, theorems and examples on "ANALYSIS AND METHODS OF NON-LINEAR DIFFERENTIAL EQUATIONS".

BIBLIOGRAPHY

1. S.G. DEO, Ordinary Differential Equations.

2. F. BRAUER and J.A. NOHEL, Ordinary Differential Equations, W.A. Benjamin, New York (1967).

3. E.A. CODDINGTON, An Introduction to Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs.

4. G.S. LADDE, V. LAKSHMIKANTHAM and A.S. VATSALA, Monotone Iterative Techniques for Nonlinear Differential Equations, Pitman, Boston (1985).

5. W. LEIGHTON, Ordinary Differential Equations, Wadsworth, Belmont, California (1966).

6. D.A. SANCHEZ, Ordinary Differential Equations and Stability Theory, W.H. Freeman and Company, San Francisco (1968).

7. H.K. WILSON, Ordinary Differential Equations, Addison-Wesley, Reading, Massachusetts (1971).
