LMI-BASED APPROACH ON STABILITY CRITERIA FOR A CLASS OF FRACTIONAL-ORDER STATIC NEURAL NETWORK WITH SUCCESSIVE TIME DELAY
In Partial Fulfillment of The Requirements For The Award of The Degree of
MASTER OF SCIENCE IN MATHEMATICS
By
SABEENA.Y
(REG. NO. 34620P20005)
Under the Guidance and Supervision of
Dr. J. YOGAMBIGAI, M.Sc., B.Ed., HDCA., Ph.D.,
HEAD AND ASSISTANT PROFESSOR
DEPARTMENT OF MATHEMATICS
APRIL 2022
M.M.E.S. WOMEN’S ARTS AND SCIENCE COLLEGE,
(Affiliated to Thiruvalluvar University)
MELVISHARAM-632509.
M.M.E.S. WOMEN’S ARTS AND SCIENCE COLLEGE
(Affiliated to Thiruvalluvar University)
BONAFIDE CERTIFICATE
This is to certify that the dissertation entitled “LMI – BASED APPROACH ON
STABILITY CRITERIA FOR A CLASS OF FRACTIONAL – ORDER STATIC
NEURAL NETWORK WITH SUCCESSIVE TIME DELAY” submitted in partial
fulfillment of the requirements for the award of the Degree of Master of Science in
Mathematics, is a record of original research work done by Ms. Y. SABEENA (Reg. No.
34620P20005) during the period 2020-2022 of her study in the Department of Mathematics, to
the M.M.E.S. Women’s Arts and Science College, Melvisharam-632509, an Affiliated College
of Thiruvalluvar University, Serkkadu, Vellore-632509, under my supervision and guidance,
and the dissertation has not formed the basis for the award of any Degree, Diploma,
Associateship, Fellowship, or other similar title to any candidate of any University.
…………………………………............ ……………………………………........
Dr. J. YOGAMBIGAI, M.Sc., B.Ed., Dr. J. YOGAMBIGAI, M.Sc., B.Ed.,
HDCA., Ph.D., HDCA., Ph.D.,
Guide, Department of Mathematics, Head of Department of Mathematics,
M.M.E.S. Women’s Arts and Science College, M.M.E.S. Women’s Arts and Science College,
Melvisharam-632509. Melvisharam-632509.
Date:
Submitted for the Viva-Voce Examination held on:
Examiner:
DECLARATION
I, Y. SABEENA (Reg. No. 34620P20005), hereby declare that the dissertation entitled
“LMI-BASED APPROACH ON STABILITY CRITERIA FOR A CLASS OF FRACTIONAL-ORDER
STATIC NEURAL NETWORK WITH SUCCESSIVE TIME DELAY” submitted by me for the degree
of Master of Science in Mathematics is the record of original research work carried out by me
during the period 2020-2022 under the guidance of Dr. J. YOGAMBIGAI, Head of the
Department of Mathematics, M.M.E.S. Women’s Arts and Science College, Melvisharam, and
has not formed the basis for the award of any Degree, Diploma, Associateship, Fellowship, or
other similar title in this or any other University.
Place:
Date:
ACKNOWLEDGEMENT
First, I wholeheartedly thank the Lord Almighty for giving me the opportunity to
complete my project successfully.
I wish to express my sincere thanks to our Correspondent,
Alijanab. Haji. K. ANEES AHMED SAHIB, B.A., for allowing me to do the project.
I extend my humble and sincere thanks to our Principal,
Dr. FREDA GNANASELVAM, M.A., M.B.A., M.M.M., Ph.D., M.M.E.S. Women’s Arts and
Science College, Melvisharam, for allowing me to do my project in partial fulfilment of the
Degree of Master of Science in Mathematics.
I express my sincere gratitude to our Head of the Department, Dr. J. Yogambigai,
M.Sc., B.Ed., HDCA., Ph.D., Department of Mathematics, M.M.E.S. Women’s Arts and
Science College, for providing me the necessary facilities to complete this work.
I am immensely pleased to express my deep sense of gratitude and profound thanks to my
guide, Dr. J. Yogambigai, Head of the Department of Mathematics, M.M.E.S. Women’s Arts
and Science College, for guiding me to the successful completion of this dissertation work.
I wish to express my heartfelt thanks to the teaching staff of the Department of
Mathematics, M.M.E.S. Women’s Arts and Science College, for their valuable suggestions
during the period of study.
I cannot find words to express my thanks to my parents, Mr. A.K. Yousuf and
Mrs. Y. Shamshad, my sister, Ms. Y. Mubeena, and my friends for encouraging me throughout
the period of study.
(Y. SABEENA)
ABSTRACT
This dissertation studies the stability of a class of fractional-order static neural
networks with successive time delay (FPNNs). By constructing a suitable Lyapunov
functional whose integral terms with state-dependent upper limit are convex functions,
two sufficient conditions are derived for the integer-order network. Based on the
fractional-order Lyapunov direct method and some inequality techniques, several novel
sufficient stability conditions for the fractional-order neural network are then presented.
The delay under consideration includes multiple successive components, which is more
general than a single delay, and advanced techniques are used to achieve delay
dependence. The obtained results are formulated in terms of Linear Matrix Inequalities
(LMIs), which can be easily solved by using the MATLAB LMI toolbox. Finally, a
numerical example is given to illustrate the effectiveness of the proposed criteria.
CONTENTS
1 INTRODUCTION
2 PRELIMINARIES
5 NUMERICAL EXAMPLES
6 CONCLUSION
BIBLIOGRAPHY
NOMENCLATURE
The notations are fairly standard.
Symbol   Meaning
R^n      the n-dimensional Euclidean space
Γ        the Gamma function
CHAPTER 1
INTRODUCTION
1.1 TIME-DELAY
The field of time-delay systems had its origin in the 18th century, and its systematic
study began in the 1940s with the contributions of Pontryagin and Bellman. Over the
years, the theory has developed into an active branch of dynamical systems and control.
In an ordinary differential equation model, the variables x(t) ∈ R^n are known as the
state variables, and the differential equations describe the evolution of the state
variables. In other words, the initial value x(t₀) determines the value of the state
variables x(t) for t₀ ≤ t ≤ ∞.
In practice, many dynamical systems cannot be satisfactorily modelled by ordinary
differential equations alone: the evolution of the state variables x(t) depends not only
on their current value x(t₀) but also on their past values, say x(ξ), t₀ − r < ξ < t₀.
Such a system is called a time-delay system. Time-delay systems may arise in practice
for a variety of reasons. Time-delay systems are also called systems with after-effect
or dead-time. They belong to the class of functional differential equations, which are
infinite-dimensional, as opposed to ordinary differential equations.
Many neural network models are intimately associated with a particular learning rule.
A common use of artificial neural networks (ANNs) is in learning tasks such as
game-play. ANNs adopt the basic model of neuron analogues connected to each other
in a variety of ways.
A neuron’s network function f(x) is defined as a composition of other functions
gᵢ(x), which can further be decomposed into other functions. This can be conveniently
represented as a network structure. Consider the functional differential equation

ẋ(t) = f(t, x_t),    (1.3)

and assume that the functional differential equation (1.3) admits the solution x(t) = 0.
For a given solution y(t), setting z(t) = x(t) − y(t) yields

ż(t) = f(t, z_t + y_t) − f(t, y_t),

so that the stability of any solution reduces to the stability of the trivial solution.
‖φ‖_r = max_{a ≤ θ ≤ b} ‖φ(θ)‖.
LMI(y) = A₀ + y₁A₁ + y₂A₂ + … + y_m A_m ≥ 0,

where A₀, A₁, …, A_m are given symmetric matrices, y = (y₁, …, y_m) is the vector of
decision variables, and the inequality means that LMI(y) is a positive semidefinite
matrix. The basic problem is to determine whether an LMI is feasible (e.g. whether
there exists a vector y such that LMI(y) ≥ 0). This is a convex feasibility problem and
can be solved by semidefinite programming, which minimizes a real linear function
subject, respectively, to the primal and dual convex cones.
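As a small illustration of LMI feasibility (the matrices A0, A1, A2 below are made up for the example, not taken from this dissertation), a candidate vector y can be tested by checking the smallest eigenvalue of LMI(y) numerically:

```python
import numpy as np

# Toy instance of LMI(y) = A0 + y1*A1 + y2*A2 >= 0 (illustrative matrices).
A0 = np.array([[2.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def lmi_value(y):
    """Evaluate LMI(y) = A0 + y1*A1 + y2*A2."""
    return A0 + y[0] * A1 + y[1] * A2

def is_feasible(y, tol=0.0):
    """y is feasible when the smallest eigenvalue of LMI(y) is nonnegative."""
    return np.linalg.eigvalsh(lmi_value(y)).min() >= tol

print(is_feasible([0.0, 0.0]))   # A0 alone is positive definite
print(is_feasible([0.0, 5.0]))   # a large off-diagonal entry breaks feasibility
```

An LMI solver searches this convex feasible set systematically instead of testing points one by one.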
One of the most powerful tools for the stability analysis of dynamical systems
is the Lyapunov method. For a system without delay it requires the construction of a
Lyapunov function V(t, x(t)), quantifying the deviation of the state x(t) from the
trivial solution 0. For a delay-free system, x(t) is needed to specify the system’s
future evolution beyond t, while in a time-delay system the “state” at time t required
for the same purpose is the value of x(t) in the interval [t − r, t], i.e., x_t. It is
therefore natural to expect that for a time-delay system the Lyapunov function should
be a functional V(t, x_t) depending on x_t, which should also measure the deviation of
x_t from the trivial solution. Such a functional is called a Lyapunov–Krasovskii
functional (LKF).
A Lyapunov function V(y) is continuous, positive definite (V(y) > 0 for all y ≠ 0,
with V(0) = 0), and has continuous first-order partial derivatives. A Lyapunov
function for which V̇ ≤ 0 on some region D containing the origin guarantees the
stability of the equilibrium, while a Lyapunov function for which V̇(y) is negative
definite on some region D containing the origin guarantees asymptotic stability.
For example, for the system ẏ = z, ż = −y − 2z, take

V(y, z) = (y² + z²)/2.

Then

V̇(y, z) = yz + z(−y − 2z) = −2z² ≤ 0,

so the origin is stable.
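This computation can be checked numerically. The sketch below simulates the system ẏ = z, ż = −y − 2z (the system implied by the expression for V̇) with a forward-Euler scheme and verifies that V decays toward zero:

```python
# Forward-Euler simulation of ydot = z, zdot = -y - 2z, checking that
# V(y, z) = (y^2 + z^2)/2 decays along trajectories.
def V(y, z):
    return 0.5 * (y * y + z * z)

y, z, h = 1.0, 1.0, 1e-3
values = [V(y, z)]
for _ in range(20000):            # integrate up to t = 20
    y, z = y + h * z, z + h * (-y - 2.0 * z)
    values.append(V(y, z))

print(values[0], values[-1])      # V starts at 1 and ends near 0
```

The step size and horizon are arbitrary choices for the demonstration; any stable integrator shows the same decay.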
The differintegral operator combines fractional derivatives and integrals in a single
expression:

aD_t^α = { d^α/dt^α,          α > 0,
           1,                 α = 0,
           ∫_a^t (dτ)^(−α),   α < 0,

with α ∈ R.
Let us suppose that the function f(t) is defined in the interval [a, b], where a
and b can even be infinite. The fractional derivative with the lower terminal at
the left end of the interval [a, b], aD_t^p f(t), is called the left fractional
derivative. The fractional derivative with the upper terminal at the right end of the
interval [a, b], tD_b^p f(t), is called the right fractional derivative.
CHAPTER 2
PRELIMINARIES
2.1 DEFINITIONS
Definition 2.1.1 (Stable)
For the system described by

ẋ(t) = f(t, x_t),    (2.1)

the trivial solution x(t) = 0 is said to be stable if, for any t₀ ∈ R and any ε > 0,
there exists a δ = δ(t₀, ε) > 0 such that ‖x_{t₀}‖_c < δ implies ‖x(t)‖ < ε for t ≥ t₀.
It is said to be asymptotically stable if it is stable and, in addition, for any t₀ ∈ R
and any ε > 0 there exists a δ_a = δ_a(t₀, ε) > 0 such that ‖x_{t₀}‖_c < δ_a implies

lim_{t→∞} x(t) = 0.

It is uniformly (asymptotically) stable if δ (respectively δ_a) can
be chosen independently of t₀.
The system (2.1) is globally (uniformly) asymptotically stable if it is (uniformly)
asymptotically stable and δ_a can be an arbitrarily large, finite number.
F(x) = F₀ + Σ_{i=1}^{m} xᵢFᵢ > 0,

where x ∈ R^m is the variable and F₀, F₁, …, F_m are given symmetric matrices.
Definition 2.1.8
The fractional integral of order α > 0 of a function f is defined as

D^{−α} f(t) = (1/Γ(α)) ∫₀^t f(s)/(t − s)^{1−α} ds,

where Γ(·) is the Gamma function, Γ(α) = ∫₀^{+∞} s^{α−1} e^{−s} ds.
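A minimal numerical sketch of this definition (with the lower terminal at 0; the substitution used below is an implementation choice to tame the endpoint singularity, not part of the definition) can be tested against the known closed form D^{−α} t = t^{1+α}/Γ(2 + α):

```python
import math

def rl_integral(f, t, alpha, n=20000):
    """Riemann-Liouville fractional integral D^{-alpha} f(t)
    = 1/Gamma(alpha) * int_0^t f(s) (t-s)^(alpha-1) ds.
    The substitution w = (t-s)^alpha removes the endpoint singularity."""
    W = t ** alpha
    h = W / n
    total = 0.0
    for k in range(n):
        w = (k + 0.5) * h          # midpoint rule in the w variable
        total += f(t - w ** (1.0 / alpha))
    return total * h / (alpha * math.gamma(alpha))

# Known closed form: D^{-alpha} t = t^(1+alpha) / Gamma(2+alpha)
t, alpha = 1.0, 0.5
approx = rl_integral(lambda s: s, t, alpha)
exact = t ** (1 + alpha) / math.gamma(2 + alpha)
print(approx, exact)
```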
Definition 2.1.9
For n − 1 < α < n, the Caputo fractional derivative is defined by

D^α f(t) = (1/Γ(n − α)) ∫₀^t f^(n)(s)/(t − s)^{α+1−n} ds.

In particular, for 0 < α < 1,

D^α f(t) = (1/Γ(1 − α)) ∫₀^t f′(s)/(t − s)^α ds.
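The special case 0 < α < 1 can be checked the same way against the classical formula D^α t = t^{1−α}/Γ(2 − α); the substitution below is again an implementation choice, not part of the definition:

```python
import math

def caputo_derivative(df, t, alpha, n=20000):
    """Caputo derivative of order alpha in (0,1):
    D^alpha f(t) = 1/Gamma(1-alpha) * int_0^t f'(s) / (t-s)^alpha ds.
    Substituting w = (t-s)^(1-alpha) removes the endpoint singularity."""
    W = t ** (1.0 - alpha)
    h = W / n
    total = 0.0
    for k in range(n):
        w = (k + 0.5) * h          # midpoint rule in the w variable
        total += df(t - w ** (1.0 / (1.0 - alpha)))
    return total * h / ((1.0 - alpha) * math.gamma(1.0 - alpha))

# Known closed form: for f(t) = t (so f'(s) = 1), D^alpha t = t^(1-alpha)/Gamma(2-alpha)
t, alpha = 1.0, 0.5
approx_c = caputo_derivative(lambda s: 1.0, t, alpha)
exact_c = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)
print(approx_c, exact_c)
```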
Definition 2.1.10
The Mittag–Leffler function is defined as

E_α(z) = Σ_{k=0}^{+∞} z^k / Γ(αk + 1),

where α > 0.
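Truncating the series gives a simple numerical approximation; the number of terms below is an arbitrary choice, and the sanity check uses the classical identity E₁(z) = e^z:

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """Partial sum of E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(terms))

# Special cases: E_1(z) = e^z, and E_alpha(0) = 1 for any alpha > 0.
print(mittag_leffler(1.0, 1.0), math.e)
print(mittag_leffler(0.5, 0.0))
```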
Definition 2.1.11
Let K be a closed convex subset of R^n. For any given x ∈ R^n, the projection of x
onto K is defined as

P_K[x] = argmin_{z ∈ K} ‖x − z‖.
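For the closed rectangle K used later in the network model, the projection reduces to a componentwise clamp; the bounds below are illustrative:

```python
# Projection onto a closed rectangle K = {z : k_minus_i <= z_i <= k_plus_i},
# which is the componentwise clamp used for P_K in the network model.
def project_box(z, k_minus, k_plus):
    return [min(max(zi, lo), hi) for zi, lo, hi in zip(z, k_minus, k_plus)]

x = [-2.0, 0.3, 5.0]
p = project_box(x, [-1.0, -1.0, -1.0], [1.0, 1.0, 1.0])
print(p)  # [-1.0, 0.3, 1.0]
```

Components inside the box are left unchanged; components outside are mapped to the nearest face, which is exactly the distance-minimizing point.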
zϵK
2.2 LEMMA
Lemma 2.2.1 (Jensen’s inequality)
For any constant matrix M = Mᵀ > 0, scalar γ > 0, and vector function
w : [0, γ] → R^n such that the integrations in the following are well defined,

γ ∫₀^γ wᵀ(s) M w(s) ds ≥ (∫₀^γ w(s) ds)ᵀ M (∫₀^γ w(s) ds).
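The inequality can be sanity-checked with a Riemann-sum discretization; in the scalar case it reduces to the Cauchy–Schwarz inequality, so the discrete version holds exactly:

```python
import random

# Discrete check of Jensen's inequality in the scalar case M = m > 0:
# gamma * int_0^gamma m*w(s)^2 ds >= m * (int_0^gamma w(s) ds)^2,
# approximated with a Riemann sum over a random sample function w.
random.seed(0)
gamma, n = 2.0, 1000
h = gamma / n
w = [random.uniform(-1.0, 1.0) for _ in range(n)]
m = 3.0

lhs = gamma * sum(wi * wi * m for wi in w) * h
rhs = m * (sum(w) * h) ** 2
print(lhs, rhs)
```

By Cauchy–Schwarz, (Σwᵢ)² ≤ n·Σwᵢ², which is exactly lhs ≥ rhs after multiplying by m·h², so the assertion holds for any sample.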
Lemma 2.2.2 (Schur complement)
For a symmetric matrix

Ω = ( Ω₁   Ω₃
      Ω₃ᵀ  Ω₂ ),

the condition Ω < 0 is equivalent to either of the following:
a) Ω₂ < 0 and Ω₁ − Ω₃ Ω₂⁻¹ Ω₃ᵀ < 0;
b) Ω₁ < 0 and Ω₂ − Ω₃ᵀ Ω₁⁻¹ Ω₃ < 0.
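The equivalence in case (b) can be verified numerically on a sample block matrix (the blocks below are made up for the test):

```python
import numpy as np

# Numerical check of the Schur complement lemma on a sample symmetric block
# matrix: Omega < 0 iff Omega1 < 0 and Omega2 - Omega3^T Omega1^{-1} Omega3 < 0.
O1 = np.array([[-4.0, 1.0], [1.0, -3.0]])
O2 = np.array([[-5.0, 0.0], [0.0, -2.0]])
O3 = np.array([[0.5, 0.2], [0.1, 0.3]])

Omega = np.block([[O1, O3], [O3.T, O2]])

def neg_def(M):
    """A symmetric matrix is negative definite iff its largest eigenvalue is < 0."""
    return np.linalg.eigvalsh(M).max() < 0

schur = O2 - O3.T @ np.linalg.inv(O1) @ O3
print(neg_def(Omega), neg_def(O1) and neg_def(schur))
```

This is what makes the lemma useful in practice: a nonlinear matrix condition is replaced by a larger but linear one that an LMI solver can handle.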
Lemma 2.2.3[11]
The fractional-order system under consideration has a unique solution if f(t, x)
satisfies the local Lipschitz condition with respect to x.
Lemma 2.2.4[5]
Let h(t) ∈ R^n be a vector of differentiable functions. Then

D^α V(h(t)) ≤ (∂V/∂h)ᵀ D^α h(t),  ∀α ∈ (0, 1), ∀t ≥ 0.

Specially, for any P > 0, when V(h(t)) = hᵀ(t)Ph(t), the following well-known
inequality holds:

D^α (hᵀ(t)Ph(t)) ≤ 2hᵀ(t)P D^α h(t).
Lemma 2.2.5[8]
inequalities
and
Lemma 2.2.6[9]
Let x = 0 be an equilibrium point of the system D^α x(t) = −Cx(t) + f(Ax(t)), t ≥ 0.
If there exist a Lyapunov function V(t, x(t)) and positive constants α₁, α₂, α₃, a, b
such that

α₁‖x(t)‖^a ≤ V(t, x(t)) ≤ α₂‖x(t)‖^{ab},
D^α V(t, x(t)) ≤ −α₃‖x(t)‖^{ab},

then x = 0 is Mittag–Leffler stable. If all conditions are satisfied globally on R^n,
then x = 0 is globally Mittag–Leffler stable.
2.3. THEOREM

Theorem 2.3.1
Suppose there exists a positive definite function V(t, x) defined on

Δ = {(t, x) : t ≥ 0, ‖x‖ < a < ∞}

such that, along the solutions of

x′ = f(t, x),  x(t₀) = x(0),  0 ≤ t₀ ≤ ∞,

the derivative satisfies V_t + V_x · f ≤ 0. Then the trivial solution of the system is
stable.
Theorem 2.3.2 (Lyapunov’s second theorem)
Assume that there exists a positive definite function V(t, x) such that its
derivative with respect to ẋ = f(t, x_t) is negative definite. Then the solution x = 0
is asymptotically stable.

Theorem 2.3.3 (Lyapunov–Krasovskii stability theorem)
Suppose f : R × C → R^n maps R × (bounded sets in C) into
bounded sets in R^n, and that u, v, w : R⁺ → R⁺ are continuous nondecreasing
functions, where additionally u(s) and v(s) are positive for s > 0, and u(0) = v(0) = 0.
If there exists a continuous functional V : R × C → R such that

u(‖φ(0)‖) ≤ V(t, φ) ≤ v(‖φ‖_c)

and

V̇(t, φ) ≤ −w(‖φ(0)‖),

then the trivial solution of ẋ(t) = f(t, x_t) is uniformly stable. If, in addition,
w(s) > 0 for s > 0, then it is uniformly asymptotically stable.
CHAPTER 3
In this research work, based on the basic properties of the neural network, we
consider the following static neural network with successive time delay:
ẏ(t) = −Cy(t) + f(Ay(t)) + g(By(t − d₁(t) − d₂(t))),  t ≥ 0,
yᵢ(0) = yᵢ₀,  i = 1, 2, …, n,        (3.1)

where y(t) ∈ R^n denotes the state vector; gᵢ(aᵢy(t)) = P_K(aᵢy(t) + jᵢ); K is a closed
rectangle given by K = {y = [y₁, y₂, …, yₙ]ᵀ ∈ R^n : kᵢ⁻ ≤ yᵢ ≤ kᵢ⁺, i = 1, 2, …, n};
and the projection operator P_K(z) = [P_K(z₁), P_K(z₂), …, P_K(zₙ)]ᵀ is given
componentwise by

P_K(zᵢ) = { kᵢ⁻,  zᵢ < kᵢ⁻;   zᵢ,  kᵢ⁻ ≤ zᵢ ≤ kᵢ⁺;   kᵢ⁺,  zᵢ > kᵢ⁺. }    (3.2)

System (3.1) with initial value condition has a unique equilibrium point, denoted
y* = [y₁*, y₂*, …, yₙ*]ᵀ. Letting x(t) = y(t) − y*, system (3.1) is transformed into

ẋ(t) = −Cx(t) + f(Ax(t)) + g(Bx(t − d₁(t) − d₂(t))),  t ≥ 0,
xᵢ(0) = xᵢ₀,  i = 1, 2, …, n,        (3.3)

where fᵢ(aᵢx(t)) = P_K(aᵢ(x(t) + y*) + jᵢ) − P_K(aᵢy* + jᵢ) with fᵢ(0) = 0,
i = 1, 2, …, n, and xᵢ₀ = yᵢ₀ − yᵢ*.
The time delays d₁(t) and d₂(t) are time-varying differentiable functions that
satisfy

0 ≤ d₁(t) ≤ d₁ < ∞,  ḋ₁(t) ≤ τ₁ < ∞,
0 ≤ d₂(t) ≤ d₂ < ∞,  ḋ₂(t) ≤ τ₂ < ∞,

and we denote d(t) = d₁(t) + d₂(t) and τ = τ₁ + τ₂. In this chapter we study the
stability problem for system (3.1). Based on the Lyapunov direct method, two sufficient
conditions for asymptotic stability are derived in terms of LMIs.
Theorem 3.2.1
System (3.3) is globally asymptotically stable if there exist a positive definite
matrix Q and diagonal positive definite matrices D₁, D₂, D₃ such that

μ = ( φ₁₁  φ₁₂
       *   φ₂₂ ) < 0,

where φ₁₁ = −(Q + AᵀD₁A)C − C(Q + AᵀD₁A) + AᵀD₃A,
φ₁₂ = Q + AᵀD₁A + CAᵀD₁ + AᵀD₂, and φ₂₂ = −D₁A − AᵀD₁ − 2D₂ − D₃.
Proof:
By the properties of the projection operator, the activation function satisfies

0 ≤ fᵢ(u)/u ≤ 1,  ∀u ≠ 0,  fᵢ(0) = 0.    (3.4)

Consequently,

0 ≤ ∫₀^{aᵢx(t)} (s − fᵢ(s)) ds,   0 ≤ ∫₀^{aᵢx(t)} fᵢ(s) ds.    (3.5)

Moreover,

0 ≤ ∫₀^{aᵢx(t)} (s − fᵢ(s)) ds
  = ∫₀^{aᵢx(t)} s ds − ∫₀^{aᵢx(t)} fᵢ(s) ds
  ≤ ∫₀^{aᵢx(t)} s ds    (3.6)
  = ½ (aᵢx(t))².
Let h(u) = ∫₀^u (s − fᵢ(s)) ds; then h(u) is convex with respect to u. In fact,

h′(u) = u − fᵢ(u),

and for any u₁ ≠ u₂,

(h′(u₂) − h′(u₁))(u₂ − u₁) = (u₂ − u₁)² (1 − (fᵢ(u₂) − fᵢ(u₁))/(u₂ − u₁)) ≥ 0,

since the projection is monotone and 1-Lipschitz, so
0 ≤ (fᵢ(u₂) − fᵢ(u₁))/(u₂ − u₁) ≤ 1. Hence h′ is nondecreasing.
So h(u) is convex.
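The convexity argument can be checked numerically: with f taken as the clamp onto [−1, 1] (a typical projection-type activation, chosen here for illustration), h′(u) = u − f(u) is nondecreasing on a sample grid, which is equivalent to convexity of h:

```python
# h'(u) = u - f(u) with f the clamp onto [-1, 1]; a nondecreasing derivative
# implies h is convex, mirroring the argument in the proof.
def f(s):
    return min(max(s, -1.0), 1.0)

def h_prime(u):
    return u - f(u)

grid = [-3.0 + 0.05 * k for k in range(121)]
assert all(h_prime(a) <= h_prime(b) for a, b in zip(grid, grid[1:]))
print("h' is nondecreasing on the sampled grid, so h is convex there")
```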
Construct the Lyapunov functional

V(t, x) = xᵀ(t)Qx(t) + 2 Σ_{i=1}^{n} d₁ᵢ ∫₀^{aᵢx(t)} (s − fᵢ(s)) ds
          + ∫_{t−d₁(t)−d₂(t)}^{t} uᵀ(s)Pu(s) ds.    (3.7)

Since aᵢ is a row vector and x(t) is a column vector, aᵢx(t) and fᵢ(aᵢx(t))
are two real numbers. Using the multiplication of matrices, the derivative of V along
the trajectories of (3.3) is

V̇(t, x) = 2xᵀ(t)Qẋ(t) + 2 Σ_{i=1}^{n} d₁ᵢ (aᵢx(t) − fᵢ(aᵢx(t))) aᵢẋ(t) + uᵀ(t)Pu(t)
          − (1 − ḋ₁(t) − ḋ₂(t)) uᵀ(t − d₁(t) − d₂(t)) Pu(t − d₁(t) − d₂(t))    (3.8)
        = 2xᵀ(t)(Q + AᵀD₁A)ẋ(t) − 2fᵀ(Ax(t))D₁Aẋ(t) + uᵀ(t)Pu(t)
          − (1 − ḋ₁(t) − ḋ₂(t)) uᵀ(t − d₁(t) − d₂(t)) Pu(t − d₁(t) − d₂(t)),

where D₁ = diag{d₁₁, d₁₂, …, d₁ₙ}. Substituting ẋ(t) from (3.3) and collecting terms,

V̇(t, x) = xᵀ(t)[−(Q + AᵀD₁A)C − C(Q + AᵀD₁A)]x(t)
          + 2xᵀ(t)(Q + AᵀD₁A + CAᵀD₁)f(Ax(t)) − 2fᵀ(Ax(t))D₁Af(Ax(t))
          + 2xᵀ(t)(Q + AᵀD₁A)g(Bx(t − d₁(t) − d₂(t)))
          − 2fᵀ(Ax(t))D₁Ag(Bx(t − d₁(t) − d₂(t))) + uᵀ(t)Pu(t)
          − (1 − ḋ₁(t) − ḋ₂(t)) uᵀ(t − d₁(t) − d₂(t)) Pu(t − d₁(t) − d₂(t)).    (3.9)
From the definition of the projection operator,

{P_K(aᵢx(t) + aᵢy* + jᵢ) − P_K(aᵢy* + jᵢ)}²
    ≤ {P_K(aᵢx(t) + aᵢy* + jᵢ) − P_K(aᵢy* + jᵢ)}(aᵢx(t)).

That is, fᵢ²(aᵢx(t)) ≤ fᵢ(aᵢx(t)) aᵢx(t).
For any diagonal positive definite matrix D₂ = diag{d₂₁, d₂₂, …, d₂ₙ}, we have

0 ≤ 2fᵀ(Ax(t))D₂Ax(t) − 2fᵀ(Ax(t))D₂f(Ax(t)).    (3.10)

Similarly, fᵢ²(aᵢx(t)) ≤ (aᵢx(t))².
So, for any diagonal positive definite matrix D₃ = diag{d₃₁, d₃₂, …, d₃ₙ},

0 ≤ xᵀ(t)AᵀD₃Ax(t) − fᵀ(Ax(t))D₃f(Ax(t)).    (3.11)
Combining (3.9)–(3.11) and bounding the delayed terms by means of the conditions on
d₁(t) and d₂(t), we obtain

V̇(t, x) ≤ xᵀ(t)[−(Q + AᵀD₁A)C − C(Q + AᵀD₁A) + AᵀD₃A]x(t)
          + 2xᵀ(t)(Q + AᵀD₁A + CAᵀD₁ + AᵀD₂)f(Ax(t))
          + fᵀ(Ax(t))[−D₁A − AᵀD₁ − 2D₂ − D₃]f(Ax(t))
        = ηᵀ(t) μ η(t),    (3.12)

where η(t) = [xᵀ(t), fᵀ(Ax(t))]ᵀ.
Therefore, if μ < 0, the considered neural network (3.3) with successive time delay is
globally asymptotically stable. This completes the proof.
CHAPTER 4
PROBLEM AND MAIN RESULT
In this chapter, based on the Caputo fractional derivative, we will mainly study the
following fractional-order static neural network with successive time delay:

D^α y(t) = −By(t) + f(Ay(t)) + g(Cy(t − d₁(t) − d₂(t))),  t ≥ 0,
yᵢ(0) = yᵢ₀,  i = 1, 2, …, n,        (4.1)

where α ∈ (0, 1), B = diag{b₁, b₂, …, bₙ} and bᵢ > 0; y(t) ∈ R^n denotes the state
vector; gᵢ(aᵢy(t)) = P_K(aᵢy(t) + jᵢ); K is a closed rectangle given by
K = {y = [y₁, y₂, …, yₙ]ᵀ ∈ R^n : kᵢ⁻ ≤ yᵢ ≤ kᵢ⁺, i = 1, 2, …, n}; and the projection
operator P_K(z) = [P_K(z₁), P_K(z₂), …, P_K(zₙ)]ᵀ is given componentwise by

P_K(zᵢ) = { kᵢ⁻,  zᵢ < kᵢ⁻;   zᵢ,  kᵢ⁻ ≤ zᵢ ≤ kᵢ⁺;   kᵢ⁺,  zᵢ > kᵢ⁺. }    (4.2)
System (4.1) with initial value condition has a unique equilibrium point, denoted
y* = [y₁*, y₂*, …, yₙ*]ᵀ. Letting x(t) = y(t) − y*, system (4.1) is transformed into

D^α x(t) = −Bx(t) + f(Ax(t)) + g(Cx(t − d₁(t) − d₂(t))),  t ≥ 0,
xᵢ(0) = xᵢ₀,  i = 1, 2, …, n,        (4.3)

where fᵢ(aᵢx(t)) = P_K(aᵢ(x(t) + y*) + jᵢ) − P_K(aᵢy* + jᵢ) with fᵢ(0) = 0,
i = 1, 2, …, n, and xᵢ₀ = yᵢ₀ − yᵢ*.
The time delays d₁(t) and d₂(t) are time-varying differentiable functions that
satisfy

0 ≤ d₁(t) ≤ d₁ < ∞,  ḋ₁(t) ≤ τ₁ < ∞,
0 ≤ d₂(t) ≤ d₂ < ∞,  ḋ₂(t) ≤ τ₂ < ∞,

and we denote d(t) = d₁(t) + d₂(t) and τ = τ₁ + τ₂. In this chapter we study the
Mittag–Leffler stability problem for system (4.1). Based on the fractional-order
Lyapunov direct method, two sufficient conditions are derived in terms of LMIs.
Theorem 4.2.1
System (4.3) is globally Mittag–Leffler stable if there exist a positive definite
matrix Q and diagonal positive definite matrices D₁, D₂, D₃ such that

Ω = ( Φ₁₁  Φ₁₂
       *   Φ₂₂ ) < 0,

where Φ₁₁ = −(Q + AᵀD₁A)B − B(Q + AᵀD₁A) + AᵀD₃A,
Φ₁₂ = Q + AᵀD₁A + BAᵀD₁ + AᵀD₂, and Φ₂₂ = −D₁A − AᵀD₁ − 2D₂ − D₃.
Proof:
By the properties of the projection operator, the activation function satisfies

0 ≤ fᵢ(u)/u ≤ 1,  ∀u ≠ 0,  fᵢ(0) = 0.    (4.4)

Consequently,

0 ≤ ∫₀^{aᵢx(t)} (s − fᵢ(s)) ds,   0 ≤ ∫₀^{aᵢx(t)} fᵢ(s) ds.    (4.5)

Moreover,

0 ≤ ∫₀^{aᵢx(t)} (s − fᵢ(s)) ds
  = ∫₀^{aᵢx(t)} s ds − ∫₀^{aᵢx(t)} fᵢ(s) ds
  ≤ ∫₀^{aᵢx(t)} s ds    (4.6)
  = ½ (aᵢx(t))².
Let h(u) = ∫₀^u (s − fᵢ(s)) ds; then h(u) is convex with respect to u. In fact,

h′(u) = u − fᵢ(u),

and for any u₁ ≠ u₂,

(h′(u₂) − h′(u₁))(u₂ − u₁) = (u₂ − u₁)² (1 − (fᵢ(u₂) − fᵢ(u₁))/(u₂ − u₁)) ≥ 0,

since the projection is monotone and 1-Lipschitz, so
0 ≤ (fᵢ(u₂) − fᵢ(u₁))/(u₂ − u₁) ≤ 1. Hence h′ is nondecreasing.
So h(u) is convex.
Construct the Lyapunov functional

V(t, x) = xᵀ(t)Qx(t) + 2 Σ_{i=1}^{n} d₁ᵢ ∫₀^{aᵢx(t)} (s − fᵢ(s)) ds
          + ∫_{t−d₁(t)−d₂(t)}^{t} uᵀ(s)Pu(s) ds,    (4.7)

where γ₁ = λ_min(Q) and γ₂ = λ_max(Q) + max_{1≤i≤n}{d₁ᵢ} λ_max(AᵀA) bound V(t, x)
from below and above as required by Lemma 2.2.6.
Since aᵢ is a row vector and x(t) is a column vector, aᵢx(t) and fᵢ(aᵢx(t))
are two real numbers. By using Lemma 2.2.4 and the multiplication of matrices, the
fractional derivative of V along the trajectories of (4.3) satisfies

D^α V(t, x) ≤ 2xᵀ(t)Q D^α x(t) + 2 Σ_{i=1}^{n} d₁ᵢ (aᵢx(t) − fᵢ(aᵢx(t))) aᵢ D^α x(t)
             + uᵀ(t)Pu(t)
             − (1 − ḋ₁(t) − ḋ₂(t)) uᵀ(t − d₁(t) − d₂(t)) Pu(t − d₁(t) − d₂(t))    (4.8)
           = 2xᵀ(t)(Q + AᵀD₁A) D^α x(t) − 2fᵀ(Ax(t))D₁A D^α x(t) + uᵀ(t)Pu(t)
             − (1 − ḋ₁(t) − ḋ₂(t)) uᵀ(t − d₁(t) − d₂(t)) Pu(t − d₁(t) − d₂(t)),

where D₁ = diag{d₁₁, d₁₂, …, d₁ₙ}. Substituting D^α x(t) from (4.3) and collecting
terms,

D^α V(t, x) ≤ xᵀ(t)[−(Q + AᵀD₁A)B − B(Q + AᵀD₁A)]x(t)
             + 2xᵀ(t)(Q + AᵀD₁A + BAᵀD₁)f(Ax(t)) − 2fᵀ(Ax(t))D₁Af(Ax(t))
             + 2xᵀ(t)(Q + AᵀD₁A)g(Cx(t − d₁(t) − d₂(t)))
             − 2fᵀ(Ax(t))D₁Ag(Cx(t − d₁(t) − d₂(t))) + uᵀ(t)Pu(t)
             − (1 − ḋ₁(t) − ḋ₂(t)) uᵀ(t − d₁(t) − d₂(t)) Pu(t − d₁(t) − d₂(t)).    (4.9)
From the definition of the projection operator,

{P_K(aᵢx(t) + aᵢy* + jᵢ) − P_K(aᵢy* + jᵢ)}²
    ≤ {P_K(aᵢx(t) + aᵢy* + jᵢ) − P_K(aᵢy* + jᵢ)}(aᵢx(t)).

That is, fᵢ²(aᵢx(t)) ≤ fᵢ(aᵢx(t)) aᵢx(t).
For any diagonal positive definite matrix D₂ = diag{d₂₁, d₂₂, …, d₂ₙ}, we have

0 ≤ 2fᵀ(Ax(t))D₂Ax(t) − 2fᵀ(Ax(t))D₂f(Ax(t)).    (4.10)

Similarly, fᵢ²(aᵢx(t)) ≤ (aᵢx(t))².
So, for any diagonal positive definite matrix D₃ = diag{d₃₁, d₃₂, …, d₃ₙ},

0 ≤ xᵀ(t)AᵀD₃Ax(t) − fᵀ(Ax(t))D₃f(Ax(t)).    (4.11)
Combining (4.9)–(4.11) and bounding the delayed terms by means of the conditions on
d₁(t) and d₂(t), we obtain

D^α V(t, x) ≤ xᵀ(t)[−(Q + AᵀD₁A)B − B(Q + AᵀD₁A) + AᵀD₃A]x(t)
             + 2xᵀ(t)(Q + AᵀD₁A + BAᵀD₁ + AᵀD₂)f(Ax(t))
             + fᵀ(Ax(t))[−D₁A − AᵀD₁ − 2D₂ − D₃]f(Ax(t))
           = δᵀ(t) Ω δ(t),    (4.12)

where δ(t) = [xᵀ(t), fᵀ(Ax(t))]ᵀ.
Therefore, if Ω < 0, then by Lemma 2.2.6 the considered fractional-order neural
network (4.3) with successive time delay is globally Mittag–Leffler stable. This
completes the proof.
CHAPTER 5
NUMERICAL EXAMPLES
EXAMPLE 5.1
Consider the discussed neural network with successive time delay
with parameters

A = ( −1.5    0
        0   −1.5 ),

B = (  1.6  −2.5
      −2.5   1.6 ),

C = ( 1.5  1
       1   1.5 ),

and setting d₁(t) + d₂(t) = 0.8. Applying Theorem 3.2.1 in the MATLAB LMI
toolbox yields the feasible solution

Q = ( 6.1204  0.1849
      0.1849  6.5421 ).

The considered neural network with successive time delay is therefore
asymptotically stable.
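The feasibility claim can be double-checked by verifying that the returned Q is symmetric positive definite, which is what the LMI solver requires of it:

```python
import numpy as np

# Q reported in Example 5.1; stability requires Q = Q^T with positive eigenvalues.
Q = np.array([[6.1204, 0.1849],
              [0.1849, 6.5421]])

eigs = np.linalg.eigvalsh(Q)
print(eigs)  # both eigenvalues are positive
```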
EXAMPLE 5.2
Consider the fractional-order static neural network (5.1) with successive time delay.
Applying Theorem 4.2.1 in the MATLAB LMI toolbox yields the feasible solution

Q = ( 121.0774   14.4221
       12.8225  135.7928 ),

D₁ = ( 0.8517   0
        0       2.7326 ),

D₂ = ( 10.3981   0
         0      11.2321 ),

D₃ = ( 12.2223   0
         0      18.4125 ).

By this method, we can obtain the equilibrium point y* of system (5.1). The state
trajectories of system (5.1) with the given initial value y(0) show that the solution
of system (5.1) with initial value condition is asymptotically stable.
CHAPTER 6
CONCLUSION
This research work mainly investigates the Mittag–Leffler stability for a class
of fractional-order static neural networks with successive time delay. Some
convex integral terms are introduced into the Lyapunov functions, and some novel
stability criteria are derived in terms of LMIs. Future work will
consider the dissipativity analysis of fractional-order projection neural networks.
BIBLIOGRAPHY
[3] S.M. Abedi Pahnehkolaei, A. Alfi, J.A.T. Machado, “Dynamic stability
functions for fractional order systems”, Commun. Nonlinear Sci. Numer. Simul.
“Stability analysis of fractional order systems”, IET Control Theory Appl., 11,
1070–1074, 2017.
[8] T.L. Friesz, D.H. Bernstein, N.J. Mehta, R.L. Tobin, S. Ganjlizadeh, “Day-to-day
dynamic network disequilibria and idealized traveler information systems”, Oper. Res.,
42, 1120–1136, 1994.
[9] A. Zhang, Z.D. Xu, D. Liu, “New stability criteria for neural systems with delay”,
2011 Chinese Control and Decision Conference, CCDC 2011, pp. 2995-2999,
May 2011.
[10] B. Huang, G. Hui, D. Gong, Z. Wang, X. Meng, “A projection neural
network with mixed delays for solving linear variational inequality”, Neurocomputing.
“Measure approach”, Neurocomputing, vol. 69, no. 13-15, pp. 1776-1781, Aug.
2006.
[16] J. Liang, J. Cao, “A based-on LMI stability criterion for delayed
recurrent neural networks”, Chaos Solitons Fractals, vol. 28, no. 1, pp. 154-160,
2006.
[19] D. Baleanu, A.N. Ranjbar, S.J. Sadati, H. Delavari, “Lyapunov–
Krasovskii stability theorem for fractional systems with delay”, Rom. J. Phys.
2013.
[22] J. Di, Y. He, C. Zhang, M. Wu, “Novel stability criteria for recurrent
2014.
[25] Y. He, G.P. Liu, D. Rees, “New delay-dependent stability criteria for
neural networks with time-varying delay”, IEEE Trans. Neural Networks 18, 1,
310-314, 2007.
[26] Y. He, M. Wu, J.H. She, G.P. Liu, “An improved global asymptotic
stability criterion for delayed cellular neural networks”, IEEE Trans. Neural
neural networks with time-varying delays”, Phys. Lett. A 352, 335-340, 2006.
[28] G.J. Yu, C.Y. Lu, S.H. Tsai, B.D. Liu, “Stability of cellular neural
networks with time-varying delay,” IEEE Trans. Circuits Syst. (I): Fundam
stability for fractional order systems”, Commun. Nonlinear Sci. Numer. Simul.
a class of fractional optimal control problems”, Neural Process. Lett. 45, 1–16,
2016.