
LMI-BASED APPROACH ON STABILITY CRITERIA FOR A CLASS OF FRACTIONAL-ORDER STATIC NEURAL NETWORK WITH SUCCESSIVE TIME DELAY
PROJECT
Submitted to the
THIRUVALLUVAR UNIVERSITY
in partial fulfillment of the requirements for the award of the Degree of
MASTER OF SCIENCE
IN
MATHEMATICS

By
SABEENA.Y
(REG. NO. 34620P20005)
Under the Guidance and Supervision of
Dr. J. YOGAMBIGAI, M.Sc., B.Ed., HDCA., Ph.D.,
HEAD AND ASSISTANT PROFESSOR
DEPARTMENT OF MATHEMATICS

APRIL -2022
M.M.E.S. WOMEN’S ARTS AND SCIENCE COLLEGE,
(Affiliated to Thiruvalluvar University)
MELVISHARAM-632509.

M.M.E.S. WOMEN’S ARTS AND SCIENCE COLLEGE
(Affiliated to Thiruvalluvar University)

BONAFIDE CERTIFICATE
This is to certify that the dissertation entitled "LMI-BASED APPROACH ON STABILITY CRITERIA FOR A CLASS OF FRACTIONAL-ORDER STATIC NEURAL NETWORK WITH SUCCESSIVE TIME DELAY", submitted in partial fulfillment of the requirements for the award of the Degree of Master of Science in Mathematics, is a record of original research work done by Ms. Y. SABEENA (Reg. No. 34620P20005) during the period 2020-2022 of her study in the Department of Mathematics, M.M.E.S. Women's Arts and Science College, Melvisharam-632509, a college affiliated to Thiruvalluvar University, Serkkadu, Vellore-632509, under my supervision and guidance, and that the dissertation has not formed the basis for the award of any Degree, Diploma, Associateship, Fellowship, or other similar title to any candidate of any University.

…………………………………............ ……………………………………........
Dr. J. YOGAMBIGAI, M.Sc., B.Ed., Dr. J. YOGAMBIGAI, M.Sc., B.Ed.,
HDCA., Ph.D., HDCA., Ph.D.,
Guide, Department of Mathematics, Head of Department of Mathematics,
M.M.E.S. Women’s Arts and Science College, M.M.E.S. Women’s Arts and Science College,
Melvisharam-632509. Melvisharam-632509.

Date:
Submitted for the Viva-Voce Examination held on:

Examiner:

DECLARATION

I, Y. SABEENA (Reg. No. 34620P20005), hereby declare that the dissertation entitled "LMI-BASED APPROACH ON STABILITY CRITERIA FOR A CLASS OF FRACTIONAL-ORDER STATIC NEURAL NETWORK WITH SUCCESSIVE TIME DELAY", submitted by me for the degree of Master of Science in Mathematics, is the record of original research work carried out by me during the period 2020-2022 under the guidance of Dr. J. YOGAMBIGAI, Head of the Department of Mathematics, M.M.E.S. Women's Arts and Science College, Melvisharam, and has not formed the basis for the award of any Degree, Diploma, Associateship, Fellowship, or other similar title in this or any other University or institution of higher learning.

Place:

Date:

Signature of the candidate


(SABEENA.Y)

ACKNOWLEDGEMENT

First, I wholeheartedly thank the Lord Almighty for giving me the opportunity to complete my project successfully.

I wish to express my sincere thanks to our Correspondent, Alijanab. Haji. K. ANEES AHMED SAHIB, B.A., for allowing me to do this project.

I extend my humble and sincere thanks to our Principal, Dr. FREDA GNANASELVAM, M.A., M.B.A., M.M.M., Ph.D., M.M.E.S. Women's Arts and Science College, Melvisharam, for allowing me to do my project in partial fulfilment of the Degree of Master of Science in Mathematics.

I express my sincere gratitude to our Head of the Department, Dr. J. Yogambigai, M.Sc., B.Ed., HDCA., Ph.D., Department of Mathematics, M.M.E.S. Women's Arts and Science College, for providing me the necessary facilities to complete this work.

I am immensely pleased to express my deep sense of gratitude and profound thanks to my guide, Dr. J. Yogambigai, Head of the Department of Mathematics, M.M.E.S. Women's Arts and Science College, for the successful completion of this dissertation work.

I wish to express my heartfelt thanks to the teaching staff of the Department of Mathematics, M.M.E.S. Women's Arts and Science College, for their valuable suggestions during the period of study.

I cannot find words to express my thanks to my parents, Mr. A.K. Yousuf and Mrs. Y. Shamshad, my sister, Ms. Y. Mubeena, and my friends for encouraging me throughout the period of study.

(Y. SABEENA)

ABSTRACT

This research work investigates the stability of a class of fractional-order static neural networks with successive time delay. By constructing a suitable Lyapunov functional, two sufficient conditions are derived which ensure that the addressed neural network is asymptotically stable. The Lyapunov functional contains integrals with variable upper limits, which are convex functions. Based on the fractional-order Lyapunov direct method and several inequality techniques, novel sufficient stability conditions ensuring the global Mittag-Leffler stability of fractional-order projection neural networks with successive time delay (FPNNs) are presented in the form of linear matrix inequalities (LMIs). The fractional-order neural network model under consideration includes multiple delay components and is therefore more general than models with a single time delay. Delay dependence is achieved by constructing a new Lyapunov functional and using advanced estimation techniques. The obtained results are formulated in terms of LMIs, which can be easily solved using the MATLAB LMI Toolbox. Finally, a numerical example is provided to illustrate the effectiveness of the proposed results.

CONTENTS

CHAPTER   TITLE

1   INTRODUCTION
2   PRELIMINARIES
3   PROBLEM AND RESULT
4   PROBLEM AND MAIN RESULT
5   NUMERICAL EXAMPLES
6   CONCLUSION
    BIBLIOGRAPHY

NOMENCLATURE

The notations are fairly standard.

Symbol      Meaning
T           Transpose of a matrix
R^n         The n-dimensional Euclidean space
R^{n x m}   Set of all n x m real matrices
I           Identity matrix with appropriate dimensions
||.||       The Euclidean norm of a vector (or the induced matrix norm)
P > 0       The matrix P is positive definite
P < 0       The matrix P is negative definite
Γ           The Gamma function
*           The symmetric term of a symmetric matrix
CHAPTER 1

INTRODUCTION

1.1 TIME-DELAY

The field of time-delay systems had its origin in the 18th century, and it received substantial attention in the early 20th century in works devoted to the modelling of biological, ecological and engineering systems.

The stability of time-delay systems became a formal subject of study in the 1940s, with contributions from Pontryagin and Bellman. Over the years, its interest and popularity have grown steadily.

Ordinary differential equations of the form ẋ(t) = f(t, x(t)) have been a prevalent mode of describing dynamical systems. In this description, the variables x(t) ∈ R^n are known as the state variables, and the differential equations characterize the evolution of the state variables with respect to time. A fundamental presumption of a system modelled in this way is that the future evolution of the system is completely determined by the current value of the state variables. In other words, the value of the state variables x(t), t_0 ≤ t < ∞, for any t_0 can be found using the initial condition x(t_0) = x_0.

In practice, many dynamical systems cannot be satisfactorily modelled by an ordinary differential equation. In particular, for many systems the future evolution of the state variables x(t) depends not only on their current value x(t_0) but also on their past values, say x(ξ), t_0 − r < ξ < t_0; such a system is called a time-delay system. Time-delay systems may arise in practice for a variety of reasons.

Time-delay systems are also called systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. They belong to the class of functional differential equations, which are infinite-dimensional, as opposed to ordinary differential equations.

1.2 NEURAL NETWORK

Neural network models can be defined by a function that takes an input (observation) and produces an output (decision): a function f : X → Y, a distribution over X, or a distribution over both X and Y. Sometimes models are intimately associated with a particular learning rule. A common use of the phrase "ANN model" is really the definition of a class of such functions.

An Artificial Neural Network (ANN) combines biological principles with advanced statistics to solve problems in domains such as pattern recognition and game play. ANNs adopt the basic model of neuron analogues connected to each other in a variety of ways.

A neuron's network function f(x) is defined as a composition of other functions g_i(x), which can further be decomposed into other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between functions. A widely used type of composition is the nonlinear weighted sum

f(x) = K(Σ_i w_i g_i(x)),

where K is commonly referred to as the activation function. An important characteristic of the activation function is that it provides a smooth transition as input values change.

1.3 STABILITY CONCEPTS

Let y(t) be a solution of the differential equation

ẋ(t) = f(t, x_t).   (1.3)

The stability of this solution concerns the system's behaviour when the system trajectory x(t) deviates from y(t). Without loss of generality, we will assume that the functional differential equation (1.3) admits the solution x(t) = 0, which will be referred to as the trivial solution. Indeed, if it is desirable to study the stability of a non-trivial solution y(t), then we may resort to the variable transformation z(t) = x(t) − y(t), so that the new system

ż(t) = f(t, z_t + y_t) − f(t, y_t)

has the trivial solution z(t) = 0.

For a function φ ∈ C([a, b], R^n), define the continuous norm ||.||_c by

||φ||_c = max_{a ≤ θ ≤ b} ||φ(θ)||.

In this definition, the vector norm ||.|| represents the 2-norm ||.||_2.

1.4 LINEAR MATRIX INEQUALITY

In convex optimization, an LMI is an expression of the form

LMI(y) = A_0 + y_1 A_1 + y_2 A_2 + … + y_m A_m ≥ 0,

where

1. y = [y_i, i = 1, 2, …, m] is a real vector;

2. A_0, A_1, A_2, …, A_m are n x n symmetric matrices;

3. B ≥ 0 is a generalized inequality meaning that B is a positive semidefinite matrix.

This linear matrix inequality specifies a convex constraint on y. Typical tasks are to determine whether an LMI is feasible (i.e., whether there exists a vector y such that LMI(y) ≥ 0) or to solve a convex optimization problem with LMI constraints. Many optimization problems in control theory, system identification and signal processing can be formulated using LMIs. LMIs also lead to the minimization of a real linear function subject to the primal and dual convex cones governing the LMI.
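As a concrete illustration, the feasibility question can be checked numerically. The following is a minimal sketch in Python, assuming the CVXPY library and an SDP-capable solver (the thesis itself later uses the MATLAB LMI Toolbox); the matrices A_0, A_1, A_2 below are illustrative placeholders, not taken from the text.

import numpy as np
import cvxpy as cp

# Illustrative placeholder matrices A_0, A_1, A_2 (assumed data).
A0 = np.array([[2.0, 0.5], [0.5, 1.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

y = cp.Variable(2)                            # the real vector y = [y_1, y_2]
F = A0 + y[0] * A1 + y[1] * A2                # LMI(y), affine in y
prob = cp.Problem(cp.Minimize(0), [F >> 0])   # feasibility of LMI(y) >= 0
prob.solve()                                  # requires an SDP solver, e.g. SCS
print(prob.status, y.value)                   # 'optimal' means the LMI is feasible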

1.5 LYAPUNOV-KRASOVSKII FUNCTIONAL


An effective method for determining the stability of a time-delay system is the Lyapunov method. For a system without delay, this requires the construction of a Lyapunov function V(t, x(t)), which in some sense is a potential measure quantifying the deviation of the state x(t) from the trivial solution 0. For a delay-free system, x(t) is all that is needed to specify the system's future evolution beyond t; in a time-delay system, the "state" at time t required for the same purpose is the value of x on the interval [t − r, t], i.e., x_t. It is therefore natural to expect that, for a time-delay system, the corresponding Lyapunov function be a functional V(t, x_t) depending on x_t, which should also measure the deviation of x_t from the trivial solution 0. Such a functional is known as a Lyapunov-Krasovskii Functional (LKF).

1.6 LYAPUNOV FUNCTION

A Lyapunov function is a scalar function V(y), defined on a region D, that is continuous, positive definite (V(y) > 0 for all y ≠ 0), and has continuous first-order partial derivatives at every point of D. The derivative of V with respect to the system y′ = f(y) is written as V̇(y) = ∇V(y) · f(y). The existence of a Lyapunov function for which V̇ ≤ 0 on some region D containing the origin guarantees the stability of the zero solution of y′ = f(y), while the existence of a Lyapunov function for which V̇(y) is negative definite on some region D containing the origin guarantees the asymptotic stability of the zero solution of y′ = f(y).

Example: Given the system

y′ = z, z′ = −y − 2z

and the Lyapunov function

V(y, z) = (y^2 + z^2)/2,

we obtain

V̇(y, z) = yz + z(−y − 2z) = −2z^2.
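A quick numerical sanity check of this example (a sketch of our own, assuming NumPy and SciPy): integrating the system and evaluating V along the trajectory should show V non-increasing, since V̇ = −2z^2 ≤ 0.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state):
    y, z = state
    return [z, -y - 2.0 * z]                  # y' = z, z' = -y - 2z

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], dense_output=True)
ts = np.linspace(0.0, 10.0, 400)
y, z = sol.sol(ts)
V = 0.5 * (y ** 2 + z ** 2)                   # V(y, z) = (y^2 + z^2)/2
print(bool(np.all(np.diff(V) <= 1e-6)))       # True: V never increases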

1.7 FRACTIONAL ORDER DERIVATIVE

The idea of a fractional-order derivative can be traced back to a letter from Leibniz to L'Hôpital in 1695. It is a generalization of integer-order calculus to a real or complex order. Formally, the real-order generalization is introduced as follows:

a_D_t^α = d^α/dt^α for α > 0,   1 for α = 0,   ∫_a^t (dτ)^{−α} for α < 0,

with α ∈ R.

The Riemann-Liouville, Caputo and Grünwald-Letnikov fractional derivatives are important approaches to fractional-order differentiation.

The Riemann-Liouville fractional derivative is the most important extension of ordinary calculus. In contrast to ordinary calculus, the fractional derivative of a constant is not zero.

Let us suppose that the function f(t) is defined on the interval [a, b], where a and b can even be infinite. The fractional derivative with the lower terminal at the left end of the interval [a, b], denoted a_D_t^p f(t), is called the left fractional derivative. The fractional derivative with the upper terminal at the right end of the interval [a, b] is called the right fractional derivative.

CHAPTER 2

PRELIMINARIES

2.1 DEFINITIONS

Definition 2.1.1 (Stable)

For the system described by

ẋ(t) = f(t, x_t),   (2.1)

the trivial solution x(t) = 0 is said to be stable if, for any t_0 ∈ R and any ε > 0, there exists a δ = δ(t_0, ε) > 0 such that ||x_{t_0}||_c < δ implies ||x(t)|| < ε for t ≥ t_0.

Definition 2.1.2 (Asymptotically stable)

The system (2.1) is said to be asymptotically stable if it is stable and, for any t_0 ∈ R and any ε > 0, there exists a δ_a = δ_a(t_0, ε) > 0 such that ||x_{t_0}||_c < δ_a implies lim_{t→∞} x(t) = 0.

Definition 2.1.3 (Uniformly stable)

The system (2.1) is said to be uniformly stable if it is stable and δ(t_0, ε) can be chosen independently of t_0.

Definition 2.1.4 (Uniformly asymptotically stable)

The system (2.1) is uniformly asymptotically stable if it is uniformly stable and there exists a δ_a > 0 such that, for any η > 0, there exists a T = T(δ_a, η) such that ||x_{t_0}||_c < δ_a implies ||x(t)|| < η for t ≥ t_0 + T and t_0 ∈ R.

Definition 2.1.5 (Globally uniformly asymptotically stable)

The system (2.1) is globally (uniformly) asymptotically stable if it is (uniformly) asymptotically stable and δ_a can be an arbitrarily large, finite number.

Definition 2.1.6 (Linear matrix inequality)

A strict linear matrix inequality (LMI) has the general form

F(x) = F_0 + Σ_{i=1}^{m} x_i F_i > 0,

where x = (x_1, x_2, …, x_m)^T ∈ R^m is a vector consisting of m variables and the symmetric matrices F_i = F_i^T ∈ R^{n x n}, i = 0, 1, 2, …, m, are m + 1 given constant matrices.

Definition 2.1.7 (Stability)

The ordinary differential equation system on which we focus is ẋ = f(t, x), with f : [0, ∞) x D → R^n piecewise continuous in t and locally Lipschitz in x, where D is a domain containing the origin.

Definition 2.1.8

Let α ∈ R^+ and f ∈ L^1([0, b], R). The Riemann-Liouville fractional-order integral is defined as

D^{−α} f(t) = (1/Γ(α)) ∫_0^t f(s)/(t − s)^{1−α} ds,

where 0 ≤ t ≤ b and Γ(α) = ∫_0^{+∞} t^{α−1} e^{−t} dt.

Definition 2.1.9

Let f ∈ C^n([0, ∞), R), n − 1 ≤ α < n, where n ∈ N^+. The Caputo fractional-order derivative is defined by

D^α f(t) = (1/Γ(n − α)) ∫_0^t f^{(n)}(s)/(t − s)^{α+1−n} ds.

In particular, when 0 < α < 1,

D^α f(t) = (1/Γ(1 − α)) ∫_0^t f′(s)/(t − s)^α ds.
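For intuition, the Caputo derivative with 0 < α < 1 can be approximated numerically. The sketch below uses the standard L1 discretization (our choice of scheme; the thesis does not prescribe one) and checks it against the known Caputo derivative of f(t) = t, namely t^{1−α}/Γ(2 − α).

import numpy as np
from math import gamma

def caputo_l1(f_vals, h, alpha):
    # L1 approximation of the Caputo derivative at the last grid point:
    # D^alpha f(t_k) ~ h^(-alpha)/Gamma(2-alpha) * sum_j b_j (f_{k-j} - f_{k-j-1}),
    # with weights b_j = (j+1)^(1-alpha) - j^(1-alpha).
    k = len(f_vals) - 1
    j = np.arange(k)
    b = (j + 1.0) ** (1 - alpha) - j ** (1 - alpha)
    diffs = f_vals[k - j] - f_vals[k - j - 1]
    return (h ** (-alpha) / gamma(2 - alpha)) * np.sum(b * diffs)

# Check against the exact Caputo derivative of f(t) = t at t = 1.
alpha, h = 0.5, 1e-3
grid = np.arange(0.0, 1.0 + h / 2, h)   # samples of f(t) = t on [0, 1]
print(caputo_l1(grid, h, alpha), 1.0 ** (1 - alpha) / gamma(2 - alpha))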

Definition 2.1.10

Let α > 0. The one-parameter Mittag-Leffler function E_α of order α is defined as

E_α(z) = Σ_{k=0}^{+∞} z^k / Γ(αk + 1),

where the right-hand series is convergent.
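A direct way to get a feel for E_α is to truncate this series numerically (a rough sketch of our own; the truncation is only adequate for moderate |z|). Note that E_1(z) = e^z, which gives a simple check.

from math import gamma, e

def mittag_leffler(z, alpha, terms=80):
    # Truncated series from Definition 2.1.10.
    return sum(z ** k / gamma(alpha * k + 1) for k in range(terms))

print(mittag_leffler(1.0, 1.0), e)   # E_1(1) = e, so both are about 2.71828
print(mittag_leffler(0.0, 0.8))      # E_alpha(0) = 1 for every alpha > 0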

Definition 2.1.11

Let K be a closed convex subset of R^n. For any given x ∈ R^n, the projection operator P_K : R^n → K is defined by

P_K[x] = argmin_{z ∈ K} ||x − z||.
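For the closed rectangle K used throughout this work, the projection reduces to a componentwise clip. A small sketch (assuming NumPy; the bounds are illustrative values, not from the text):

import numpy as np

k_minus = np.array([-2.0, -2.0])   # illustrative lower bounds k_i^-
k_plus = np.array([2.0, 2.0])      # illustrative upper bounds k_i^+

def proj_K(x):
    # For a rectangle K, argmin_{z in K} ||x - z|| is the componentwise clip.
    return np.clip(x, k_minus, k_plus)

x, y = np.array([3.0, 0.5]), np.array([-1.0, 4.0])
# Nonexpansiveness (cf. Lemma 2.2.5 below): ||P_K(x) - P_K(y)|| <= ||x - y||
print(np.linalg.norm(proj_K(x) - proj_K(y)) <= np.linalg.norm(x - y))   # True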

2.2 LEMMA

Lemma 2.2.1 (Jensen’s Inequality)[6]

For any constant symmetric matrix M ∈ R^{n x n} with M = M^T > 0, scalar γ > 0, and vector function w : [0, γ] → R^n such that the integrations below are well defined,

γ ∫_0^γ w^T(s) M w(s) ds ≥ (∫_0^γ w(s) ds)^T M (∫_0^γ w(s) ds).
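A numerical spot check of this inequality (not a proof; the test data below are arbitrary, and NumPy is assumed):

import numpy as np

rng = np.random.default_rng(0)
n, gam = 3, 2.0
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)                 # a symmetric positive definite M

s = np.linspace(0.0, gam, 2001)
w = np.stack([np.sin(s), np.cos(2 * s), s]) # an arbitrary smooth w : [0, gam] -> R^3

lhs = gam * np.trapz(np.einsum('is,ij,js->s', w, M, w), s)
iw = np.trapz(w, s, axis=1)                 # the vector integral of w over [0, gam]
rhs = iw @ M @ iw
print(lhs >= rhs)                           # True, as the lemma asserts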

Lemma 2.2.2 (Schur complement)[9]

For given real matrices Ω_1, Ω_2, Ω_3 of appropriate dimensions satisfying Ω_1 = Ω_1^T and Ω_2 = Ω_2^T, the following conditions are equivalent:

a) [Ω_1, Ω_3; Ω_3^T, Ω_2] < 0,

b) Ω_1 < 0, Ω_2 − Ω_3^T Ω_1^{−1} Ω_3 < 0,

c) Ω_2 < 0, Ω_1 − Ω_3 Ω_2^{−1} Ω_3^T < 0.
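The equivalence of conditions (a) and (b) can be spot-checked on random symmetric blocks (a sketch with arbitrary test data, assuming NumPy):

import numpy as np

rng = np.random.default_rng(1)
n = 3
G = rng.standard_normal((n, n)); Omega1 = -(G @ G.T) - np.eye(n)      # Omega1 < 0
H = rng.standard_normal((n, n)); Omega2 = -(H @ H.T) - 5 * np.eye(n)  # Omega2 < 0
Omega3 = 0.1 * rng.standard_normal((n, n))

full = np.block([[Omega1, Omega3], [Omega3.T, Omega2]])
cond_a = bool(np.all(np.linalg.eigvalsh(full) < 0))
schur = Omega2 - Omega3.T @ np.linalg.inv(Omega1) @ Omega3
cond_b = bool(np.all(np.linalg.eigvalsh(Omega1) < 0)
              and np.all(np.linalg.eigvalsh(schur) < 0))
print(cond_a, cond_b)   # the two booleans always agree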

Lemma 2.2.3[11]

Consider the Caputo fractional-order system D^α x(t) = f(t, x(t)), t ≥ t_0, with initial value x(t_0) and α ∈ (0, 1), where f : [t_0, ∞) x Ω → R^n and Ω ⊂ R^n. The Caputo system has a unique solution if f(t, x) satisfies the locally Lipschitz condition with respect to x.

Lemma 2.2.4[5]

Let Ω ⊂ R^n. If V(h(t)) : Ω → R and h(t) : [0, ∞) → Ω are two continuous and differentiable functions, and V(h(t)) is convex over Ω, then

D^α V(h(t)) ≤ (∂V/∂h)^T D^α h(t), ∀α ∈ (0, 1), ∀t ≥ 0.

In particular, for any P > 0, when V(h(t)) = h^T(t)Ph(t), the following well-known inequality holds:

D^α (h^T(t)Ph(t)) ≤ 2h^T(t)P D^α h(t).

Lemma 2.2.5[8]

For any x, y ∈ R^n, the projection operator P_K satisfies the following inequalities:

||P_K(x) − P_K(y)|| ≤ ||x − y||

and

(x − y)^T (P_K(x) − P_K(y)) ≥ ||P_K(x) − P_K(y)||^2.

Lemma 2.2.6[9]

Let x = 0 be an equilibrium point of the system D^α x(t) = −Cx(t) + f(Ax(t)), t ≥ 0, with D ⊂ R^n and 0 ∈ D. If there exists a continuously differentiable function V(t, x(t)) : [0, ∞) x D → R satisfying the locally Lipschitz condition in x such that

α_1 ||x(t)||^a ≤ V(t, x(t)) ≤ α_2 ||x(t)||^{ab},
D^α V(t, x(t)) ≤ −α_3 ||x(t)||^{ab},

where t ≥ 0, x ∈ D, α ∈ (0, 1), and α_1, α_2, α_3, a, b are arbitrary positive scalars, then x = 0 is Mittag-Leffler stable. If all conditions are satisfied globally on R^n, then x = 0 is globally Mittag-Leffler stable.

2.3 THEOREM

Theorem 2.3.1 (Lyapunov's first theorem)

If there exists a positive definite function V(x(t)) on ∆ such that

V_1 + V_2 · f ≤ 0,

where ∆ = {(t, x) : t ≥ 0, ||x|| < a < ∞}, then x = 0 is a stable solution of

x′ = f(t, x), x(t_0) = x(0), 0 ≤ t_0 ≤ ∞.

Indeed, for g(t, y) = 0 we have that y(t, t_0, y_0) = y_0 is a solution of the comparison equation, which shows that y = 0 is stable.
Theorem 2.3.2 (Lyapunov’s second theorem)

Assume that there exists a positive definite function V(t, x) such that its derivative along the solutions of ẋ = f(t, x_t) is negative definite. Then the solution x = 0 of ẋ = f(t, x_t) is uniformly asymptotically stable.

Theorem 2.3.3 (Lyapunov-Krasovskii stability theorem)

Suppose f : R x C → R^n in ẋ(t) = f(t, x_t) maps R x (bounded sets in C) into bounded sets in R^n, and that u, v, w : R^+ → R^+ are continuous non-decreasing functions, where additionally u(s) and v(s) are positive for s > 0 and u(0) = v(0) = 0. If there exists a continuously differentiable functional V : R x C → R such that

u(||φ(0)||) ≤ V(t, φ) ≤ v(||φ||_c)

and

V̇(t, φ) ≤ −w(||φ(0)||),

then the trivial solution of ẋ(t) = f(t, x_t) is uniformly stable. If w(s) > 0 for s > 0, then it is uniformly asymptotically stable. If, in addition, lim_{s→∞} u(s) = ∞, then it is globally uniformly asymptotically stable.
CHAPTER 3

PROBLEM AND RESULT

3.1 PROBLEM DESCRIPTION

In this research work, based on the basic properties of the neural network,

we will mainly study the following neural network model

ẏ(t) = −Cy(t) + f(Ay(t)) + g(By(t − d_1(t) − d_2(t))), t ≥ 0,
y_i(0) = y_i0, i = 1, 2, …, n,   (3.1)

where C = diag{c_1, c_2, …, c_n} with c_i > 0; y(t) ∈ R^n denotes the state vector of system (3.1); the connection weight matrix is A = [a_1^T, a_2^T, …, a_n^T]^T with a_i = [a_{i1}, a_{i2}, …, a_{in}]; J = [j_1, j_2, …, j_n]^T ∈ R^n denotes the external input; the activation function is f(Ay(t)) = [f_1(a_1 y(t)), f_2(a_2 y(t)), …, f_n(a_n y(t))]^T with f_i(a_i y(t)) = P_K(a_i y(t) + j_i); K is a closed rectangle given by K = {y = [y_1, y_2, …, y_n]^T ∈ R^n : k_i^- ≤ y_i ≤ k_i^+, i = 1, 2, …, n}; and the projection operator P_K(z) = [P_K(z_1), P_K(z_2), …, P_K(z_n)]^T is defined componentwise by

P_K(z_i) = k_i^- if z_i < k_i^-,   z_i if k_i^- ≤ z_i ≤ k_i^+,   k_i^+ if z_i > k_i^+,   (3.2)

where k_i^+, k_i^- (i = 1, 2, …, n) are given constants and z ∈ R^n.

Since the activation function of neural network (3.1) is a projection mapping, system (3.1) is a neural network with successive time delay.

System (3.1) with the initial value condition has a unique equilibrium point, denoted by y* = [y_1*, y_2*, …, y_n*]^T. By using the transformation x(t) = y(t) − y*, system (3.1) can be rewritten as

ẋ(t) = −Cx(t) + f(Ax(t)) + g(Bx(t − d_1(t) − d_2(t))), t ≥ 0,
x_i(0) = x_i0, i = 1, 2, …, n,   (3.3)

where f(Ax(t)) = [f_1(a_1 x(t)), f_2(a_2 x(t)), …, f_n(a_n x(t))]^T and f_i(a_i x(t)) = P_K(a_i(x(t) + y*) + j_i) − P_K(a_i y* + j_i), with f_i(0) = 0, i = 1, 2, …, n, and x_i0 = y_i0 − y_i*.
The time delays d_1(t) and d_2(t) are time-varying differentiable functions that satisfy

0 ≤ d_1(t) ≤ d_1 < ∞, ḋ_1(t) ≤ τ_1 < ∞,
0 ≤ d_2(t) ≤ d_2 < ∞, ḋ_2(t) ≤ τ_2 < ∞,

and we denote

d(t) = d_1(t) + d_2(t), d = d_1 + d_2, τ = τ_1 + τ_2.

The purpose of this research work is to discuss the asymptotic stability problem for system (3.1). Based on the Lyapunov direct method, two sufficient conditions are established in terms of LMIs.

3.2 MAIN RESULT

Theorem 3.2.1

System (3.3) is asymptotically stable if there exist a positive definite matrix Q ∈ R^{n x n} and diagonal positive definite matrices D_i = diag{d_{i1}, d_{i2}, …, d_{in}} (i = 1, 2, 3) such that the following LMI holds:

μ = [φ_11, φ_12; *, φ_22] < 0,

where φ_11 = −(Q + A^T D_1 A)C − C(Q + A^T D_1 A) + A^T D_3 A, φ_12 = Q + A^T D_1 A + C A^T D_1 + A^T D_2, and φ_22 = −2D_2 − D_3 − D_1 A − A^T D_1.

Proof:

From the definition of P_K, P_K(u + a_i y* + j_i) is a non-decreasing function with respect to u. Since f_i(u) = P_K(u + a_i y* + j_i) − P_K(a_i y* + j_i), f_i(u) is also non-decreasing with respect to u. Now, we have

0 ≤ f_i(u)/u ≤ 1, ∀u ≠ 0, f_i(0) = 0.   (3.4)

Inequality (3.4) implies that u(u − f_i(u)) ≥ 0 and u f_i(u) ≥ 0. So

0 ≤ ∫_0^{a_i x(t)} (s − f_i(s)) ds,   0 ≤ ∫_0^{a_i x(t)} f_i(s) ds.   (3.5)

From (3.5), there always holds

0 ≤ ∫_0^{a_i x(t)} (s − f_i(s)) ds = ∫_0^{a_i x(t)} s ds − ∫_0^{a_i x(t)} f_i(s) ds ≤ ∫_0^{a_i x(t)} s ds = (1/2)(a_i x(t))^2.   (3.6)

Let h(u) = ∫_0^u (s − f_i(s)) ds; then h(u) is convex with respect to u. In fact, h′(u) = u − f_i(u), and for any u_1 < u_2,

h′(u_2) − h′(u_1) = u_2 − u_1 − (f_i(u_2) − f_i(u_1)) ≥ (u_2 − u_1) − (u_2 − u_1) = 0,

since f_i(u_2) − f_i(u_1) ≤ u_2 − u_1 by the nonexpansiveness of P_K.

So h(u) is convex.

Next, we select the following Lyapunov function:

V(t, x) = x^T(t)Qx(t) + 2 Σ_{i=1}^{n} d_{1i} ∫_0^{a_i x(t)} (s − f_i(s)) ds + ∫_{t−d_1(t)−d_2(t)}^{t} u^T(s)Pu(s) ds,   (3.7)

where Q is a positive definite matrix. Obviously,

γ_1 ||x(t)||^2 ≤ V(t, x) ≤ γ_2 ||x(t)||^2,   (3.8)

where γ_1 = λ_min(Q) and γ_2 = λ_max(Q) + max_{1≤i≤n}{d_{1i}} λ_max(A^T A).
max

Since a_i is a row vector and x(t) is a column vector, a_i x(t) and f_i(a_i x(t)) are two real numbers. Using the rules of matrix multiplication, computing the derivative of V(t, x) along the trajectory of system (3.3) gives

V̇(t, x) = d/dt [x^T(t)Qx(t)] + 2 Σ_{i=1}^{n} d_{1i} d/dt ∫_0^{a_i x(t)} (s − f_i(s)) ds + d/dt ∫_{t−d_1(t)−d_2(t)}^{t} u^T(s)Pu(s) ds

= 2x^T(t)Qẋ(t) + 2 Σ_{i=1}^{n} d_{1i} (a_i x(t) − f_i(a_i x(t))) a_i ẋ(t) + u^T(t)Pu(t) − (1 − ḋ_1(t) − ḋ_2(t)) u^T(t − d_1(t) − d_2(t)) P u(t − d_1(t) − d_2(t))

= 2x^T(t)Qẋ(t) + 2(Ax(t) − f(Ax(t)))^T D_1 A ẋ(t) + u^T(t)Pu(t) − u^T(t)Pu(t)

= 2x^T(t)(Q + A^T D_1 A)ẋ(t) − 2f^T(Ax(t)) D_1 A ẋ(t)

= 2x^T(t)(Q + A^T D_1 A)[−Cx(t) + f(Ax(t)) + g(Bx(t − d_1(t) − d_2(t)))] − 2f^T(Ax(t)) D_1 A [−Cx(t) + f(Ax(t)) + g(Bx(t − d_1(t) − d_2(t)))]

= x^T(t)[−(Q + A^T D_1 A)C − C(Q + A^T D_1 A)]x(t) + 2x^T(t)(Q + A^T D_1 A + C A^T D_1) f(Ax(t)) − 2f^T(Ax(t)) D_1 A f(Ax(t)),   (3.9)

where D_1 = diag{d_{11}, d_{12}, …, d_{1n}}.

The properties of P_K (Lemma 2.2.5) imply that

[P_K(a_i x(t) + a_i y* + j_i) − P_K(a_i y* + j_i)]^2 ≤ [P_K(a_i x(t) + a_i y* + j_i) − P_K(a_i y* + j_i)] a_i x(t),

that is, f_i^2(a_i x(t)) ≤ f_i(a_i x(t)) a_i x(t).

For any diagonal positive definite matrix D_2 = diag{d_21, d_22, …, d_2n}, we have

0 ≤ 2f^T(Ax(t)) D_2 Ax(t) − 2f^T(Ax(t)) D_2 f(Ax(t)).   (3.10)

From (3.4), it is easy to obtain

f_i^2(a_i x(t)) ≤ (a_i x(t))^2.

So, for any diagonal positive definite matrix D_3 = diag{d_31, d_32, …, d_3n},

0 ≤ x^T(t) A^T D_3 Ax(t) − f^T(Ax(t)) D_3 f(Ax(t)).   (3.11)

Combining (3.9) with (3.10)-(3.11) yields

V̇(t, x) ≤ x^T(t)[−(Q + A^T D_1 A)C − C(Q + A^T D_1 A) + A^T D_3 A]x(t) + 2x^T(t)(Q + A^T D_1 A + C A^T D_1 + A^T D_2) f(Ax(t)) + f^T(Ax(t))(−2D_2 − D_3 − D_1 A − A^T D_1) f(Ax(t))

= η^T(t) μ η(t),   (3.12)

where η(t) = [x^T(t), f^T(Ax(t))]^T.

Therefore, if μ < 0, the considered neural network (3.3) with successive time delay is asymptotically stable. The proof is completed.

CHAPTER 4

PROBLEM AND MAIN RESULT

4.1 PROBLEM DESCRIPTION

In this research work, based on the basic properties of the fractional-order

derivative with successive time delay, we will mainly study the following

fractional-order static neural network model

D^α y(t) = −By(t) + f(Ay(t)) + g(Cy(t − d_1(t) − d_2(t))), t ≥ 0,
y_i(0) = y_i0, i = 1, 2, …, n,   (4.1)

where α ∈ (0, 1); B = diag{b_1, b_2, …, b_n} with b_i > 0; y(t) ∈ R^n denotes the state vector of system (4.1); the connection weight matrix is A = [a_1^T, a_2^T, …, a_n^T]^T with a_i = [a_{i1}, a_{i2}, …, a_{in}]; J = [j_1, j_2, …, j_n]^T ∈ R^n denotes the external input; the activation function is f(Ay(t)) = [f_1(a_1 y(t)), f_2(a_2 y(t)), …, f_n(a_n y(t))]^T with f_i(a_i y(t)) = P_K(a_i y(t) + j_i); K is a closed rectangle given by K = {y = [y_1, y_2, …, y_n]^T ∈ R^n : k_i^- ≤ y_i ≤ k_i^+, i = 1, 2, …, n}; and the projection operator P_K(z) = [P_K(z_1), P_K(z_2), …, P_K(z_n)]^T is defined componentwise by

P_K(z_i) = k_i^- if z_i < k_i^-,   z_i if k_i^- ≤ z_i ≤ k_i^+,   k_i^+ if z_i > k_i^+,   (4.2)

where k_i^+, k_i^- (i = 1, 2, …, n) are given constants and z ∈ R^n.

Since the activation function of the fractional-order static neural network (4.1) is a projection mapping, system (4.1) is a fractional-order projection neural network with successive time delay.

System (4.1) with the initial value condition has a unique equilibrium point, denoted by y* = [y_1*, y_2*, …, y_n*]^T. By using the transformation x(t) = y(t) − y*, system (4.1) can be rewritten as

D^α x(t) = −Bx(t) + f(Ax(t)) + g(Cx(t − d_1(t) − d_2(t))), t ≥ 0,
x_i(0) = x_i0, i = 1, 2, …, n,   (4.3)

where f(Ax(t)) = [f_1(a_1 x(t)), f_2(a_2 x(t)), …, f_n(a_n x(t))]^T and f_i(a_i x(t)) = P_K(a_i(x(t) + y*) + j_i) − P_K(a_i y* + j_i), with f_i(0) = 0, i = 1, 2, …, n, and x_i0 = y_i0 − y_i*.

The time delays d_1(t) and d_2(t) are time-varying differentiable functions that satisfy

0 ≤ d_1(t) ≤ d_1 < ∞, ḋ_1(t) ≤ τ_1 < ∞,
0 ≤ d_2(t) ≤ d_2 < ∞, ḋ_2(t) ≤ τ_2 < ∞,

and we denote

d(t) = d_1(t) + d_2(t), d = d_1 + d_2, τ = τ_1 + τ_2.

The purpose of this research work is to discuss the asymptotic stability problem for system (4.1). Based on the Lyapunov direct method, two sufficient conditions are established in terms of LMIs.

4.2 MAIN RESULT

Theorem 4.2.1

System (4.3) is globally Mittag-Leffler stable if there exist a positive definite matrix Q ∈ R^{n x n} and diagonal positive definite matrices D_i = diag{d_{i1}, d_{i2}, …, d_{in}} (i = 1, 2, 3) such that the following LMI holds:

Ω = [Φ_11, Φ_12; *, Φ_22] < 0,

where Φ_11 = −(Q + A^T D_1 A)B − B(Q + A^T D_1 A) + A^T D_3 A, Φ_12 = Q + A^T D_1 A + B A^T D_1 + A^T D_2, and Φ_22 = −2D_2 − D_3 − D_1 A − A^T D_1.

Proof:

From the definition of P_K, P_K(u + a_i y* + j_i) is a non-decreasing function with respect to u. Since f_i(u) = P_K(u + a_i y* + j_i) − P_K(a_i y* + j_i), f_i(u) is also non-decreasing with respect to u. From Lemma 2.2.5, we have

0 ≤ f_i(u)/u ≤ 1, ∀u ≠ 0, f_i(0) = 0.   (4.4)

Inequality (4.4) implies that u(u − f_i(u)) ≥ 0 and u f_i(u) ≥ 0. So

0 ≤ ∫_0^{a_i x(t)} (s − f_i(s)) ds,   0 ≤ ∫_0^{a_i x(t)} f_i(s) ds.   (4.5)

From (4.5), there always holds

0 ≤ ∫_0^{a_i x(t)} (s − f_i(s)) ds = ∫_0^{a_i x(t)} s ds − ∫_0^{a_i x(t)} f_i(s) ds ≤ ∫_0^{a_i x(t)} s ds = (1/2)(a_i x(t))^2.   (4.6)

Let h(u) = ∫_0^u (s − f_i(s)) ds; then h(u) is convex with respect to u. In fact, h′(u) = u − f_i(u), and for any u_1 < u_2,

h′(u_2) − h′(u_1) = u_2 − u_1 − (f_i(u_2) − f_i(u_1)) ≥ (u_2 − u_1) − (u_2 − u_1) = 0,

since f_i(u_2) − f_i(u_1) ≤ u_2 − u_1 by the nonexpansiveness of P_K.

So h(u) is convex.

Next, we select the following Lyapunov function:

V(t, x) = x^T(t)Qx(t) + 2 Σ_{i=1}^{n} d_{1i} ∫_0^{a_i x(t)} (s − f_i(s)) ds + ∫_{t−d_1(t)−d_2(t)}^{t} u^T(s)Pu(s) ds,   (4.7)

where Q is a positive definite matrix. Obviously,

γ_1 ||x(t)||^2 ≤ V(t, x) ≤ γ_2 ||x(t)||^2,   (4.8)

where γ_1 = λ_min(Q) and γ_2 = λ_max(Q) + max_{1≤i≤n}{d_{1i}} λ_max(A^T A).

Since a_i is a row vector and x(t) is a column vector, a_i x(t) and f_i(a_i x(t)) are two real numbers. By using Lemma 2.2.4 and the rules of matrix multiplication, computing the Caputo fractional-order derivative of V(t, x) along the trajectory of system (4.3) gives

D^α V(t, x) = D^α [x^T(t)Qx(t)] + 2 Σ_{i=1}^{n} d_{1i} D^α ∫_0^{a_i x(t)} (s − f_i(s)) ds + D^α ∫_{t−d_1(t)−d_2(t)}^{t} u^T(s)Pu(s) ds

≤ 2x^T(t)Q D^α x(t) + 2 Σ_{i=1}^{n} d_{1i} (a_i x(t) − f_i(a_i x(t))) a_i D^α x(t) + u^T(t)Pu(t) − (1 − ḋ_1(t) − ḋ_2(t)) u^T(t − d_1(t) − d_2(t)) P u(t − d_1(t) − d_2(t))

= 2x^T(t)Q D^α x(t) + 2(Ax(t) − f(Ax(t)))^T D_1 A D^α x(t) + u^T(t)Pu(t) − u^T(t)Pu(t)

= 2x^T(t)(Q + A^T D_1 A) D^α x(t) − 2f^T(Ax(t)) D_1 A D^α x(t)

= 2x^T(t)(Q + A^T D_1 A)[−Bx(t) + f(Ax(t)) + g(Cx(t − d_1(t) − d_2(t)))] − 2f^T(Ax(t)) D_1 A [−Bx(t) + f(Ax(t)) + g(Cx(t − d_1(t) − d_2(t)))]

= x^T(t)[−(Q + A^T D_1 A)B − B(Q + A^T D_1 A)]x(t) + 2x^T(t)(Q + A^T D_1 A + B A^T D_1) f(Ax(t)) − 2f^T(Ax(t)) D_1 A f(Ax(t)),   (4.9)

where D_1 = diag{d_{11}, d_{12}, …, d_{1n}}.

Lemma 2.2.5 implies that

[P_K(a_i x(t) + a_i y* + j_i) − P_K(a_i y* + j_i)]^2 ≤ [P_K(a_i x(t) + a_i y* + j_i) − P_K(a_i y* + j_i)] a_i x(t),

that is, f_i^2(a_i x(t)) ≤ f_i(a_i x(t)) a_i x(t).

For any diagonal positive definite matrix D_2 = diag{d_21, d_22, …, d_2n}, we have

0 ≤ 2f^T(Ax(t)) D_2 Ax(t) − 2f^T(Ax(t)) D_2 f(Ax(t)).   (4.10)

From (4.4), it is easy to obtain

f_i^2(a_i x(t)) ≤ (a_i x(t))^2.

So, for any diagonal positive definite matrix D_3 = diag{d_31, d_32, …, d_3n},

0 ≤ x^T(t) A^T D_3 Ax(t) − f^T(Ax(t)) D_3 f(Ax(t)).   (4.11)

Combining (4.9) with (4.10)-(4.11) yields

D^α V(t, x) ≤ x^T(t)[−(Q + A^T D_1 A)B − B(Q + A^T D_1 A) + A^T D_3 A]x(t) + 2x^T(t)(Q + A^T D_1 A + B A^T D_1 + A^T D_2) f(Ax(t)) + f^T(Ax(t))(−2D_2 − D_3 − D_1 A − A^T D_1) f(Ax(t))

= δ^T(t) Ω δ(t),   (4.12)

where δ(t) = [x^T(t), f^T(Ax(t))]^T.

Therefore, if Ω < 0, system (4.3) is globally Mittag-Leffler stable by Lemma 2.2.6. The proof is completed.

CHAPTER 5

NUMERICAL EXAMPLES

In this section, numerical examples are provided to illustrate the effectiveness of the theorems proposed in this research work.

EXAMPLE 5.1

Consider the discussed neural network with successive time delay

ẏ(t) = −Cy(t) + f(Ay(t)) + g(By(t − d_1(t) − d_2(t))), t ≥ 0,
y_i(0) = y_i0, i = 1, 2, …, n,   (3.1)

with parameters

A = [−1.5, 0; 0, −1.5],   B = [1.6, −2.5; −2.5, 1.6],   C = [1.5, 1; 1, 1.5],

and setting d_1(t) + d_2(t) = 0.8. Applying Theorem 3.2.1 via the MATLAB LMI Toolbox, we obtain a feasible solution

Q = [6.1204, 0.1849; 0.1849, 6.5421].

Hence the considered neural network with successive time delay is asymptotically stable.
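For readers without the MATLAB LMI Toolbox, the same feasibility problem can be posed in Python with CVXPY (an assumed substitute toolchain, offered only as a sketch; ε is a small margin used to enforce strict inequalities):

import numpy as np
import cvxpy as cp

A = np.array([[-1.5, 0.0], [0.0, -1.5]])
B = np.array([[1.6, -2.5], [-2.5, 1.6]])
C = np.array([[1.5, 1.0], [1.0, 1.5]])
n = 2

Q = cp.Variable((n, n), symmetric=True)
d1, d2, d3 = (cp.Variable(n, pos=True) for _ in range(3))
D1, D2, D3 = cp.diag(d1), cp.diag(d2), cp.diag(d3)

S = Q + A.T @ D1 @ A
phi11 = -S @ C - C @ S + A.T @ D3 @ A
phi12 = S + C @ A.T @ D1 + A.T @ D2
phi22 = -2 * D2 - D3 - D1 @ A - A.T @ D1
mu = cp.bmat([[phi11, phi12], [phi12.T, phi22]])

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [Q >> eps * np.eye(n), mu << -eps * np.eye(2 * n)])
prob.solve()              # requires an SDP solver, e.g. the bundled SCS
print(prob.status)        # 'optimal' confirms feasibility of the LMI
print(Q.value)            # a feasible Q (not necessarily the one reported above)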

EXAMPLE 5.2

Consider the following two-dimensional system

D^α y_1(t) = P_K(−6.4993 y_1(t) − 12.0275 y_2(t) − 6) − 7.0214 y_1(t),
D^α y_2(t) = P_K(−0.6867 y_1(t) − 5.6614 y_2(t) + 6) − 7.4367 y_2(t),   (5.1)

where α = 0.8 and K = {y = [y_1, y_2]^T : −2 ≤ y_i ≤ 2, i = 1, 2}.

Since all conditions in Theorem 4.2.1 are satisfied, system (5.1) is globally Mittag-Leffler stable. By using the LMI Toolbox of MATLAB, a feasible solution of the LMI in Theorem 4.2.1 is given as follows:

Q = [121.0774, 12.8225; 14.4221, 135.7928],
D_1 = [0.8517, 0; 0, 2.7326],
D_2 = [10.3981, 0; 0, 11.2321],
D_3 = [12.2223, 0; 0, 18.4125].

By this method, we can also obtain the equilibrium point y* of system (5.1). The state trajectories of system (5.1) show that the solution with the given initial value condition is asymptotically stable.
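Absent the original simulation code, trajectories of (5.1) can be reproduced approximately with an explicit Grünwald-Letnikov time-stepping scheme. Everything below (the scheme, the step size, and the initial value) is our assumption, offered only as a rough sketch:

import numpy as np

alpha, h, steps = 0.8, 0.01, 2000
PK = lambda z: np.clip(z, -2.0, 2.0)    # projection onto K, componentwise

def F(y):
    # Right-hand side of system (5.1)
    y1, y2 = y
    return np.array([PK(-6.4993*y1 - 12.0275*y2 - 6.0) - 7.0214*y1,
                     PK(-0.6867*y1 - 5.6614*y2 + 6.0) - 7.4367*y2])

# Grunwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j), by recurrence.
c = np.empty(steps + 1)
c[0] = 1.0
for j in range(1, steps + 1):
    c[j] = (1.0 - (alpha + 1.0) / j) * c[j - 1]

y0 = np.array([1.0, -1.0])              # an arbitrary initial value (assumed)
Y = [y0]
for k in range(1, steps + 1):
    memory = sum(c[j] * (Y[k - j] - y0) for j in range(1, k + 1))
    Y.append(y0 - memory + h**alpha * F(Y[k - 1]))
print(Y[-1])                            # expected to settle near the equilibrium y*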

CHAPTER 6

CONCLUSION

This research work mainly investigates the Mittag-Leffler stability of a class of fractional-order static neural networks with successive time delay. Some convex integral terms are introduced into the Lyapunov functions, and some novel LMI-based global Mittag-Leffler stability conditions for fractional-order projection neural networks (FPNNs) are presented. The numerical simulations of an illustrative example further confirm the theoretical results. In the future, we shall consider the dissipativity analysis of fractional-order projection neural networks with successive delay.

BIBLIOGRAPHY

[1] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, "Linear Matrix Inequalities in System and Control Theory", Stanford University, 1994.

[2] C. Scherer, S. Weiland, "Linear Matrix Inequalities in Control", Stuttgart University, 2015.

[3] S.M. Abedi Pahnehkolaei, A. Alfi, J.A.T. Machado, "Dynamic stability analysis of fractional order leaky integrator echo state neural networks", Commun. Nonlinear Sci. Numer. Simul. 47, 328-333, 2017.

[4] N. Aguila-Camacho, M. Duarte-Mermoud, J. Gallegos, "Lyapunov functions for fractional order systems", Commun. Nonlinear Sci. Numer. Simul. 19, 2951-2957, 2014.

[5] W. Chen, H. Dai, Y. Song, Z. Zhang, "Convex Lyapunov functions for stability analysis of fractional order systems", IET Control Theory Appl. 11, 1070-1074, 2017.

[6] K. Gu, V.L. Kharitonov, J. Chen, "Stability of Time-Delay Systems", Birkhäuser, Boston, 2003.

[7] K. Diethelm, "The Analysis of Fractional Differential Equations", Springer, New York, pp. 195-211, 2010.

[8] T.L. Friesz, D.H. Bernstein, N.J. Mehta, R.L. Tobin, S. Ganjlizadeh, "Day-to-day dynamic network disequilibria and idealized traveler information systems", Oper. Res. 42, 1120-1136, 1994.

[9] A. Zhang, Z.D. Xu, D. Liu, "New stability criteria for neural systems with interval time-varying delays and nonlinear perturbations", in Proceedings of the 2011 Chinese Control and Decision Conference (CCDC 2011), pp. 2995-2999, May 2011.

[10] B. Huang, G. Hui, D. Gong, Z. Wang, X. Meng, "A projection neural network with mixed delays for solving linear variational inequality", Neurocomputing 125, 28-32, 2014.

[11] Y. Li, Y. Chen, I. Podlubny, "Mittag-Leffler stability of fractional order nonlinear dynamic systems", Automatica 45, 1965-1969, 2009.

[12] Z. Wu, Y. Zou, "Global fractional-order projective dynamical systems", Commun. Nonlinear Sci. Numer. Simul. 19, 2811-2819, 2014.

[13] Z. Wu, Y. Zou, N. Huang, "A system of fractional-order interval projection neural networks", J. Comput. Appl. Math. 294, 389-402, 2016.

[14] Z. Wu, J. Li, N. Huang, "A new system of global fractional-order interval implicit projection neural networks", Neurocomputing 282, 111-121, 2018.

[15] P. Li, J. Cao, "Stability in static delayed neural networks: a nonlinear measure approach", Neurocomputing 69, 13-15, 1776-1781, 2006.

[16] J. Liang, J. Cao, "A based-on LMI stability criterion for delayed recurrent neural networks", Chaos Solitons Fractals 28, 1, 154-160, 2006.

[17] K. Gu, V.L. Kharitonov, J. Chen, "Stability of Time-Delay Systems", Birkhäuser, Boston, MA, 2003.

[18] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, "Linear Matrix Inequalities in System and Control Theory", SIAM, Philadelphia, PA, 1994.

[19] D. Baleanu, A.N. Ranjbar, S.J. Sadati, H. Delavari, "Lyapunov-Krasovskii stability theorem for fractional systems with delay", Rom. J. Phys. 56, 5-6, 2011.

[20] L. Chen, Y. Chai, R. Wu, J. Yang, "Stability and stabilization of a class of nonlinear fractional-order systems with Caputo derivative", IEEE Trans. Circuits Syst. 59, 602-606, 2012.

[21] L. Chen, Y. Chai, R. Wu, T. Ma, H. Zhai, "Dynamic analysis of a class of fractional-order neural networks with delay", Neurocomputing 111, 190-194, 2013.

[22] J. Di, Y. He, C. Zhang, M. Wu, "Novel stability criteria for recurrent neural networks with time-varying delay", Neurocomputing 138, 383-391, 2014.

[23] Y. Yang, Y. He, Y. Wang, M. Wu, "Stability analysis of fractional-order neural networks: an LMI approach", Neurocomputing 285, 82-93, 2018.

[24] S. Arik, "An analysis of global asymptotic stability of delayed cellular neural networks", IEEE Trans. Neural Networks 13, 5, 1239-1242, 2002.

[25] Y. He, G.P. Liu, D. Rees, "New delay-dependent stability criteria for neural networks with time-varying delay", IEEE Trans. Neural Networks 18, 1, 310-314, 2007.

[26] Y. He, M. Wu, J.H. She, G.P. Liu, "An improved global asymptotic stability criterion for delayed cellular neural networks", IEEE Trans. Neural Networks 17, 1, 250-252, 2006.

[27] C.C. Hua, C.N. Long, X.P. Guan, "New results on stability analysis of neural networks with time-varying delays", Phys. Lett. A 352, 335-340, 2006.

[28] G.J. Yu, C.Y. Lu, S.H. Tsai, B.D. Liu, "Stability of cellular neural networks with time-varying delay", IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 50, 5, 667-679, 2003.

[29] T. Chen, L. Rong, "Delay-independent stability analysis of Cohen-Grossberg neural networks", Phys. Lett. A 317, 436-449, 2003.

[30] M. Duarte-Mermoud, N. Aguila-Camacho, J. Gallegos, R. Castro-Linares, "Using general quadratic Lyapunov functions to prove Lyapunov uniform stability for fractional order systems", Commun. Nonlinear Sci. Numer. Simul. 22, 650-659, 2015.

[31] S. Javad, S. Effati, M. Pakdaman, "A neural network approach for solving a class of fractional optimal control problems", Neural Process. Lett. 45, 1-16, 2016.

[32] X. Zhang, Q. Han, "New Lyapunov-Krasovskii functionals for global asymptotic stability of delayed neural networks", IEEE Trans. Neural Netw. 20, 533-539, 2009.