Nonlinear Dyn (2011)
DOI 10.1007/s11071-011-9988-3

ORIGINAL PAPER

Passivity analysis for discrete-time switched neural networks with various activation functions

D. Zhang · L. Yu

Received: 5 October 2010 / Accepted: 10 February 2011 / Published online: 3 March 2011
© Springer Science+Business Media B.V. 2011

Abstract  This paper is concerned with the passivity analysis for a class of discrete-time switched neural networks with various activation functions and mixed time delays. The mixed time delays under consideration include a time-varying discrete delay and a bounded distributed delay. By using the average dwell time approach and the discontinuous piecewise Lyapunov function technique, a novel delay-dependent sufficient condition for exponential stability of the switched neural networks with passivity is derived in terms of a set of linear matrix inequalities (LMIs). The obtained condition depends not only on the discrete delay bound but also on the distributed delay bound. A numerical example is given to demonstrate the effectiveness of the proposed result.

Keywords  Passivity · Switched neural networks · Mixed time delays · Delay-dependent · Average dwell time

D. Zhang · L. Yu
Department of Automation, Zhejiang Provincial United Key Laboratory of Embedded Systems, Zhejiang University of Technology, Hangzhou 310032, P.R. China
e-mail: [email protected]

1 Introduction

In the past two decades, neural networks have received considerable research attention, since they have found extensive application in various areas such as signal processing, pattern recognition, static image processing, associative memory, and combinatorial optimization [1, 2]. The practical applicability of neural networks depends on the existence and stability of the equilibrium point of the neural networks. For instance, if a neural network is employed to solve some optimization problems, it is highly desirable for the neural network to have a unique globally stable equilibrium. Hence, the stability analysis of neural networks has received much attention, and various stability conditions have been obtained.

Time delays often occur in various neural networks due to the finite switching speed of amplifiers or the finite speed of information processing, and they may cause undesirable dynamic network behaviors such as oscillation and instability. According to the way they occur, time delays can generally be classified into two types: discrete delays and distributed delays. Therefore, there has been a growing research interest in the stability analysis of neural networks with various time delays; see, e.g., [3–9] and the references therein.

On the other hand, in many complex networks the connection topology may change frequently. Due to link failures or the creation of new links in the networks, switchings between different topologies are inevitable [10]. To describe the switching phenomenon in neural networks, the so-called switched neural networks have been proposed, and the stability problem has been considered in [11–13]. Nevertheless, most of these results focused on the stability of switched
neural networks under an arbitrary switching rule by using the common Lyapunov function method [14]. It is noted that most switched systems do not possess a common Lyapunov function, yet they may still be stable under certain switching laws. The average dwell time technique has been recognized as an effective tool for choosing such switching laws [14, 15]. Although the average dwell time approach has been widely used for switched linear systems (see [14, 15]), little attention has been paid to the analysis of switched neural networks with average dwell time switching. In this context, only two works are found. For example, the exponential stability of switched neural networks with time-varying discrete delay was considered in [16], where the delay-partitioning method is employed to reduce the conservatism of the delay-dependent stability condition. And in [17], the stability of switched Cohen–Grossberg neural networks with discrete and distributed time delays was approached via the average dwell time approach, and some new stability conditions were obtained in terms of LMIs. It is worth pointing out that the aforementioned results [16, 17] are in the continuous-time setting. In reality, however, discrete-time systems become more important than their continuous-time counterparts when implementing neural networks in a digital way. In order to investigate the dynamical characteristics with respect to digital signal transmission, it is usually essential to formulate the discrete-time analog. However, so far, no synthesis result has been reported on discrete-time switched neural networks with time delays. This partly motivates the present study.

A relevant issue related to the stability of nonlinear systems and neural networks is passivity [18]. Due to its theoretical importance, the passivity problem for neural networks with time delays has attracted increasing attention; see, for example, [19–23]. In [21], delay-dependent passivity conditions were proposed for uncertain neural networks with time-varying discrete delay. In [23], the authors provided some delay-dependent passivity conditions for uncertain discrete-time neural networks with mixed time delays. It should be noted that most of the passivity results are obtained under the assumption that all the neuron activation functions are the same. To the best of the authors' knowledge, no result has been reported on the passivity analysis for switched neural networks with various activation functions and mixed time delays.

Motivated by the above discussion, in this paper we are concerned with the passivity analysis for discrete-time switched neural networks with various activation functions and mixed time delays. The discrete-time switched neural network with time-varying discrete delay and bounded distributed delay is first proposed. Based on the average dwell time approach and the discontinuous piecewise Lyapunov function method, a novel passivity condition is derived in terms of LMIs. The obtained condition is both discrete-delay-dependent and distributed-delay-dependent, which can reduce some conservatism. A numerical example is provided to illustrate the effectiveness of the proposed method.

Notations  The notation used throughout the paper is fairly standard. We use W^T, W^{−1}, λ(W), Tr(W) and ‖W‖ to denote, respectively, the transpose, the inverse, the eigenvalues, the trace, and the induced norm of any square matrix W. We use W > 0 to denote a positive-definite matrix W, with λmin(W) and λmax(W) being the minimum and maximum eigenvalues of W, and I to denote the identity matrix with appropriate dimension. We use diag{. . .} to describe a diagonal matrix. R^n denotes the n-dimensional Euclidean space, and R^{m×n} is the set of all m × n real matrices. The symbol "∗" is used in some matrix expressions to represent the symmetric terms.

2 Problem formulation

We propose the following model to represent a discrete-time neural network with mixed time delays:

x(k + 1) = A x(k) + B F(x(k)) + C G(x(k − d(k))) + D Σ_{m=1}^{π} μm H(x(k − m)) + u(k)
y(k) = S(x(k))      (1)

where x(k) = [x1(k), x2(k), . . . , xn(k)]^T ∈ R^n; A = diag{a1, a2, . . . , an} is the state feedback coefficient matrix; and the n × n matrices B = [bij]_{n×n}, C = [cij]_{n×n} and D = [dij]_{n×n} are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix. The positive integer d(k) denotes the time-varying discrete delay satisfying

0 ≤ d(k) ≤ d      (2)
where d is a known positive integer. The positive integer π is the upper bound of the distributed delays. In (1),

F(x(k)) = [f1(x1(k)), f2(x2(k)), . . . , fn(xn(k))]^T
G(x(k)) = [g1(x1(k)), g2(x2(k)), . . . , gn(xn(k))]^T
H(x(k)) = [h1(x1(k)), h2(x2(k)), . . . , hn(xn(k))]^T
S(x(k)) = [s1(x1(k)), s2(x2(k)), . . . , sn(xn(k))]^T

denote the neuron activation functions, and u(k) = [u1(k), u2(k), . . . , un(k)]^T is the input vector. The initial condition associated with model (1) is given by

x(θ) = ϕ(θ),  θ = k0 − τ, k0 − τ + 1, . . . , k0      (3)

where τ = max{d, π}.

Remark 1 The proposed model (1) includes the terms with the bounded distributed delays Σ_{m=1}^{π} μm H(x(k − m)). As can be seen, model (1) is different from the existing discrete-time neural networks with time-varying discrete delay and bounded distributed delays due to the term μm. Such a term is introduced from practical considerations. For example, neural networks often have a spatial extent due to the presence of a number of parallel pathways of a variety of axon sizes and lengths. Therefore, there may exist not only a distribution of propagation delays over a period of time, but also signal distortion in some cases, leading to the model (1). For μm = 1, the model means that the signal transmission only experiences propagation delays, while for μm ≠ 1 the signal transmission not only experiences propagation delays but also experiences distortion.

Remark 2 All the passivity conditions for the delayed neural networks in [19–23] assume the same activation function. To facilitate the design of neural networks, it is important to consider neural networks with various activation functions. Note that some papers have reported on the stability of neural networks with various activation functions; see, e.g., [24]. However, no passivity result for neural networks with various activation functions has been found in the literature.

Remark 3 The discrete-time distributed delay was first introduced in [25] for the synchronization of discrete-time complex networks. More recently, the passivity problem for discrete-time stochastic neural networks with infinite distributed delay was investigated in [23]. Different from the infinite distributed delay considered in [23], we consider the bounded distributed delay in this paper, and the delay bound π can thereby be used to achieve the mixed delay-dependent passivity condition.

A switched neural network consists of a set of neural networks and a switching rule. Each of the neural networks is regarded as an individual subsystem. We now describe the discrete-time switched neural networks as follows:

x(k + 1) = A_{σ(k)} x(k) + B_{σ(k)} F(x(k)) + C_{σ(k)} G(x(k − d(k))) + D_{σ(k)} Σ_{m=1}^{π} μm H(x(k − m)) + u(k)
y(k) = S(x(k))      (4)
x(θ) = φ(θ),  θ = k0 − τ, k0 − τ + 1, . . . , k0

where φ(θ) is the initial condition of the switched neural networks (4) and σ(k): [0, +∞) → Ω = {1, 2, . . . , l} is the switching signal. For a possible switching signal σ(k) = i, we denote A_i = A_{σ(k)}, B_i = B_{σ(k)}, C_i = C_{σ(k)}, D_i = D_{σ(k)}. Corresponding to the switching signal σ(k), we have the switching sequence {x_{k0}; (i0, k0), . . . , (it, kt), . . . | it ∈ Ω, t = 0, 1, . . .}, which means that the it-th subsystem is activated when k ∈ [kt, kt+1).

The following assumptions are needed in the development of our main result.

Assumption 1 [24] For j ∈ {1, 2, . . . , n}, the neuron activation functions fj(•), gj(•), hj(•) and sj(•) in (1) are continuous and bounded.

Assumption 2 [24] For j ∈ {1, 2, . . . , n}, the neuron activation functions in (4) satisfy

lj^− ≤ (fj(υ1) − fj(υ2)) / (υ1 − υ2) ≤ lj^+,  ∀υ1, υ2 ∈ R      (5)
vj^− ≤ (gj(υ1) − gj(υ2)) / (υ1 − υ2) ≤ vj^+,  ∀υ1, υ2 ∈ R      (6)
σj^− ≤ (hj(υ1) − hj(υ2)) / (υ1 − υ2) ≤ σj^+,  ∀υ1, υ2 ∈ R      (7)
ςj^− ≤ (sj(υ1) − sj(υ2)) / (υ1 − υ2) ≤ ςj^+,  ∀υ1, υ2 ∈ R      (8)

where lj^−, lj^+, vj^−, vj^+, σj^−, σj^+, ςj^−, ςj^+ are constants.
Assumption 3 The constant μm ≥ 0 satisfies the following convergence condition:

Σ_{m=1}^{π} μm < +∞  and  Σ_{m=1}^{π} m μm < +∞      (9)

For our development, we need the following definitions and lemmas.

the series concerned are convergent, the following inequality holds:

(Σ_{j=1}^{π} aj xj)^T X (Σ_{j=1}^{π} aj xj) ≤ (Σ_{j=1}^{π} aj) Σ_{j=1}^{π} aj xj^T X xj      (13)

with

Φ11 = −(1 − α)Pi + Qi − ΛL1,  Φ23 = (1 − α)^d Zi
Φ25 = TΥ2,  Φ33 = −(1 − α)^d (Qi + Zi)
Σ1 = diag{σ1^+ σ1^−, σ2^+ σ2^−, . . . , σn^+ σn^−}
Σ2 = diag{(σ1^+ + σ1^−)/2, (σ2^+ + σ2^−)/2, . . . , (σn^+ + σn^−)/2}
Γ1 = diag{ς1^+ ς1^−, ς2^+ ς2^−, . . . , ςn^+ ςn^−}
Γ2 = diag{(ς1^+ + ς1^−)/2, (ς2^+ + ς2^−)/2, . . . , (ςn^+ + ςn^−)/2}
μ̄m = Σ_{m=1}^{π} μm

Proof We first consider the exponential stability of the neural networks (4) with u(k) = 0. To do this, we consider the Lyapunov functional in (17), where

V1i(k) = x^T(k) Pi x(k)
V2i(k) = Σ_{s=k−d}^{k−1} (1 − α)^{k−s−1} x^T(s) Qi x(s)
V4i(k) = Σ_{m=1}^{π} μm Σ_{s=k−m}^{k−1} (1 − α)^{k−s−1} H^T(x(s)) Ri H(x(s))

Then

ΔV3i(k) + αV3i(k) ≤ d² η^T(k) Zi η(k) − d Σ_{m=k−d}^{k−1} η^T(m) Zi η(m)      (20)

ΔV4i(k) + αV4i(k) = μ̄m H^T(x(k)) Ri H(x(k)) − Σ_{m=1}^{π} μm (1 − α)^m H^T(x(k − m)) Ri H(x(k − m))      (21)

It follows from Lemma 1 that

−d Σ_{m=k−d}^{k−1} η^T(m) Zi η(m)
≤ −d(k) Σ_{m=k−d(k)}^{k−1} η^T(m) Zi η(m)
  − (d − d(k)) Σ_{m=k−d}^{k−d(k)−1} η^T(m) Zi η(m)
≤ −[x(k) − x(k − d(k))]^T Zi [x(k) − x(k − d(k))]
  − [x(k − d(k)) − x(k − d)]^T Zi [x(k − d(k)) − x(k − d)]      (22)

Hence,

ΔV3i(k) + αV3i(k)
≤ d² η^T(k) Zi η(k)
  − [x(k) − x(k − d(k))]^T Zi [x(k) − x(k − d(k))]
  − [x(k − d(k)) − x(k − d)]^T Zi [x(k − d(k)) − x(k − d)]      (23)

It also follows from Lemma 2 that

− Σ_{m=1}^{π} μm (1 − α)^m H^T(x(k − m)) Ri H(x(k − m))
≤ −(1 − α)^π Σ_{m=1}^{π} μm H^T(x(k − m)) Ri H(x(k − m))
≤ −((1 − α)^π / μ̄m) (Σ_{m=1}^{π} μm H(x(k − m)))^T Ri (Σ_{m=1}^{π} μm H(x(k − m)))      (24)

Similar to [23, 24], we have from (5) that

(fi(xi(k)) − li^+ xi(k)) (fi(xi(k)) − li^− xi(k)) ≤ 0,  i = 1, 2, . . . , n      (25)

which is equivalent to

[x(k); F(x(k))]^T [li^+ li^− ei ei^T, −((li^+ + li^−)/2) ei ei^T; −((li^+ + li^−)/2) ei ei^T, ei ei^T] [x(k); F(x(k))] ≤ 0,  i = 1, 2, . . . , n      (26)

where ei denotes the unit column vector having a "1" element in its ith row and zeros elsewhere.

Consequently,

Σ_{i=1}^{n} λi [x(k); F(x(k))]^T [li^+ li^− ei ei^T, −((li^+ + li^−)/2) ei ei^T; −((li^+ + li^−)/2) ei ei^T, ei ei^T] [x(k); F(x(k))] ≤ 0      (27)

namely,

χ1(k) = [x(k); F(x(k))]^T [ΛL1, −ΛL2; −ΛL2, Λ] [x(k); F(x(k))] ≤ 0      (28)

Similarly, from (6)–(8) we have

χ2(k) = [x(k − d(k)); G(x(k − d(k)))]^T [TΥ1, −TΥ2; −TΥ2, T] [x(k − d(k)); G(x(k − d(k)))] ≤ 0      (29)

χ3(k) = [x(k); H(x(k))]^T [Σ1, −Σ2; −Σ2, · ] [x(k); H(x(k))] ≤ 0      (30)

χ4(k) = [x(k); S(x(k))]^T [MΓ1, −MΓ2; −MΓ2, M] [x(k); S(x(k))] ≤ 0      (31)

Therefore, we have

ΔVi(k) + αVi(k) ≤ ΔVi(k) + αVi(k) − Σ_{ν=1}^{3} χν(k)
≤ ξ̄^T(k) (Φ̄1 + d² Φ̄2^T Zi Φ̄2 + Φ̄3^T Pi Φ̄3) ξ̄(k)      (32)

where

ξ̄(k) = [x^T(k)  x^T(k − d(k))  x^T(k − d)  F^T(x(k))  G^T(x(k − d(k)))  H^T(x(k))  (Σ_{m=1}^{π} μm H(x(k − m)))^T]^T
Φ̄1 = [Φ̄ij]_{7×7},  Φ̄2 = [Ai − I  0  0  Bi  Ci  0  Di],  Φ̄3 = [Ai  0  0  Bi  Ci  0  Di]
with

Φ̄11 = −(1 − α)Pi + Qi − ΛL1 − Σ1 − (1 − α)^d Zi
Φ̄12 = (1 − α)^d Zi,  Φ̄14 = ΛL2
Φ̄16 = Σ2,  Φ̄22 = −2(1 − α)^d Zi − TΥ1
Φ̄23 = (1 − α)^d Zi,  Φ̄25 = TΥ2
Φ̄33 = −(1 − α)^d (Qi + Zi),  Φ̄44 = −Λ
Φ̄55 = −T,  Φ̄66 = μ̄m Ri −
Φ̄77 = −((1 − α)^π / μ̄m) Ri

By using the Schur complement, (14) guarantees that

Vσ(k)(k) ≤ (1 − α)^{k−kt} Vσ(kt)(kt)
≤ (1 − α)^{k−kt} μ Vσ(kt−1)(kt)
≤ · · · ≤ (1 − α)^{k−k0} μ^{(k−k0)/Ta} Vσ(k0)(k0)
≤ ((1 − α)μ^{1/Ta})^{k−k0} Vσ(k0)(k0)      (35)

In addition, for the constructed Lyapunov functional, it follows from (35) that

β1 ‖x(k)‖² ≤ Vσ(k)(k)
≤ ((1 − α)μ^{1/Ta})^{k−k0} Vσ(k0)(k0)
≤ ((1 − α)μ^{1/Ta})^{k−k0} β2 ‖φ‖_L²      (36)

which yields ‖x(k)‖² ≤ (β2/β1) χ^{k−k0} ‖φ‖_L², where

+ σ² Σ_{m=1}^{π} m μm max_{i∈Ω} λmax(Ri)

with χ = (1 − α)μ^{1/Ta} and σ = max_{1≤i≤n} {σi^−, σi^+}.

Therefore, from condition (16), one can readily obtain χ < 1. According to Definition 1, system (4) is exponentially stable with u(k) = 0.

To establish the passivity of system (4), we consider the Lyapunov functional as in (17), and then we have

ΔVi(k) + αVi(k) − 2y^T(k)u(k) − γ u^T(k)u(k)
≤ ΔVi(k) + αVi(k) − 2y^T(k)u(k) − γ u^T(k)u(k) − Σ_{ν=1}^{4} χν(k)
≤ ξ^T(k) (Φ1 + d² Φ2^T Zi Φ2 + Φ3^T Pi Φ3) ξ(k)      (37)

By using the Schur complement, (14) guarantees that

ΔVi(k) + αVi(k) − 2y^T(k)u(k) − γ u^T(k)u(k) < 0      (38)

Since α > 0 and Vi(k) ≥ 0, one obtains

ΔVi(k) − 2y^T(k)u(k) − γ u^T(k)u(k) < ΔVi(k) + αVi(k) − 2y^T(k)u(k) − γ u^T(k)u(k)      (39)

Finally,

2 Σ_{j=0}^{κ} y^T(j)u(j) ≥ Σ_{j=0}^{κ} ΔV(j) − γ Σ_{j=0}^{κ} u^T(j)u(j)

Σ_{j=0}^{κ} ΔV(j) = V(κ + 1) − V(0) ≥ 0      (41)
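As a quick numerical aside, added here as a sketch rather than part of the original proof, the decay-rate quantity χ = (1 − α)μ^{1/Ta} and the minimum average dwell time implied by χ < 1 can be evaluated for the values α = 0.05 and μ = 1.05 used in Sect. 4; the computed threshold and decay rate match the Ta* = 0.9512 and λ = 0.9975 reported there.

```python
import math

alpha, mu = 0.05, 1.05  # values used in the numerical example (Sect. 4)

# chi = (1 - alpha) * mu**(1/Ta) < 1  <=>  Ta > ln(mu) / (-ln(1 - alpha))
Ta_star = math.log(mu) / (-math.log(1.0 - alpha))
print(round(Ta_star, 4))  # -> 0.9512, the minimum average dwell time Ta*

# decay rate chi for the admissible choice Ta = 1
chi = (1.0 - alpha) * mu ** (1.0 / 1.0)
print(round(chi, 4))      # -> 0.9975, matching the reported decay rate
```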
Then we can obtain

2 Σ_{j=0}^{κ} y^T(j)u(j) ≥ −γ Σ_{j=0}^{κ} u^T(j)u(j)      (42)

for all κ ≥ 0. This completes the proof.

Remark 4 In the derivation of the main result, the discrete-time Jensen's inequality (Lemma 1) is used, which may introduce some conservatism. One can use other techniques, e.g., the free-weighting-matrix (FWM) method [28], to deal with the time-varying delay d(k) and obtain a less conservative result than Theorem 1. However, compared with an FWM-based result, the Jensen's-inequality-based condition is more computationally efficient, since no slack variables are introduced for computation. By solving the LMIs (14) and (15) via existing convex optimization algorithms in the Matlab LMI Toolbox, one can easily obtain the upper bounds of the time delays that keep the passivity of system (4).

4 Numerical example

In this section, a numerical example is given to illustrate the effectiveness of the proposed method.

Consider the switched neural networks (4) with two modes, where the parameters of each mode are given as follows:

Mode 1:

A1 = [0.2  0  0;  0  0.3  0;  0  0  0.4]
B1 = [−0.03  0.01  0.02;  0.02  0.02  0;  0  −0.01  −0.04]
C1 = [0.04  0.02  −0.01;  0  0.02  0.03;  −0.01  0  0.02]
D1 = [−0.02  0.01  0;  0.02  0.03  0.02;  0  −0.2  0.2]

Mode 2:

A2 = [0.3  0  0;  0  0.3  0;  0  0  0.4]
B2 = [−0.03  0.01  0.02;  0.02  0  0.02;  0.01  0.01  −0.04]
C2 = [0.04  0.02  −0.1;  0  0.03  0.03;  0  −0.01  0.02]
D2 = [−0.03  0.02  0;  0.02  0.02  0.02;  0  −0.2  0.2]

The activation functions are assumed to be

F(x(k)) = [tanh(0.8x1(k)), tanh(0.6x2(k)), tanh(−0.6x3(k))]^T
G(x(k)) = [tanh(0.8x1(k)), tanh(0.8x2(k)), tanh(0.4x3(k))]^T
H(x(k)) = [tanh(0.6x1(k)), tanh(0.6x2(k)), tanh(−0.4x3(k))]^T
S(x(k)) = [tanh(0.4x1(k)), tanh(0.4x2(k)), tanh(0.4x3(k))]^T      (43)

We suppose Σ_{m=1}^{π} μm = Σ_{m=1}^{2} (0.9)^m, i.e., the upper bound of the distributed delay is π = 2. Given these parameters, it can be verified that μ̄m = 1.71, L1 = Υ1 = Σ1 = Γ1 = diag{0, 0, 0}, and L2 = diag{0.4, 0.3, −0.3}, Υ2 = diag{0.4, 0.4, 0.2}, Σ2 = diag{0.3, 0.3, −0.2}, Γ2 = diag{0.2, 0.2, 0.2}.

Choosing α = 0.05 and μ = 1.05, the discrete delay bound that keeps the passivity of the neural networks (4) can be obtained as d = 10.

On the other hand, given the discrete delay bound d = 8, the upper bound of the distributed delay can be obtained as π = 6. For the given α = 0.05 and μ = 1.05, the average dwell time satisfies Ta > Ta* = 0.9512. Therefore, taking Ta = 1, the decay rate of system (4) is λ = 0.9975 < 1.

5 Conclusion

In this paper, the passivity analysis for discrete-time switched neural networks with various activation functions and mixed time delays has been investigated. With the help of the average dwell time approach and
the discrete-time Jensen's inequality, a novel mixed delay-dependent passivity criterion has been established. The passivity condition is converted into a feasibility problem for a set of LMIs, which can easily be checked by utilizing standard numerical software. A numerical example has been given to show the effectiveness of the proposed method.

With the discrete-time switched neural networks (4) and the provided analysis method, the state estimation and the stabilization of discrete-time switched neural networks with mixed time delays are interesting topics worth further investigation.

Acknowledgements The authors would like to thank the anonymous reviewers for their valuable suggestions to improve the quality of this paper. This work was supported by the National Natural Science Funds of China under Grants 60834003, 60974017 and 61074039, and by the Natural Science Foundation of Zhejiang Province, P.R. China under Grant Y1100845.

References

1. Cichoki, A., Unbehauen, R.: Neural Networks for Optimization and Signal Processing. Wiley, Chichester (1993)
2. Haykin, S.: Neural Networks: A Comprehensive Foundation. Prentice Hall, New York (1998)
3. Wang, Z., Liu, Y., Liu, X.: On global asymptotic stability of neural networks with discrete and distributed delays. Phys. Lett. A 345(4–6), 299–308 (2005)
4. He, Y., Liu, G., Rees, D., Wu, M.: Stability analysis for neural networks with time-varying interval delay. IEEE Trans. Neural Netw. 18, 1850–1854 (2007)
5. Li, H., Chen, B., Zhou, Q., Fang, S.: Robust exponential stability for uncertain stochastic neural networks with discrete and distributed time-varying delays. Phys. Lett. A 372(19), 3385–3394 (2008)
6. Wu, Z., Su, H., Chu, J., Zhou, W.: New results on robust exponential stability for discrete recurrent neural networks with time-varying delays. Neurocomputing 72, 3337–3342 (2009)
7. Li, H., Chen, B., Zhou, Q., Qian, W.: Robust stability for uncertain delayed fuzzy Hopfield neural networks with Markovian jumping parameters. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 39(1), 94–102 (2009)
8. Wu, Z., Su, H., Chu, J., Zhou, W.: Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays. IEEE Trans. Neural Netw. 21(4), 692–697 (2010)
9. Zeng, Z., Huang, T., Zheng, W.: Multistability of recurrent neural networks with time-varying delays and the piecewise linear activation function. IEEE Trans. Neural Netw. 21(8), 1371–1378 (2010)
10. Zhao, J., Hill, D.J., Liu, T.: Synchronization of complex dynamical networks with switching topology: a switched system point of view. Automatica 45(11), 2502–2511 (2009)
11. Yuan, K., Cao, J., Li, H.: Robust stability of switched Cohen–Grossberg neural networks with mixed time-varying delays. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 36(6), 1356–1363 (2006)
12. Li, P., Cao, J.: Global stability in switched recurrent neural networks with time-varying delay via nonlinear measure. Nonlinear Dyn. 49(1–2), 295–305 (2007)
13. Ahn, C.K.: An H∞ approach to stability analysis of switched Hopfield neural networks with time-delay. Nonlinear Dyn. 60(4), 703–711 (2010)
14. Liberzon, D.: Switching in Systems and Control. Birkhäuser, Basel (2003)
15. Qiu, J., Feng, G., Yang, J.: New results on robust energy-to-peak filtering for discrete-time switched polytopic linear systems with time-varying delay. IET Control Theory Appl. 2(9), 795–806 (2008)
16. Wu, L., Feng, Z., Zheng, W.: Exponential stability analysis for delayed neural networks with switching parameters: Average dwell time approach. IEEE Trans. Neural Netw. 21(9), 1396–1407 (2010)
17. Lian, J., Zhang, K.: Exponential stability for switched Cohen–Grossberg neural networks with average dwell time. Nonlinear Dyn. 63(3), 331–343 (2011)
18. Yu, W.: Passivity analysis for dynamic multilayer neuro identifier. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 50(1), 173–178 (2003)
19. Park, J.H.: Further results on passivity analysis of delayed cellular neural networks. Chaos Solitons Fractals 34(5), 1546–1551 (2007)
20. Song, Q., Liang, J., Wang, Z.: Passivity analysis of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 72(7–9), 1782–1788 (2009)
21. Lu, C., Liao, C., Tsai, H.: Delay-range-dependent global robust passivity analysis of discrete-time uncertain recurrent neural networks with interval time-varying delay. Discrete Dyn. Nat. Soc. 2009, Article ID 430158, 14 pp. (2009)
22. Chen, Y., Wang, H., Xue, A., Lu, R.: Passivity analysis of stochastic time-delay neural networks. Nonlinear Dyn. 61(1–2), 71–82 (2010)
23. Li, H., Wang, C., Shi, P., Gao, H.: New passivity results for uncertain discrete-time stochastic neural networks with mixed time delays. Neurocomputing 73(16–18), 3291–3299 (2010)
24. Liu, Y., Wang, Z., Liu, X.: Asymptotic stability for neural networks with mixed time delays: the discrete-time case. Neural Netw. 22, 67–74 (2009)
25. Liu, Y., Wang, Z., Liang, J., Liu, X.: Synchronization and state estimation for discrete-time complex networks with distributed delays. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 38(5), 1314–1325 (2008)
26. Liu, Y., Wang, Z., Liang, J., Liu, X.: Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Trans. Neural Netw. 20(7), 1102–1116 (2009)
27. Jiang, X., Han, Q., Yu, X.: Stability criteria for linear discrete-time systems with interval-like time-varying delay. In: Proceedings of the American Control Conference, Portland, OR, USA, June 8–10, 2005
28. He, Y., Liu, G.P., Rees, D., Wu, M.: H∞ filtering for discrete-time systems with time-varying delay. Signal Process. 89, 275–282 (2009)