
Quantum Direct Communication Wiretapping

Piotr Zawadzki(✉)

Institute of Electronics, Silesian University of Technology,
Akademicka 16, 44-100 Gliwice, Poland
[email protected]

Abstract. The analyses of the ping-pong communication paradigm anticipated the possibility to undetectably wiretap some variants of quantum direct communication. The only instantiation of attacks of this type proposed so far is formulated for the qubit based version of the protocol and it implicitly assumes the existence of losses. The essential features of undetectable attack transformations are identified in the study and a new generic eavesdropping scheme is proposed. The scheme does not refer to the properties of the vacuum state, so it is fully consistent with the absence of losses assumption. It is formulated for a space of any dimension and it can be used to design a family of circuits that enable undetectable eavesdropping.

Keywords: Ping-Pong protocol · Quantum direct communication

1 Introduction
The ping-pong communication paradigm is frequently used in quantum cryptography to realize tasks impossible in the classical approach. The most notable areas of its application are quantum key distribution (QKD) and quantum direct communication (QDC), which aim at secure key agreement and confidential communication without encryption in open communication channels, respectively. The first applications of this quantum communication technique to QKD and QDC should be credited to Long et al. [1] and Deng et al. [2]. However, these first proposals assumed that the communicating parties possess long term quantum memories. Unfortunately, this strong requirement excluded their practical implementation. The first practically feasible QDC protocol that exploits the properties of the ping-pong communication paradigm applied to Einstein-Podolsky-Rosen (EPR) pairs has been proposed by Boström et al. [3, 4]. Although Boström's QDC protocol, frequently referred to as the ping-pong protocol, is "only" quasi secure in perfect channels [5], it can be used as an engine for unconditionally secure QKD. The privacy amplification step applied to the data received from the QDC core protocol can reduce the eavesdropper's knowledge of the final key to an arbitrarily small value provided that his information gain is less than the mutual information of the legitimate parties. Protocols of this type are referred to as deterministic QKD and some of them have been recently experimentally demonstrated [6, 7].

Due to its conceptual simplicity, the idea of the ping-pong communication paradigm is a subject of ongoing research – it is adapted to higher dimensional systems [8–11] and/or modified to enhance capacity via dense coding [12, 13]. Recently, Pavičić [14] proposed an attack that demonstrates the inability to double the capacity of the protocol via dense coding. The attack targets only the seminal version of the protocol utilizing qubits as information carriers. Custom control modes that detect the malevolent circuit have been proposed [15, 16]. However, it was not known whether the circuit can be generalized to higher dimensions. In this contribution we propose such a generalization.
The paper is organized as follows. In Sect. 2, we introduce notation and summarize the operation of Pavičić's attack. Section 3 presents the main contribution. In particular, it is shown that Pavičić's circuit can be described as a CNOT gate in the state space supplemented with the vacuum state. Then, on the basis of this observation, the attack is generalized to qudit based systems. The impact of the obtained results is summarized in Sect. 4.

2 Analysis
Personification rules are used to simplify the description of the protocol – Alice and Bob are the names of the legitimate parties while the malevolent eavesdropper is referred to as Eve. The communication protocol described below is the ping-pong paradigm variant analysed in [14]. Compared to the seminal version [3], it differs only in the encoding operation – the sender uses dense coding instead of phase flips and the remaining elements of the communication scenario are left intact.
The communication process is started by Bob. He creates the EPR pair

|Ψ−⟩ = (|0h⟩|1t⟩ − |1h⟩|0t⟩) / √2, (1)

and sends one of the qubits to Alice. The sent qubit is further referred to as the signal/travel particle. Alice encodes two classic bits µ, ν using a unitary transformation Aµ,ν of the form Aµ,ν = X^µ Z^ν, where X = |1⟩⟨0| + |0⟩⟨1| and Z = |0⟩⟨0| − |1⟩⟨1| are the bit-flip and phase-flip operations, respectively. After encoding, depending on the values of the information bits, the system state lands in one of the four EPR pairs

|ΨB⟩ = (|0h⟩|(1 ⊕ µ)t⟩ − (−1)^ν |1h⟩|(0 ⊕ µ)t⟩) / √2, (2)

where ⊕ denotes summation modulo 2. Alice sends the signal particle back to Bob, who detects the applied transformation by a collective measurement of both qubits.
Passive eavesdropping is impossible, but Eve can mount a man in the middle (MITM) attack. As a countermeasure, Alice and Bob have to verify whether Alice received a genuine qubit. In some randomly selected protocol cycles, instead of the encoding step, she measures the travel qubit and signals that fact to Bob. Bob also measures the possessed/home qubit. Now, the legitimate parties are in a position to verify the expected correlation of outcomes, which is preserved only if they share a genuine entangled state. They can use a public classical channel for this purpose. This way Alice and Bob can convince themselves that the quantum channel is not spoofed with a confidence approaching certainty, provided that they have executed a sufficient number of control cycles.
An incoherent attack is yet another way of active information interception. The qubit that travels forth and back between the legitimate parties is the subject of some quantum action Q introduced by Eve (Fig. 1). This malevolent activity can be described as a unitary transformation in the space extended with two additional qubits. Consequently, Eve can implement the most generic attack by using an ancilla system composed of two qubit registers.

Fig. 1. The incoherent attack.

The circuit P from Fig. 2 has been proposed by Pavičić [14] as the quantum action Q able to detect Alice's bit-flip actions. It is composed of two Hadamard gates followed by the controlled polarization beam splitter (CPBS), which is a generalization of the polarization beam splitter (PBS) concept. The PBS is a two port gate that swaps horizontally polarized photons |0x⟩ (|0y⟩) entering its input to the other port |0y⟩ (|0x⟩) on output, while vertically polarized ones |1x⟩ (|1y⟩) remain in their port |1x⟩ (|1y⟩), i.e.:

PBS|vx⟩|0y⟩ = |0x⟩|vy⟩, PBS|vx⟩|1y⟩ = |vx⟩|1y⟩, (3a)

PBS|0x⟩|vy⟩ = |vx⟩|0y⟩, PBS|1x⟩|vy⟩ = |1x⟩|vy⟩, (3b)

where |v⟩ denotes the vacuum state. The CPBS behaves as a normal PBS if the control qubit is set to |0t⟩. The roles of horizontal and vertical polarization are exchanged for the control qubit set to |1t⟩. The circuit P implements the following actions

Ptxy|0t⟩|χ0⟩ = |0t⟩|aE⟩, Ptxy|1t⟩|χ0⟩ = |1t⟩|dE⟩, (4a)


Ptxy|0t⟩|χ1⟩ = |0t⟩|dE⟩, Ptxy|1t⟩|χ1⟩ = |1t⟩|aE⟩, (4b)

where |χ0⟩ = |vx⟩|0y⟩, |χ1⟩ = |0x⟩|vy⟩ and

|aE⟩ = (|0x⟩|vy⟩ + |vx⟩|1y⟩) / √2, |dE⟩ = (|vx⟩|0y⟩ + |1x⟩|vy⟩) / √2.

Fig. 2. The P circuit from [14].

Eve's subsystem is initially decoupled, so without loss of generality it may be assumed that the system is in the state

|ψinit⟩ = |Ψ−⟩|χ0⟩. (5)

Under attack, the travel qubit, on its way to Alice, is entangled with the ancilla:

|ψhtE⟩ = (|0h⟩|1t⟩|dE⟩ − |1h⟩|0t⟩|aE⟩) / √2. (6)

It is clear from (6) that the attack introduces neither errors nor losses in control mode and the expected correlation of outcomes is preserved. Let us consider bit-flip encoding. It transforms the state of the system to

|ψbit⟩ = (Ih ⊗ Xt)|ψhtE⟩ = (|0h⟩|0t⟩|dE⟩ − |1h⟩|1t⟩|aE⟩) / √2. (7)

On its way back to Bob, the travel qubit is affected by the disentangling transformation Ptxy−1. It follows from (4) that

Ptxy−1|0t⟩|dE⟩ = |0t⟩|χ1⟩, Ptxy−1|1t⟩|aE⟩ = |1t⟩|χ1⟩,

so the resulting state takes the form

|φbit⟩ = Ptxy−1|ψbit⟩ = (|0h⟩|0t⟩ − |1h⟩|1t⟩) / √2 ⊗ |χ1⟩ = |Φ−⟩ ⊗ |χ1⟩. (8)
2
Bob's decoding is limited to the ht space. His part of the system looks as if there was no wiretapping device on the line and he indeed correctly detects Alice's bit-flip action. Eve's observations take place in the xy space. She observes that the state |χ0⟩ has been flipped to |χ1⟩, so she is able to infer the value of the bit µ from her observations. A similar analysis can be conducted for the phase flip encoding

|φphase⟩ = Ptxy−1(Ih ⊗ Zt)|ψhtE⟩ = −|Ψ+⟩ ⊗ |χ0⟩, (9)

but this time Eve observes no change in her ancilla and Bob also correctly decodes the phase flip operation. It is clear that the P circuit enables detection of the bit-flip operation without (a) disturbing the expected correlation in the control mode, or (b) introducing errors in transmission mode. In other words, it permits undetectable protocol wiretapping.

3 Results
An in-depth analysis of the P-circuit operation reveals that the properties described by expressions (4) determine the success of the attack: property (4a) provides entanglement undetectable by the control mode and property (4b) permits decoupling without disturbing information encoded by Alice. In fact, any map Q from Fig. 1 that satisfies

Q|0t⟩|χE⟩ → |0t⟩|αE⟩, Q|1t⟩|χE⟩ → |1t⟩|δE⟩, (10a)

Q|0t⟩|φE⟩ → |0t⟩|δE⟩, Q|1t⟩|φE⟩ → |1t⟩|αE⟩ (10b)

can be used to attack the protocol, provided that the probe states |χE⟩, |φE⟩ are perfectly distinguishable and |αE⟩ ≠ |δE⟩ are some ancilla states. The operation of the protocol under attack is then described by

Q−1 (Ih ⊗ It ⊗ IE) Q |Ψ−⟩|χE⟩ = |Ψ−⟩|χE⟩, (11a)
Q−1 (Ih ⊗ Xt ⊗ IE) Q |Ψ−⟩|χE⟩ = |Φ−⟩|φE⟩, (11b)
Q−1 (Ih ⊗ Zt ⊗ IE) Q |Ψ−⟩|χE⟩ = −|Ψ+⟩|χE⟩, (11c)
Q−1 (Ih ⊗ XtZt ⊗ IE) Q |Ψ−⟩|χE⟩ = −|Φ+⟩|φE⟩. (11d)

It follows from (11) that the registers used for communication are left untouched and decoupled, but Eve's ancilla state is flipped from |χE⟩ to |φE⟩ when Alice applies the bit-flip operation. In consequence, Eve can successfully decode half of the message content as long as the detection states |χE⟩, |φE⟩ are perfectly distinguishable. The map Q that satisfies (10) can be considered as a generalization of the P-circuit.
Simpler equivalents of the P-circuit can be found on the basis of the introduced generalization. Let us consider a map Q of the form

Q|0t⟩|0x⟩|0y⟩ → |0t⟩|0x⟩|0y⟩, Q|1t⟩|0x⟩|0y⟩ → |1t⟩|1x⟩|0y⟩, (12a)

Q|0t⟩|1x⟩|0y⟩ → |0t⟩|1x⟩|0y⟩, Q|1t⟩|1x⟩|0y⟩ → |1t⟩|0x⟩|0y⟩. (12b)

The map Q defined in (12) satisfies (10) in an obvious way. But the map from (12) acts as a CNOT gate with the travel qubit on its control input and the x register as a target. Such a version of Q is also practically feasible, as attacks involving probes entangled via the CNOT operation have already been proposed in the QKD context [17, 18]. As a result, both the CNOT gate and the P-circuit are equivalent in terms of provided information gain, detectability and practical feasibility. Consequently, in spite of the P-circuit's superficial otherness, there is no need to design control modes that address that circuit in a special way [16].
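The equivalence claimed above is easy to check numerically. The following sketch (an illustration added here, not code from [14] or [17, 18]) models Eve's probe as a single qubit initialized to |0⟩ and takes Q to be a CNOT controlled by the travel qubit; it verifies conditions (11a) and (11b): the control mode is undisturbed, while Alice's bit-flip leaves Bob with |Φ−⟩ and flips Eve's probe to |1⟩.

import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])      # first qubit controls the second

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
psi_minus = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)   # |Psi->
phi_minus = (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2)   # |Phi->

# Qubit ordering: home (h), travel (t), Eve's probe (x)
Q = np.kron(I2, CNOT)                 # travel qubit controls the probe; Q is self-inverse
init = np.kron(psi_minus, ket0)       # |Psi->_{ht} |0>_x

# Control mode (no encoding): Bell state and probe are left untouched, cf. (11a)
out_id = Q @ Q @ init
print(np.allclose(out_id, np.kron(psi_minus, ket0)))          # True

# Bit-flip encoding: Bob decodes |Phi->, Eve's probe is flipped to |1>, cf. (11b)
encode_x = np.kron(np.kron(I2, X), I2)
out_x = Q @ encode_x @ Q @ init
print(np.allclose(out_x, np.kron(phi_minus, ket1)))           # True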
The explicit form of the P-circuit from Fig. 2 does not provide hints how it can be generalized to protocols formulated for qutrits [9] or qudits [10]. However, the key properties identified in (10) can be easily transferred to systems of higher dimension. The conditions (10) can be rewritten in the form

Q|kt⟩|αE(m)⟩ → |kt⟩|δE(m⊕k)⟩, k, m = 0, . . . , D − 1, (13)

where ⊕ denotes addition modulo D – the dimension of the particles used for signalling – and |αE(k)⟩ and |δE(k)⟩ are the sets of D distinguishable states in the ancilla system. The inverse action can be written as

Q−1|kt⟩|δE(m)⟩ → |kt⟩|αE(m⊖k)⟩, k, m = 0, . . . , D − 1. (14)
The expressions (13) and (14) are in fact the main contribution of the paper
– they define sufficient conditions of undetectable eavesdropping.
Let the system of legitimate parties and the ancilla be initially in the decoupled state

|ψhtE⟩ = (1/√D) Σ_{k=0}^{D−1} |kh⟩|kt⟩ ⊗ |αE(0)⟩. (15)

When the travel qudit arrives at Alice, the state of the system under attack is given as

Q|ψhtE⟩ = (1/√D) Σ_{k=0}^{D−1} |kh⟩|kt⟩|δE(k)⟩. (16)
The correlation of the outcomes of the control measurements is still preserved. Alice uses the following generalizations of the phase-flip and bit-flip operators

Z = Σ_{k=0}^{D−1} ω^k |k⟩⟨k|, X = Σ_{k=0}^{D−1} |k ⊕ 1⟩⟨k|, ω = e^{j2π/D}. (17)

This way she can encode two classic "cdits" µ, ν (i.e. symbols from the 0, . . . , D − 1 alphabet) in a single protocol transaction. Information encoding transforms the state of the system to

Xt^µ Zt^ν Q|ψhtE⟩ = (1/√D) Σ_{k=0}^{D−1} ω^{kν} |kh⟩|(k ⊕ µ)t⟩|δE(k)⟩. (18)

On its way back to Bob, the travel qudit is disentangled from the ancilla according to Formula (14)

Q−1 Xt^µ Zt^ν Q|ψhtE⟩ = (1/√D) Σ_{k=0}^{D−1} ω^{kν} |kh⟩ Q−1|(k ⊕ µ)t⟩|δE(k)⟩ (19)
= {(1/√D) Σ_{k=0}^{D−1} ω^{kν} |kh⟩|(k ⊕ µ)t⟩} ⊗ |αE(0⊖µ)⟩. (20)

Let us note that the expression in braces describes the state of the signal particles that the legitimate parties expect to find. Thus the introduced coupling is not detectable in the control mode nor does it introduce errors in the transmission mode. On the other hand, Eve can unambiguously detect the value of the cdit µ provided that the states |αE(k)⟩ are properly selected.
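For a concrete illustration of conditions (13) and (14), one admissible choice of Q is a qudit SUM gate that adds the travel qudit to a D-dimensional ancilla. The sketch below is an assumption-laden illustration, not the paper's implementation; it checks for D = 3 that Q−1 Xt^µ Zt^ν Q leaves the legitimate registers in the expected dense-coded state while the ancilla ends up in |0 ⊖ µ⟩, from which Eve reads µ.

import numpy as np

D = 3
w = np.exp(2j * np.pi / D)
ket = lambda k: np.eye(D)[k]
Xd = np.roll(np.eye(D), 1, axis=0)                  # X|k> = |k (+) 1>, cf. Eq. (17)
Zd = np.diag([w ** k for k in range(D)])            # Z|k> = w^k |k>, cf. Eq. (17)

# SUM gate on (travel, ancilla): |k>|m> -> |k>|m (+) k>, a valid choice for Q in (13)
SUM = np.zeros((D * D, D * D))
for k in range(D):
    for m in range(D):
        SUM[k * D + (m + k) % D, k * D + m] = 1.0

Ih = np.eye(D)
Q = np.kron(Ih, SUM)                                # home qudit is untouched
Qinv = Q.conj().T

init = sum(np.kron(np.kron(ket(k), ket(k)), ket(0)) for k in range(D)) / np.sqrt(D)   # Eq. (15)

mu, nu = 2, 1                                       # Alice's two cdits
A = np.linalg.matrix_power(Xd, mu) @ np.linalg.matrix_power(Zd, nu)
out = Qinv @ np.kron(np.kron(Ih, A), np.eye(D)) @ Q @ init

expected_ht = sum(w ** (k * nu) * np.kron(ket(k), ket((k + mu) % D))
                  for k in range(D)) / np.sqrt(D)
expected = np.kron(expected_ht, ket((-mu) % D))     # ancilla in |0 (-) mu>, cf. Eq. (20)
print(np.allclose(out, expected))                   # True: Eve reads mu, Bob sees no change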

4 Conclusion
A generic attack that provides undetectable eavesdropping of dense coded information in the ping-pong protocol is proposed. It can be considered as a generalization of the P-circuit [14]. In contrast to the P-circuit, the introduced scheme does not refer to the vacuum state, so it can be applied to the protocol working under the perfect quantum channel assumption. The provided analysis revealed that the trivial CNOT gate and the quite complicated P-circuit are equivalent in terms of provided information gain, detectability and practical feasibility. Consequently, there is no need to design control modes that address the P-circuit in a special way [16] in spite of its superficial specificity. The identification of the essential properties of the attack permitted its generalization to systems of any dimension. The existence of attacks with similar properties has already been forecast in relation to qubit [2], qutrit [9] and qudit [11] based protocols. However, no explicit form of the attack transformation has been given. The presented result can be considered as a constructive proof of their existence.

Acknowledgements. The author acknowledges support by the Ministry of Science and Higher Education funding for statutory activities and the Rector of Silesian University of Technology grant number 02/030/RGJ17/0025 in the area of research and development.

References
1. Long, G.L., Liu, X.S.: Theoretically efficient high-capacity quantum-key-distribution scheme. Phys. Rev. A 65, 032302 (2002)
2. Deng, F.G., Long, G.L., Liu, X.S.: Two-step quantum direct communication protocol using the Einstein-Podolsky-Rosen pair block. Phys. Rev. A 68, 042317 (2003)
3. Boström, K., Felbinger, T.: Deterministic secure direct communication using entanglement. Phys. Rev. Lett. 89(18), 187902 (2002)
4. Ostermeyer, M., Walenta, N.: On the implementation of a deterministic secure coding protocol using polarization entangled photons. Opt. Commun. 281(17), 4540–4544 (2008)
5. Boström, K., Felbinger, T.: On the security of the ping-pong protocol. Phys. Lett. A 372(22), 3953–3956 (2008)
6. Cerè, A., Lucamarini, M., Di Giuseppe, G., Tombesi, P.: Experimental test of two-way quantum key distribution in the presence of controlled noise. Phys. Rev. Lett. 96, 200501 (2006)
7. Chen, H., Zhou, Z.Y., Zangana, A.J.J., Yin, Z.Q., Wu, J., Han, Y.G., Wang, S., Li, H.W., He, D.Y., Tawfeeq, S.K., Shi, B.S., Guo, G.C., Chen, W., Han, Z.F.: Experimental demonstration on the deterministic quantum key distribution based on entangled photons. Sci. Rep. 6, 20962 (2016)
8. Wang, C., Deng, F.G., Long, G.L.: Multi-step quantum secure direct communication using multi-particle Green-Horne-Zeilinger state. Opt. Commun. 253(13), 15–20 (2005)
9. Vasiliu, E.V.: Non-coherent attack on the ping-pong protocol with completely entangled pairs of qutrits. Quant. Inf. Process. 10(2), 189–202 (2010)
10. Zawadzki, P.: Security of ping-pong protocol based on pairs of completely entangled qudits. Quant. Inf. Process. 11(6), 1419–1430 (2012)
11. Zawadzki, P., Puchała, Z., Miszczak, J.: Increasing the security of the ping-pong protocol by using many mutually unbiased bases. Quant. Inf. Process. 12(1), 569–575 (2013)
12. Cai, Q., Li, B.: Improving the capacity of the Boström-Felbinger protocol. Phys. Rev. A 69, 054301 (2004)
13. Wang, C., Deng, F.G., Li, Y.S., Liu, X.S., Long, G.L.: Quantum secure direct communication with high-dimension quantum superdense coding. Phys. Rev. A 71, 044305 (2005)
14. Pavičić, M.: In quantum direct communication an undetectable eavesdropper can always tell ψ from φ Bell states in the message mode. Phys. Rev. A 87, 042326 (2013)
15. Zawadzki, P.: An improved control mode for the ping-pong protocol operation in imperfect quantum channels. Quant. Inf. Process. 14(7), 2589–2598 (2015)
16. Zhang, B., Shi, W.X., Wang, J., Tang, C.J.: Quantum direct communication protocol strengthening against Pavičić's attack. Int. J. Quant. Inf. 13(07), 1550052 (2015)
17. Brandt, H.E.: Entangled eavesdropping in quantum key distribution. J. Mod. Opt. 53(16–17), 2251–2257 (2006)
18. Shapiro, J.H.: Performance analysis for Brandt's conclusive entangling probe. Quant. Inf. Process. 5(1), 11–24 (2006)
A Qutrit Switch for Quantum Networks

Joanna Wiśniewska1(✉) and Marek Sawerwain2

1 Institute of Information Systems, Faculty of Cybernetics,
Military University of Technology, Kaliskiego 2, 00-908 Warsaw, Poland
[email protected]
2 Institute of Control and Computation Engineering,
University of Zielona Góra, Licealna 9, Zielona Góra 65-417, Poland
[email protected]

Abstract. The chapter contains information about a construction of a quantum switch for qutrits. It is an expansion of the idea of the quantum switch for qubits. For networks based on quantum effects the switch could realize the operation of changing the direction of transferred information. The presented definition of the quantum switch may be easily generalized to qudits. The chapter also contains figures representing circuits realizing the quantum switch and a proof of its correctness. Additionally, there is an analysis of the entanglement level in the circuit which also characterizes the correct operating of the switch.

Keywords: Quantum information transfer · Quantum switch · Qutrits

1 Introduction
The features and properties of quantum information – e.g. the biological [10] and quantum models [8, 9], the no-cloning theorem – cause some operations used nowadays as parts of algorithms or performed in computer networks to need new definitions. One of these issues is a switch swapping the signals in an electronic circuit.
In [16] a definition and construction of a quantum switch operating on qubits were presented. The constant development and research of quantum computing covers also the issue of information units with a freedom level greater than two [5, 6, 11]. That causes the need to define a quantum switch for qutrits and generally for qudits, which could be used in the computer networks of the future, but also in processors and in logic arrays of quantum circuits [12, 13].
In this chapter we discuss one of the possible realizations of a quantum switch for qutrits, i.e. qudits with three levels of freedom. Just like the previously known switch for qubits, the quantum switch for qutrits swaps the information between inputs A and B when the third, so-called, control signal is |2⟩. When the control signal is equal to |0⟩ or |1⟩ then the operation of swapping is not performed. Moreover, the problem of discrete-time switched dynamical systems [1] is still discussed in many areas of application for quantum and classical systems.

The remainder of the chapter is organized in the following way: in Sect. 2 we introduce the basic definitions of gates used in the construction of the quantum switch for qutrits. Section 3 contains the circuits implementing the switch and a proof of their correct operating. In Sect. 4 the evolution of the entanglement level for exemplary quantum states is presented. The values were calculated for each computational step during the operating process of the switch. A summary and a draft of further works are presented in Sect. 5. The last element of the chapter is a references section.

2 Preliminary Definitions
In this section we present some basic information and definitions concerning quantum computing. A more detailed introduction may be found in many textbooks [4, 15].
One of the most important notions in quantum computing is a qubit, which is a normalized vector in the Hilbert space H2. The definition of a qubit, more precisely the definition of a qubit's state expressed in Dirac notation as |ψ⟩, is a superposition of two vectors:

|ψ⟩ = α|0⟩ + β|1⟩, (1)

where α, β ∈ C and |α|² + |β|² = 1. The normalized vectors |0⟩, |1⟩ may be presented as:

|0⟩ = [1, 0]^T, |1⟩ = [0, 1]^T. (2)

The vectors |0⟩, |1⟩ constitute the so-called standard computational base, but a computational base for qubits may be set on any two orthonormal vectors.
However, in this chapter we use a qudit as the unit of quantum information. A qudit is a generalization of the qubit. The qudit state |φ⟩ we express as a superposition of d orthonormal vectors:

|φ⟩ = Σ_{i=0}^{d−1} αi|i⟩, (3)

where Σ_{i=0}^{d−1} |αi|² = 1 and |i⟩ denotes the i-th state of a chosen computational base.
Just like classical bits, qudits (e.g. qubits) may be joined into a so-called quantum register. The n-qudit state/register |Ψ⟩ may be presented as a tensor product:

|Ψ⟩ = |ψ0⟩ ⊗ |ψ1⟩ ⊗ |ψ2⟩ ⊗ ... ⊗ |ψn−1⟩. (4)

Usually the quantum register consists of qudits sharing the same freedom level d. However, there are also known solutions like the quantum version of the k-nearest neighbours algorithm [17] where qubits and qudits are used in one quantum register.

Remark 1. It is a very important issue that not all quantum states can be expressed with use of a tensor product. If such a decomposition is not possible then the state is entangled. Quantum entanglement is an extremely important element of many algorithms and protocols, e.g. quantum teleportation [3] and quantum communications [14].
The evolution of a quantum register's state, in the so-called quantum circuit model, is performed with use of unitary operations and the operation of measurement.
Remark 2. To solve the problems presented in this chapter the operation of measurement is not required, so we omit the details concerning this issue, but further information can be found in quantum computing textbooks, e.g. [15].
The unitary operations are realized with use of quantum gates, which may be classified as 1-qubit/1-qudit gates and n-qubit/n-qudit gates where n > 1. The 1-qubit quantum gates X, Y and Z are termed Pauli gates. The controlled negation gate (CNOT) is an example of a 2-qubit gate.
An operation performed by the 1-qubit gate X (called also a NOT gate) can be expressed as:

X|ψ⟩ = X(α|0⟩ + β|1⟩) = β|0⟩ + α|1⟩ = |ψ′⟩,

X|ψ′⟩ = X(β|0⟩ + α|1⟩) = α|0⟩ + β|1⟩ = |ψ⟩. (5)

The X gate swaps the values of α and β, so the double application of this gate allows one to return to the initial state of the qubit |ψ⟩. For qudits, when the basis states are defined with value d, the X gate performs:

X|j⟩ = |j ⊕ 1⟩, (6)

where j ⊕ 1 stands for the operation (j + 1) mod d.
The circuit performing the operation of the quantum switch needs the CNOT gate. Formally, the operation realized by the CNOT gate is expressed as:

CNOT|ab⟩ = |a, a ⊕ b⟩, (7)

where ⊕ stands for addition modulo 2. This operation for qudits is the addition modulo d.

Remark 3. The generalized CNOT gate is not self-adjoint, i.e. CNOT ≠ CNOT†, where † denotes the Hermitian adjoint; consequently CNOT · CNOT ≠ I, where I is the identity matrix. It means that the operation of SWAP cannot be realized for qudits with use of three CNOT gates (see Fig. 1).

Remark 3 implies that it is necessary to present a new CNOT-like gate. This gate should allow one to build the SWAP gate in a similar way as we can do it for qubits. In [7] a gate CX̃ is described. This gate is self-adjoint and the SWAP gate can be built with use of the CX̃ gate as in Fig. 1.

An operation performed by the CX̃ gate on qudits |a⟩, |b⟩ expressed in the standard computational base is

CX̃|ab⟩ = |a, −a ⊖ b⟩, (8)

where −a ⊖ b = (−a − b) mod d.
Remark 4. For d = 2 the CX̃ gate is an equivalent of the CNOT gate because (−a − b) mod 2 = (a + b) mod 2.
Fig. 1. The SWAP gate and the circuits (see also paper [2]) implementing it for qubits and qudits. The graphical representation of the SWAP gate used in this chapter is presented as subfigure (a), its implementation is given in (b) and the SWAP gate for qudits with the CX̃ gate is presented as subfigure (c)

The construction of the quantum switch also needs a gate with two control qubits. The mentioned gate is called the Toffoli gate. For qubit states the Toffoli gate performs the following operation:

T|abc⟩ = |ab(ab ⊕ c)⟩. (9)

Generalization of this gate of course needs the use of addition modulo d:

TQ|abc⟩ = |ab((ab + c) mod d)⟩. (10)

However, in the definition of the quantum switch an additional gate CCX̃ is used. The CCX̃ gate needs two control signals |a⟩ and |b⟩:

CCX̃|abc⟩ = |ab((−a · −b − c) mod d)⟩. (11)

Remark 5. The CCX̃ gate is not equivalent to the TQ gate. There are some configurations of control lines (signals) where these two gates work in the same way, e.g. for control signals with the first qutrit equal to |0⟩. In the other cases, when the first qutrit is in the state |1⟩ or |2⟩, the mentioned gates operate differently.

3 Quantum Switch for Qutrits

In [16] a quantum switch is defined as a 3-qubit, so-called, Controlled-SWAP gate which swaps the states of the first two qubits according to the state of the third qubit. It means that we can divide the operations performed by the quantum switch into two cases.
Let us denote the state of the first qubit as |A⟩ and the state of the second qubit as |B⟩. In the first case the state of the third qubit is |0⟩:

|A⟩|B⟩|0⟩ ⇒ |A⟩|B⟩|0⟩. (12)

As we can see, in this case the quantum switch does not affect the quantum state. In the second case the state of the third qubit is |1⟩:

|A⟩|B⟩|1⟩ ⇒ |B⟩|A⟩|1⟩. (13)

Now the states of the first two qubits are swapped and this action is the aim of the quantum switch's work. Because of the presence of the third qubit, taking state zero or one, it may be stated that the quantum switch is controlled with classical Boolean values.
Utilizing the CNOT and Toffoli gates, defined in Sect. 2, we are able to specify the circuits realizing the quantum switch. Examples of the mentioned circuits, described in [16], are shown in Fig. 2.

Fig. 2. The circuits realizing the operation of quantum switch for qubits. If the
control state is |0⟩ (case (a)) the switch does not change the order of first two input
states. When the control state is expressed as |1⟩ (case (b)) the quantum switch
swaps the input states
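As a quick illustration of the qubit case, the sketch below verifies one standard CNOT/Toffoli decomposition of the Controlled-SWAP gate. This is a generic construction used here only for illustration; it is not necessarily the exact circuit arrangement of [16] shown in Fig. 2.

import numpy as np

dim = 2 ** 3     # register ordering: control c, then A, then B

def perm_gate(f):
    # Permutation matrix of the basis map |c, a, b> -> |f(c, a, b)>
    U = np.zeros((dim, dim))
    for c in range(2):
        for a in range(2):
            for b in range(2):
                cc, aa, bb = f(c, a, b)
                U[(cc << 2) | (aa << 1) | bb, (c << 2) | (a << 1) | b] = 1.0
    return U

cnot_b_to_a = perm_gate(lambda c, a, b: (c, a ^ b, b))        # CNOT: B controls A
toffoli = perm_gate(lambda c, a, b: (c, a, b ^ (c & a)))      # Toffoli: c and A control B

cswap = cnot_b_to_a @ toffoli @ cnot_b_to_a                   # candidate Controlled-SWAP

# Reference behaviour: swap A and B exactly when the control qubit is |1>
fredkin = perm_gate(lambda c, a, b: (c, b, a) if c else (c, a, b))
print(np.allclose(cswap, fredkin))                            # True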

The quantum switch for qudits may be defined similarly to the switch for qubits. Let us assume that |A⟩ and |B⟩ represent qudit states

|A⟩|B⟩|n⟩ ⇒ |A⟩|B⟩|n⟩, (14)

where n = 0, 1, 2, 3, . . . , d − 2.
The states |A⟩ and |B⟩ will be swapped when the state of the third qudit equals |d − 1⟩:

|A⟩|B⟩|d − 1⟩ ⇒ |B⟩|A⟩|d − 1⟩. (15)

Unfortunately, utilizing the CX̃ and CCX̃ gates is not sufficient to obtain a correctly working switch. If the state of the control qudit is |d − 1⟩ then the switch swaps the states of the first two qudits. In the other cases an error in the probability amplitude distribution will be introduced into the system because of the construction of the CX̃ and CCX̃ gates.
To avoid the above mentioned situation, error correction is necessary when the state of the control qudit is |n⟩. An appropriate gate has to be added to the circuit realizing the quantum switch. Figure 3 depicts the circuit for the generalized switch. The gate marked as CB performs the correction only if the state of the first qudit is not |d − 1⟩.

Fig. 3. The circuits realizing the operation of quantum switch for qudits. If the
control state is |n⟩ where n = 0, 1, 2, 3, . . . , d − 2 the switch does not change the
order of first two input states. When the control state is expressed as |d − 1⟩ the
quantum switch swaps the input states. The block SB denotes the swap operations
and CB represents the gate for rearranging amplitudes in subspaces where the state
of the third qudit is different than |d − 1⟩

3.1 Correctness of Qutrit Switch Gate

In this section a proof of algebraic correctness is presented. The proof concerns the circuit realizing a quantum switch for qutrits.
It is assumed that there are three qutrits. The states |A⟩ and |B⟩ are unknown:

|A⟩ = α0|0⟩ + β0|1⟩ + γ0|2⟩, |B⟩ = α1|0⟩ + β1|1⟩ + γ1|2⟩. (16)

The third qutrit works as a control state and the only values that may be assigned to it are: |0⟩, |1⟩ or |2⟩.
The first case to analyze is when the control qutrit is in the state |2⟩. The initial state may then be expressed as:

|ψ02⟩ = |A⟩ ⊗ |B⟩ ⊗ |2⟩ = α0α1|002⟩ + α0β1|012⟩ + α0γ1|022⟩
+ α1β0|102⟩ + β0β1|112⟩ + β0γ1|122⟩
+ α1γ0|202⟩ + β1γ0|212⟩ + γ0γ1|222⟩; (17)

in the denotation of the state |ψ02⟩ the superscript (here 2) refers to the control state while the subscript (here 0) describes the number of the computational step Si (zero stands for the initial state).
Performing the computational step S1, that is the CX̃ operation with the second qutrit as the control signal, the following state will be obtained:

|ψ12⟩ = α0α1|002⟩ + α0γ1|012⟩ + α0β1|022⟩


+ β0γ1|102⟩ + β0β1|112⟩ + α1β0|122⟩
+ β1γ0|202⟩ + α1γ0|212⟩ + γ0γ1|222⟩. (18)

The step S2 causes a reorganization of the probability amplitude distribution:

|ψ22⟩ = α0α1|002⟩ + α0γ0|012⟩ + α0β0|022⟩


+ β1γ0|102⟩ + β0β1|112⟩ + α0β1|122⟩
+ β0γ1|202⟩ + α0γ1|212⟩ + γ0γ1|222⟩. (19)

After the step S3 we obtain the correct final state for this case:

|ψ32⟩ = α0α1|002⟩ + α1β0|012⟩ + α1γ0|022⟩


+ α0β1|102⟩ + β0β1|112⟩ + β1γ0|122⟩
+ α0γ1|202⟩ + β0γ1|212⟩ + γ0γ1|222⟩ = |B⟩ ⊗ |A⟩ ⊗ |2⟩. (20)

The step S4 does not change the values of amplitudes. The error correction
operation is needed when the control qutrit equals |0⟩ or |1⟩ (in these two
cases we expect the final state to be, respectively, |AB0⟩ or |AB1⟩).
When the control qutrit is |0⟩ then the state of the system after the step
S3 is:

|ψ30⟩ = α0α1|000⟩ + α0β1|010⟩ + α0γ1|020⟩


+ γ0γ1|100⟩ + α1γ0|110⟩ + β1γ0|120⟩
+ β0β1|200⟩ + β0γ1|210⟩ + α1β0|220⟩ (21)

The correct final state in this case will be obtained after the change of the amplitudes' distribution in the computational step S4:

|ψ40⟩ = α0α1|000⟩ + α0β1|010⟩ + α0γ1|020⟩
+ α1β0|100⟩ + β0β1|110⟩ + β0γ1|120⟩
+ α1γ0|200⟩ + β1γ0|210⟩ + γ0γ1|220⟩ = |A⟩ ⊗ |B⟩ ⊗ |0⟩. (22)

For the control qutrit |1⟩ the state of the system after the step S3 is:

|ψ31⟩ = α0α1|001⟩ + γ0γ1|011⟩ + β0β1|021⟩


+ α1β0|101⟩ + α0γ1|111⟩ + β1γ0|121⟩
+ α1γ0|201⟩ + β0γ1|211⟩ + α0β1|221⟩ (23)

Just like before the error correction is needed in the computational step S4 to
obtain the final state:

|ψ41⟩ = α0α1|001⟩ + α0β1|011⟩ + α0γ1|021⟩


+ α1β0|101⟩ + β0β1|111⟩ + β0γ1|121⟩
+ α1γ0|201⟩ + β1γ0|211⟩ + γ0γ1|221⟩. (24)

Remark 6. The control qutrit (generally the control qudit) accepts the
classical states, so the switch may be called a quantum switch with the
classical control.

4 The Level of Entanglement During Information Switching

The correct operating of the quantum switch can be evaluated by the level of entanglement. In the computational steps S1 and S2 the CNOT gate was used, so it is expected to cause entanglement in the system. The level of this entanglement should decrease after the step S3, where the initial states are to be swapped. If the swapping is completed, the level of entanglement is equal to zero. Performing the step S4 does not affect the level of entanglement.

Fig. 4. The values of entanglement level during the work of quantum switch for
qutrits. The results were obtained with use of Concurrence measure after every
computational step: S1, S2, S3, S4 (the other values are interpolated). Three cases
were analyzed respecting the value of control qutrit: |2⟩, |1⟩, |0⟩

If the signals were not swapped then the level of entanglement does not decrease after the step S3. In this case we will obtain neither the state |AB0⟩ nor |AB1⟩. However, after the error correction in step S4 the level of entanglement should be equal to zero.
To evaluate the level of entanglement the Concurrence measure is used:

C(|AB⟩) = √(2 (1 − Tr(σA²))), (25)

where σA² is the square of the state matrix after the operation of partial trace, i.e. after the rejection of the |B⟩ state and the state of the control qutrit.
Figure 4 presents the values of the entanglement level calculated with use of the Concurrence measure. The measure was utilized only for the states obtained after the steps S1, S2, S3 and S4. The other values are interpolated. Despite the simplification introduced by the interpolation, it may be noticed that the level of entanglement changes with the following computational steps. If the states are swapped the entanglement decreases to zero after the step S3. The chart refers to the switch operating on the two states:

|A⟩ = √(2/3)|0⟩ + √(1/3)|2⟩, |B⟩ = √(2/5)|0⟩ + √(3/5)|1⟩. (26)
Naturally, before the first computational step S1 the value of
entanglement is zero because the input state for the switch is a fully
separable state.
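A minimal numerical sketch (not the authors' code) of evaluating Eq. (25): the first qutrit's reduced state is obtained by tracing out |B⟩ and the control qutrit, and the Concurrence of the separable input built from the states of Eq. (26) is indeed zero before step S1.

import numpy as np

d = 3
ket = lambda k: np.eye(d)[k]

def concurrence_A(state):
    # C = sqrt(2 (1 - Tr sigma_A^2)) for the first qutrit of a 3-qutrit pure state
    psi = state.reshape(d, d * d)          # split A | (B, control)
    sigma_A = psi @ psi.conj().T           # partial trace over B and the control qutrit
    return np.sqrt(max(0.0, 2.0 * (1.0 - np.real(np.trace(sigma_A @ sigma_A)))))

A = np.sqrt(2 / 3) * ket(0) + np.sqrt(1 / 3) * ket(2)
B = np.sqrt(2 / 5) * ket(0) + np.sqrt(3 / 5) * ket(1)
inp = np.kron(np.kron(A, B), ket(2))       # fully separable input |A>|B>|2>
print(round(concurrence_A(inp), 6))        # 0.0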

5 Conclusions
In this chapter we presented a quantum switch for qutrits. The switch is controlled by the state of the third qutrit. The first two states are swapped when the state of the third qudit is |2⟩. The proposed solution is similar to the switch for qubits. Our aim was to define a switch which imitates the behavior of the previously known solution, but works for qutrits. The presented circuit and its correctness (see Sect. 3.1) allow us to conclude that it is possible to build such a switch. However, the construction of the qubit switch using CNOT and Toffoli gates is not open to simple generalization for qudits. As presented in Fig. 3, the circuit realizing the quantum switch for qutrits needs the CCX̃ gate, which is not equivalent to the Toffoli gate according to Remark 5.

Acknowledgements. We would like to thank the Q-INFO group at the Institute of Control and Computation Engineering (ISSI) of the University of Zielona Góra, Poland, for useful discussions. We would also like to thank the anonymous referees for useful comments on the preliminary version of this paper. The numerical results were obtained using the hardware and software available at the "GPU µ-Lab" located at the Institute of Control and Computation Engineering of the University of Zielona Góra, Poland.

References
1. Babiarz, A., Czornik, A., Klamka, J., Niezabitowski, M.: The selected problems of controllability of discrete-time switched linear systems with constrained switching rule. Bull. Pol. Acad. Sci. Tech. Sci. 63(3), 657–666 (2015). doi:10.1515/bpasts-2015-0077
2. Balakrishnan, S.: Various constructions of Qudit SWAP gate. Phys. Res. Int. 2014, Article ID 479320 (2014)
3. Bennett, C.H., Brassard, G., Crepeau, C., Jozsa, R., Peres, A., Wootters, W.: Teleporting an unknown quantum state via dual classical and EPR channels. Phys. Rev. Lett. 70, 1895–1899 (1993)
4. Brüning, E., Petruccione, F.: Theoretical Foundations of Quantum Information Processing and Communication. Springer, Berlin (2010)
5. Daboul, J., Wang, X., Sanders, B.C.: Quantum gates on hybrid qudits. J. Phys. A: Math. Gen. 36(10), 2525–2536 (2003)
6. Di, Y.M., Wei, H.R.: Synthesis of multivalued quantum logic circuits by elementary gates. Phys. Rev. A 87, 012325 (2013)
7. Garcia-Escartin, J.C., Chamorro-Posada, P.: A SWAP gate for qudits. Quant. Inf. Process. 12(12), 3625–3631 (2013)
8. Hayashi, M., Ishizaka, S., Kawachi, A., Kimura, G., Ogawa, T.: Introduction to Quantum Information Science. Springer, Berlin (2015)
9. Klamka, J., Gawron, P., Miszczak, J., Winiarczyk, R.: Structural programming in quantum octave. Bull. Pol. Acad. Sci. Tech. Sci. 58(1), 77–88 (2010)
10. Kuppusamy, L., Mahendran, A.: Modelling DNA and RNA secondary structures using matrix insertion-deletion systems. Int. J. Appl. Math. Comput. Sci. 26(1), 245–258 (2016)
11. Landau, A., Aharonov, Y., Cohen, E.: Realisation of qudits in coupled potential wells. Int. J. Quant. Inf. 14(5), 1650029 (2016)
12. Metodi, T.S., Thaker, D.D., Cross, A.W., Chong, F.T., Chuang, I.L.: A quantum logic array microarchitecture: scalable quantum data movement and computation. In: Proceedings of 38th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-38, p. 12 (2005)
13. Mohammadi, M.: Radix-independent, efficient arrays for multi-level n-qudit quantum and reversible computation. Quant. Inf. Process. 14, 2819–2832 (2015)
14. Muralidharan, S., Li, L., Kim, J., Lütkenhaus, N., Lukin, M.D., Jiang, L.: Optimal architectures for long distance quantum communication. Sci. Rep. 6, Article no. 20463 (2016)
15. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press, New York (2000)
16. Ratan, R., Shukla, M.K., Oruc, A.Y.: Quantum switching networks with classical routing. In: 2007 41st Annual Conference on Information Sciences and Systems, Baltimore, MD, pp. 789–793 (2007)
17. Schuld, M., Sinayskiy, I., Petruccione, F.: Quantum computing for pattern classification. In: Pham, D.-N., Park, S.-B. (eds.) PRICAI 2014. LNCS (LNAI), vol. 8862, pp. 208–220. Springer, Cham (2014). doi:10.1007/978-3-319-13560-1_17

SLA Life Cycle Automation and Management for Cloud Services

Waheed Aslam Ghumman and Alexander Schill(✉)

Technische Universität Dresden, Dresden, Germany
{Waheed-Aslam.Ghumman,Alexander.Schill}@tu-dresden.de

Abstract. Cloud service providers mostly offer service level agreements (SLAs) in a descriptive format which is not directly consumable by a machine or a system. Manual management of SLAs with growing usage of cloud services can be a challenging, erroneous and tedious task, especially for cloud service users (CSUs) acquiring multiple cloud services. The necessity of automating the complete SLA life cycle (which includes SLA description in a machine readable format, negotiation, monitoring and management) becomes imminent due to complex requirements for the precise measurement of quality of service (QoS) parameters. In this work, the complete SLA life cycle management is presented using an extended SLA specification to support multiple CSU locations. A time efficient SLA negotiation technique is integrated with the extended SLA specification for concurrently negotiating with multiple cloud service providers (CSPs). After a successful negotiation process, the next major task in the SLA life cycle is to monitor the cloud services to ensure the quality of service according to the agreed SLA. A distributed monitoring approach for cloud SLAs, suitable for services being used at single or multiple locations, is elaborated in this work. The discussed monitoring approach reduces the number of communications of SLA violations to a monitoring coordinator by eliminating unnecessary communications. The presented work on the complete SLA life cycle automation is evaluated and validated with the help of experiments and simulations.

1 Introduction
The quality of a cloud service is parameterized by a service level agreement (SLA) between a cloud service user (CSU) and a cloud service provider (CSP). An agreement (SLA) between a CSU and a CSP is finalized by following different steps of the SLA life cycle, i.e. definition of business objectives, transforming the business objectives to service definitions, discovering the appropriate service providers and negotiating with them over the quality of service (QoS) parameters. Subsequently, monitoring and management of the final SLA are important phases of the SLA life cycle. The decision of choosing between a traditional solution and acquisition of a cloud service as a better replacement is influenced by budget constraints, financial benefits, performance expectations, expeditious availability and rapid elasticity of resources. These utilities of a cloud service over the

traditional solution may diminish due to inadequacy of the automation processes in different phases of the SLA life cycle (i.e., definition, negotiation, monitoring and management). For instance, manually negotiating with multiple cloud service providers may consume a significant amount of time and may compel costly delays. Cloud service providers generally offer SLAs in a descriptive/natural language format which is not directly consumable by a machine/system. A cloud service user is conventionally itself responsible to monitor and enforce a natural language based SLA by first manually transforming the SLA details into a suitable machine readable format. In this paper, the complete SLA life cycle management is presented, based on different automation components that are joined collectively. The rest of the paper is organized as given in the following. Section 2 describes a specification for cloud SLAs as a basic structure to describe SLAs in a machine readable format. A specification for distributed SLAs (deployed on more than one location) is also presented in Sect. 2, which extends the SLA specification presented in [5]. A time-efficient and automated negotiation technique for multiple providers, along with a suitable negotiation protocol, is elaborated in Sect. 3. Section 4 describes a distributed monitoring approach for cloud SLAs. Section 5 gives an overview of the complete framework. Results and analysis of different experiments are explained in Sect. 6. A comparison with related work is described in Sect. 7. In the end, conclusions and future directions are summarized in Sect. 8.

2 SLA Specification
In this section, a brief description of the “Structural Specification for the SLAs in Cloud Computing (S3LACC)” [5] is given; it is also extended (as S3LACC+) to support distributed SLAs. The S3LACC is a basic structure to define cloud SLAs in a machine readable format.
An SLA contains one or more service level objectives (SLOs), e.g. service availability, throughput or response time. Each SLO is measured by one or more metrics, e.g. the service availability SLO may contain metrics such as availability percentage, maximum number of outages per month and maximum duration per outage. S3LACC combines the SLA template parameters, negotiation, monitoring and management parameters in a single structure. The basic structure of S3LACC is based on the cloud SLA specific standards and guidelines.1,2 S3LACC classifies the fundamental components (service level objectives and metrics) of the cloud SLAs in a hierarchical way, which makes the extension of its basic structure an uncomplicated task. A priority level can be assigned to an SLO by changing its Weight. A metric can be a qualitative (e.g. service reliability) or a quantitative (e.g. service availability) metric, and both types of metrics contain different types of values (descriptive and numeric, respectively).

1 NIST Special Publication 500-307, available online at: http://www.nist.gov/itl/cloud/upload/RATAX-CloudServiceMetricsDescription-DRAFT-20141111.pdf.
2 Cloud service level agreement standardization guidelines by the European Commission (2014). Available online at: https://ec.europa.eu/digital-single-market/news/cloud-service-level-agreement-standardisation-guidelines.

Fig. 1. UML representation of S3LACC+



A metric can have a direct or inverse ratio type, i.e. if by increasing the value of a metric the utility level of the metric also increases then its ratio type is direct, otherwise inverse. A metric also contains one or more monitoring schedules which define the basic monitoring parameters, e.g. the location of the monitored data that is associated with that metric. A guarantee or an obligation has a similar structure, i.e. a precondition (which works as a triggering event) and an action that is performed if the precondition is true. In practical scenarios, a CSU may acquire a cloud service for using it at multiple locations simultaneously. In such cases, monitoring the cloud service to detect SLA violations at different locations becomes a complex task, e.g. if S3LACC is used without any change then it is not possible to assign location specific monitoring parameters in the same SLA. Moreover, one cloud service may offer different service levels for different global locations (a feature that may become part of future cloud services, if not available in the present services), which leads to different negotiation parameters for different locations. An extension to the S3LACC is required to fulfill the needs of the distributed nature of the SLAs. So, in this paper, an extension to the S3LACC is presented which is termed S3LACC+ and which enables the definition of SLAs that are suitable for multiple locations. The S3LACC+ contains an additional class Location along with the basic S3LACC parameters as shown in Fig. 1. By adding the LocationID to each metric, it becomes possible to have different negotiation parameters for different locations in the same SLA template. The monitoring parameters of only the relevant metrics can be passed to each location, which helps to enforce privacy among different locations. A single SLA can possibly contain segregated metrics data for each location using S3LACC+. A formal and detailed description of the basic S3LACC parameters is available in [5].
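To make the hierarchy tangible, the following sketch models the S3LACC+ structure as plain data classes. The actual framework is implemented in Java [5]; the class and field names below are illustrative assumptions, not the published schema.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Location:
    location_id: str
    monitoring_endpoint: str          # where the monitored data for this site is collected

@dataclass
class Metric:
    name: str                         # e.g. "availability_percentage"
    ratio_type: str                   # "direct" or "inverse"
    location_id: str                  # S3LACC+ addition: per-location parameters
    negotiation_range: Tuple[float, float]                    # (best value, worst/reserve value)
    monitoring_schedule: Dict[str, str] = field(default_factory=dict)

@dataclass
class SLO:
    name: str                         # e.g. "service availability"
    weight: float                     # priority level
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class SLA:
    provider: str
    locations: List[Location] = field(default_factory=list)
    slos: List[SLO] = field(default_factory=list)

# Example: one SLO measured at two locations with different negotiation ranges
sla = SLA(
    provider="CSP-A",
    locations=[Location("eu-1", "https://monitor.example/eu-1"),
               Location("us-1", "https://monitor.example/us-1")],
    slos=[SLO("service availability", weight=0.6, metrics=[
        Metric("availability_percentage", "direct", "eu-1", (99.99, 99.0)),
        Metric("availability_percentage", "direct", "us-1", (99.9, 98.5)),
    ])],
)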

3 SLA Negotiation
In this section, a dynamic and time-efficient SLA negotiation strategy (termed flip-flop [4]) is described. It basically uses S3LACC, but the same negotiation strategy is applied to S3LACC+ to support multiple locations in an SLA. The negotiation protocol used in the flip-flop strategy is based on Rubinstein's alternating offer protocol [17] with a few alterations, i.e. the negotiation process is limited by a deadline and offer acceptance is a two step process (one party sends an acceptance of an offer/counter-offer and the other party sends a confirmation). The two step acceptance protocol is helpful for negotiating concurrently with multiple CSPs, i.e. if one CSP sends an acceptance then the CSU can evaluate the negotiations with the other CSPs before making the final decision by estimating the expected final offers from all other CSPs using the concession extrapolation method as part of the flip-flop negotiation process. The flip-flop negotiation strategy uses a time duration as a deadline rather than a fixed number of negotiation rounds. Cloud services generally require quick provisioning, and having a fixed number of negotiation rounds as a deadline may delay the negotiation time duration due

to network delays. The flip-flop negotiation strategy works on the principle that the opponent's expected final offer at the end of the negotiation process is predicted after each negotiation round (before preparing an offer), and the same expected final offer from the opponent is tried to be reached earlier in time using the flip-flop strategy. This strategy aims to reduce the negotiation process time, which can be very advantageous in time-critical cloud services. The flip-flop negotiation strategy operates as given in the following:

– A CSU prepares its initial offers (until the third negotiation round) according to the negotiation parameters as described in the SLA template using the S3LACC+. The initial offer contains the most suitable values for the CSU.
– From the fourth offer, the previous three counter offers (along with the times at which they were received) are used as input to derive a function (αj(Tu) for each metric Mj at time Tu [4]) using the polynomial interpolation method, where:

αj(Tu) = Σ_{ι=v+1}^{v+3} [ Π_{κ=v+1, κ≠ι}^{v+3} (Tu − T(COκ^{b→a})) / (T(COι^{b→a}) − T(COκ^{b→a})) ] ζι,j^{b→a}, 0 ≤ v ≤ q, (1)

such that T(COκ^{b→a}) represents the point in time during the negotiation process when the κ-th counter offer from the CSP b is received by the CSU a, and ζι,j^{b→a} is the ι-th concession that the opponent b has offered between two counter offers.
– The CSP’s final offer is predicted with the help of function αj(Tu) (using
the polynomial extrapolation method).
– The CSU computes its concession to generate the next offer and adjusts the new offer with an extra amount of concession. The extra amount of concession is calculated using the decremented (by one) number of negotiation rounds, i.e. if the CSU's normal concession value was reaching an agreement in n rounds then the extra concession value reaches the agreement in n − 1 negotiation rounds. This step of increasing the CSU's concession is termed a flip. In formal terms, the CSU's concession is set by using a partial function γj(Tu) as given in the following (Eq. 2):

γj(Tu) = (Vj,q^{b→a} − Vj,k^{a→b}) / NRrem,  if Vj,w > Vj,q^{b→a},
γj(Tu) = (Vj,w − Vj,k^{a→b}) / NRrem,  if Vj,w ≤ Vj,q^{b→a},  (2)

where Vj,q^{b→a} is the expected final offer value from the CSP b to the CSU a for the metric Mj, Vj,w is the worst-possible/reserve value, Vj,k^{a→b} is the offer value from the CSU a to the provider b with normal concession and NRrem is the expected number of remaining negotiation rounds (considering the time consumed per negotiation round and the total amount of negotiation time).

– If the CSP responds with an equivalent or larger percentage increase in its concession then the CSU continues the flip step. However, in case of a negative response (due to a greedy strategy by the CSP), the CSU adjusts its regular concession and decreases it by the same amount that it was increased in the previous flip step. This process of decreasing the CSU's concession is termed the flop step.
– This flip-flop process continues in every negotiation round until an agreement is reached or the negotiation process times out.

In the context of S3LACC+, a custom negotiation strategy (e.g. linear, conceder or Boulware) can be dynamically integrated with the existing SLA structure by updating the DynamicConcessionValues parameter in the QuantitativeMetric class. The flip-flop negotiation strategy is evaluated in Sect. 6, outlining its potential benefits in detail.
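The two computational ingredients above can be sketched in a few lines. The code below is an illustration under simplifying assumptions (a single scalar metric, a direct-ratio utility); it is not the Java implementation of [4]. It extrapolates the opponent's final offer from the last three counter offers (Lagrange form, cf. Eq. (1)) and computes the per-round concession of Eq. (2); for the flip step the CSU would call the concession with the remaining number of rounds decremented by one.

import numpy as np

def expected_final_offer(times, offers, t_end):
    # Lagrange extrapolation of the opponent's offer value to the negotiation deadline
    t, v = np.asarray(times, float), np.asarray(offers, float)
    result = 0.0
    for i in range(len(t)):
        term = v[i]
        for k in range(len(t)):
            if k != i:
                term *= (t_end - t[k]) / (t[i] - t[k])
        result += term
    return result

def concession(v_expected_final, v_reserve, v_last_own, rounds_remaining):
    # Per-round concession gamma_j of Eq. (2)
    target = v_expected_final if v_reserve > v_expected_final else v_reserve
    return (target - v_last_own) / rounds_remaining

# Example: three counter offers received at t = 2, 5, 9 s, negotiation deadline at t = 30 s
final = expected_final_offer([2, 5, 9], [80.0, 83.0, 85.0], 30.0)
step = concession(final, v_reserve=95.0, v_last_own=99.0, rounds_remaining=6)
flip_step = concession(final, v_reserve=95.0, v_last_own=99.0, rounds_remaining=5)   # flip: one round fewer
print(round(final, 2), round(step, 2), round(flip_step, 2))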

4 SLA Monitoring and Management

After the successful negotiation process with one of the CSPs, a final/agreed SLA is stored at a central location by a monitoring coordinator (MC). The monitoring coordinator distributes the monitoring parameters to each location that intends to use the related cloud service. A distributed monitoring strategy is needed to efficiently report SLA violations at each location. The number of communications from each location to the MC is important due to the fact that an excessive number of communications may consume the network bandwidth and a smaller number of communications may result in missing an important event at a location. So, a precise method of distributed monitoring can decrease unnecessary communications while ensuring that important events are reported to the MC. The monitoring parameters defined in an SLA (using the S3LACC+ implementation) work as the basis of the SLA monitoring process. The distributed SLA monitoring strategy using partial violations [6] is combined with the S3LACC+ implementation to perform SLA monitoring for multiple locations as described in the following:

– Each SLO is assigned the minimum number of violations Vmin that must occur at a location before reporting to the MC.
– A partial violation value v (from the interval [0, 1]) is assigned to each metric at design time. This partial violation value is based on the type of violation, e.g. if a violation is minor then a smaller v is assigned to that violation and vice versa.
– When a violation occurs, its corresponding v value is calculated and added to the existing violation value total S. Whenever S ≥ 1, one violation is added to the existing number of violations (Vcurrent) for the SLO, and if Vcurrent ≥ Vmin then all collected violations are reported to the MC along with the necessary information (see the sketch after this list). After reporting, the counters are set to zero for the next monitoring round. In Sect. 6, the benefits of this threshold-based reporting approach are validated and further illustrated.
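The sketch below illustrates the partial-violation accumulation just described (an illustration only, not the authors' Java implementation): minor violations contribute fractional weight, and the monitoring coordinator is contacted only once Vmin full violations have accumulated.

class SLOMonitor:
    def __init__(self, v_min, report_to_mc):
        self.v_min = v_min            # minimum number of violations before reporting
        self.partial_sum = 0.0        # running total S of partial violation values
        self.violations = 0           # V_current
        self.pending = []             # violation details collected since the last report
        self.report_to_mc = report_to_mc

    def record(self, partial_value, details):
        # partial_value is the weight v in [0, 1] assigned to this violation type
        self.partial_sum += partial_value
        self.pending.append(details)
        if self.partial_sum >= 1.0:
            self.partial_sum = 0.0
            self.violations += 1
        if self.violations >= self.v_min:
            self.report_to_mc(self.pending)
            self.partial_sum, self.violations, self.pending = 0.0, 0, []

# Five minor outages (v = 0.5) trigger a single report instead of five separate messages
monitor = SLOMonitor(v_min=2, report_to_mc=lambda batch: print("report to MC:", len(batch), "events"))
for _ in range(5):
    monitor.record(0.5, {"slo": "availability", "type": "minor outage"})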
When a location reports SLA violation(s) to the MC, the guarantees and obligations parameters in the SLA are checked for the related location and if a precondition in a guarantee or obligation class is fulfilled then the corresponding action is taken. An action in a guarantee/obligation may include different SLA management tasks, e.g. a claim to be sent to the CSP for service credits along with the SLA violation data that is received from a location, sending a message to a financial system for deductions from the monthly payments to the CSP, renegotiating with the CSP depending on the ReNegotiationParameters in the SLA, or adjusting the service usage if an obligation (on the CSU side) requires so. A custom management task can be embedded in the action function of a guarantee or obligation, which makes this SLA specification very useful in the context of cloud services.
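A hedged sketch of the precondition/action handling described above (names are illustrative, not the S3LACC+ schema): each guarantee or obligation pairs a predicate evaluated against a reported violation with the management action to execute.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Guarantee:
    precondition: Callable[[Dict], bool]   # triggering condition, checked per violation report
    action: Callable[[Dict], None]         # e.g. claim service credits, notify finance, renegotiate

def handle_report(report: Dict, guarantees: List[Guarantee]) -> None:
    for g in guarantees:
        if g.precondition(report):
            g.action(report)

service_credits = Guarantee(
    precondition=lambda r: r["slo"] == "availability" and r["violations"] >= 3,
    action=lambda r: print(f"claim credits from {r['provider']} for {r['violations']} outages"),
)
handle_report({"slo": "availability", "violations": 4, "provider": "CSP-A"}, [service_credits])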

5 S3LACC+ Framework Overview

In this section, the collective functioning of the different phases of the SLA life cycle is described using S3LACC+. Generally, an SLA template and a final/agreed SLA are two different documents, whereas S3LACC+ combines them in a single document and each CSU and CSP maintain their personal copy of the SLA (which includes template parameters, negotiation, monitoring and management parameters). It is assumed that all CSPs share the same SLA structure using the S3LACC+ implementation. The complete SLA life cycle for the S3LACC+ framework is described in the following (see also Fig. 2):
– A CSU requests the SLA templates from all of the CSPs. These SLA
tem- plates contain the basic information, e.g. CSP name, maximum
negotiation time or names of possible negotiable SLOs and their
respective metrics. Each CSP may set its own deadline (different from
other CSPs) for the negotiation process in the SLA template. A CSU
may also set the deadline for the nego- tiation process with few
constraints, i.e. the CSU can not set the deadline that is greater than the
minimum of all CSPs’ deadlines and the minimum deadline among all
the CSPs and the CSU is shared with all CSPs included in the
concurrent negotiation process.
– The CSU prepares its SLA template (based on the SLA templates
acquired from the CSPs) and adds the negotiation parameters according
to its business objectives.
– The concurrent negotiation service starts the negotiation process with each CSP according to the SLA negotiation described in Sect. 3.
– After a successful negotiation with one of the CSPs, the CSU adds the monitoring parameters for each location and communicates them to the respective locations.
– Each location starts using the cloud service, and the marked SLA parameters are monitored according to the monitoring approach described in Sect. 4.
– SLA violations are reported to the MC according to the rules defined in
the monitoring parameters. The MC sends the SLA violations to the SLA
management service which consults the guarantees and obligations parts
of the SLAs to take the appropriate action against each SLA violation.

Fig. 2. An overview of the complete SLA life cycle management using S3LACC+

– As the SLA built using S3LACC+ is based on an object-oriented approach, it can be serialized to an XML file for interaction with external systems.

The S3LACC+ framework is useful for continuous change management and for repeating the whole SLA life cycle by muting the already negotiated parameters (those that require no further change) and by marking only a few metrics as negotiable based on the changes in business objectives.

6 Experiments and Validation


The S3LACC+ framework is implemented in Java and multiple experiments are performed to evaluate the complete SLA life cycle. The sample experiments are conducted with different numbers of SLOs and metrics, including quantitative and qualitative metrics. The negotiation process is tested by comparing the result (overall agreement utility) achieved with and without the flip-flop negotiation strategy in a concurrent setup. A CSP is allowed to adopt a greedy strategy with a 33% probability in all of the experiments. For instance, Table 1 shows the experiment results for a negotiation process that is concurrently completed with 10 CSPs. It can be noted that, in most of the cases, the flip-flop negotiation strategy performs better than a normal concession strategy.

This automated negotiation strategy eliminates human intervention during complex negotiation scenarios and modifies its concession amount depending on the response from the opponent, i.e. if an opponent responds positively then this strategy seeks to reach the same agreement (that is expected at the end of the negotiation process) in a smaller amount of time. If an opponent responds with a greedy approach to benefit from the increased concession (during the flip step) from the CSU, then the flop step aims to recover the loss made during the previous step by reducing the CSU's concession; a simplified sketch of this adjustment is given below. A graphical representation of the experiment results of Table 1 is shown in Fig. 3.
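The flip-flop behaviour can be caricatured by the sketch below. It is a deliberate simplification for illustration only: the published strategy [4] additionally uses concession extrapolation and a 3D utility function, and the flip_factor and the cooperativeness test are assumptions made here.

```python
def next_concession(base_step, opponent_moves, flip_factor=2.0):
    """Return the CSU's next concession step.

    opponent_moves holds the opponent's recent concession sizes; a shrinking
    (greedy) sequence triggers the flop step, which takes back part of the
    extra concession granted during the previous flip step.
    """
    if len(opponent_moves) < 2:
        return base_step
    cooperative = opponent_moves[-1] >= opponent_moves[-2]
    if cooperative:
        return base_step * flip_factor   # flip: concede faster to close the deal sooner
    return base_step / flip_factor       # flop: recover the loss of the previous step
```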

Fig. 3. Comparison of overall agreement utility achieved with and without flip-flop
negotiation strategy

A simulation service (also used in the previous related work [6]) is implemented which induces SLA violations for different numbers of SLOs to evaluate the effect on the total number of communications made to the monitoring coordinator. Table 2 shows the results of one experimental simulation for 10 SLOs. Figure 4 gives the graphical representation of the monitoring simulation for 4 SLOs. The monitoring simulation data for an experiment (shown in Table 2) includes the increasing number of induced partial violations in its second column. Columns 3 to 7 (in Table 2) classify the number of values that fall into the interval mentioned in the second row of the respective columns. The last column (in Table 2) represents the resulting number of communications to the monitoring coordinator. The partial violation limits for each metric of an SLO are set randomly. Multiple experiments with different numbers of SLOs show similar behavior in the total number of communications, which validates the consistency of the monitoring approach used for the S3LACC+ framework.

Table 1. Experimental results for overall agreement utility achieved with and without
using the flip-flop negotiation strategy

Agreement utility
Without flip-flop With flip-flop
CSP1 4.55 21.91
CSP2 39.27 30.59
CSP3 30.59 39.27
CSP4 13.23 13.23
CSP5 56.63 56.63
CSP6 30.59 39.27
CSP7 56.63 56.63
CSP8 47.95 56.63
CSP9 39.27 39.27
CSP10 30.59 39.27

Table 2. Experiment data and results with 10 SLOs

SLOs  Total partial violations  Number of partial violations per interval            Number of communications
                                [0,.2[   [.2,.4[   [.4,.6[   [.6,.8[   [.8,1]
10 20 8 0 1 7 4 0
10 40 5 9 8 7 11 2
10 60 11 15 10 7 17 4
10 80 13 17 18 15 17 8
10 100 18 16 20 26 20 7
10 120 19 22 26 25 28 12
10 140 32 33 20 28 27 12
10 160 34 33 36 27 30 17
10 180 46 32 32 39 31 14
10 200 35 37 47 37 44 18

7 Related Work and Analysis


In this section, a comparison of existing approaches for SLA specification, monitoring and management is described. First, an overall analysis is given that compares different SLA specification languages with S3LACC+ and its features with respect to the capabilities of SLA negotiation and monitoring/management. Qualitative metrics play a particularly important role in cloud SLAs, e.g. reliability is a major factor while selecting a cloud service and is in general a qualitative metric, requiring a different method of specification and negotiation than quantitative metrics. Most of the specification languages compared in Table 3 contain

Fig. 4. Experiment results using the monitoring simulation for 4 SLOs

no method for processing qualitative metrics, whereas S3LACC+ provides comprehensive support for them. Table 3 shows a brief feature-based comparison of WSLA [9], WS-Agreement [1], SLAng [12], SLA* [8], SLALOM [2], Stamou et al. [18], Joshi and Pearce [7], CSLA [11], Kotsokalis et al. [10] and S3LACC+.
In the second column of Table 3, the target domain (the original domain for which the specification was given) is listed; the next columns show whether SLA negotiation, monitoring and management are supported by the specification or not. The word Partial in the negotiation column indicates that either the negotiation parameters are only partially definable or a negotiation strategy is not integrated within the specification. S3LACC+ enables complete integration of the static and dynamic negotiation parameters. Also, S3LACC+ enables a user to include any custom negotiation strategy within the SLA template. Another feature of S3LACC+ is the capability of merging the SLA template and the final SLA into a single document. Partial in the monitoring/management column indicates that either full SLA monitoring is not supported by the specification or a customizable SLA monitoring technique cannot be integrated using the specification.
SLA management of cloud services includes tasks such as preparing claims in case of service violations, updating SLA parameters if requirements change, or performing an action triggered by a monitoring event. Zhang and Song [19] present an approach for life-cycle-based SLA management for web services. An SLA management platform is presented in [19] to define SLAs for web services, register SLAs, monitor them, and map provider-supplied parameters to the service user's QoS parameters.

Table 3. Comparative analysis of S3LACC+ framework with other approaches

Source  Original domain  Negotiation  Monitoring/management
WSLA Web services Yes (static) Yes
WS-Agreement Web services Yes (static) Partial
SLAng Internet/web services Yes (static) Partial
SLA* Domain independent Partial No
SLALOM IT services No Partial
Stamou et al. Cloud data services No Partial
Joshi et al. Cloud services Partial Yes
CSLA Cloud services No Yes
SLAC Cloud services Partial Yes
Kotsokalis et al. IT services Yes Yes
S3LACC+ Cloud services Yes Yes

In a recent survey, Faniyi and Bahsoon [3] present an overview of SLA management for cloud services in which it is argued that cloud SLAs are still not standardized enough to be automatically deployed. It is also concluded in [3] (based on a detailed analysis) that the majority of approaches related to SLAs have considered between one and three SLA parameters. Rak et al. [15] base their work for SLA monitoring on the mOSAIC API [14] (which offers development of interoperable, portable and provider-independent cloud applications). In [16], the mOSAIC API is used as the basis for user-centric SLA management. Maarouf et al. [13] present a model for SLA life cycle management in a more recent paper, where different phases of the SLA life cycle are discussed and modelled using UML (unified modeling language) diagrams. However, this work does not include any SLA specification itself.

8 Conclusions and Future Work


In this paper, automation and management of the complete SLA life cycle is presented, which extends an existing SLA specification (S3LACC) to multiple locations. S3LACC consists of a core structure (with a favorable relationship among SLA elements) which is easily extensible to meet customer-specific requirements and can also be easily modified for future changes, i.e. an extension (S3LACC+) is presented in this work to support SLAs for cloud services used at multiple locations. The extended S3LACC+ specification targets the complete SLA life cycle, whereas most of the existing specifications lack one or another critical phase of the SLA life cycle. An SLA specification is not very beneficial if one of the SLA life cycle phases is not supported or the complete features of cloud-service-specific SLAs are not supported. The negotiation strategy used in this work (flip-flop negotiation) enables a CSU and a CSP to conclude the negotiation process in less time, hence efficient use of cloud resources is ensured, which is an essence of cloud computing. The flip-flop negotiation strategy can be easily integrated in an SLA template using S3LACC+, and a CSU can make use of this efficient negotiation strategy without making any changes to the SLA template. Similarly, the monitoring strategy used in this work enables distributed and continuous monitoring, which can be joined with S3LACC+ easily as well. The used monitoring approach decreases the number of communications made from different service locations towards the monitoring coordinator. Also, this monitoring approach allows the monitoring coordinator to define different monitoring parameters for different locations rather than a global monitoring strategy. The future directions of this work include the extension of S3LACC+ for a CSP perspective and the design of a negotiation strategy that enables offline negotiations to reduce the number of round trips between a CSU and a CSP during the negotiation process. These extensions require special considerations with respect to security and privacy issues as well.

References
1. Andrieux, A., Czajkowski, K., Dan, A., Keahey, K., Ludwig, H., Nakata, T.,
Pruyne, J., Rofrano, J., Tuecke, S., Xu, M.: Web services agreement
specification (WS-Agreement). Open Grid Forum. 128, 216 (2007)
2. Correia, A., Amaral, V., et al.: SLALOM: a language for SLA specification and
monitoring (2011). arXiv preprint arXiv:1109.6740
3. Faniyi, F., Bahsoon, R.: A systematic review of service level management in the
cloud. ACM Comput. Surv. 48(3), 43:1–43:27 (2015)
4. Ghumman, W.A., Schill, A., Lässig, J.: The flip-flop SLA negotiation strategy
using concession extrapolation and 3D utility function. In: IEEE 2nd
International Conference on Collaboration and Internet Computing, pp. 159–168,
November 2016
5. Ghumman, W.A., Schill, A.: Structural specification for the SLAs in cloud com-
puting (S3LACC). In: 13th International Conference on the Economics of Grids,
Clouds, Systems, and Services, September 2016
6. Ghumman, W.A., Schill, A.: Continuous and distributed monitoring of cloud
SLAs using S3LACC. In: The 11th IEEE International Symposium on Service-
Oriented System Engineering, April 2017
7. Joshi, K.P., Pearce, C.: Automating cloud service level agreements using seman-
tic technologies. In: 2015 IEEE International Conference on Cloud Engineering
(IC2E), pp. 416–421. IEEE (2015)
8. Kearney, K.T., Torelli, F., Kotsokalis, C.: SLA*: an abstract syntax for service
level agreements. In: 11th IEEE/ACM International Conference on Grid
Computing, pp. 217–224, October 2010
9. Keller, A., Ludwig, H.: The WSLA framework: specifying and monitoring service
level agreements for web services. J. Netw. Syst. Manage. 11(1), 57–81 (2003)
10. Kotsokalis, C., Yahyapour, R., Rojas Gonzalez, M.A.: Modeling service level
agree- ments with binary decision diagrams. In: Baresi, L., Chi, C.-H., Suzuki, J.
(eds.) ICSOC/ServiceWave-2009. LNCS, vol. 5900, pp. 190–204. Springer,
Heidelberg (2009). doi:10.1007/978-3-642-10383-4 13
11. Kouki, Y., Ledoux, T.: CSLA: a language for improving cloud SLA
management. In: International Conference on Cloud Computing and Services
Science, CLOSER, vol. 2012, pp. 586–591 (2012)

12. Lamanna, D.D., Skene, J., Emmerich, W.: Specification language for service
level agreements. EU IST 34069 (2003)
13. Maarouf, A., Marzouk, A., Haqiq, A.: Practical modeling of the SLA life cycle in
cloud computing. In: 2015 15th International Conference on Intelligent Systems
Design and Applications (ISDA), pp. 52–58, December 2015
14. Moscato, F., Aversa, R., Martino, B.D., Forti, T.F., Munteanu, V.: An analysis
of mOSAIC ontology for cloud resources annotation. In: Federated Conference
on Computer Science and Information Systems, pp. 973–980, September 2011
15. Rak, M., Venticinque, S., M´ahr, T., Echevarria, G., Esnal, G.: Cloud
application monitoring: the mOSAIC approach. In: IEEE Third International
Conference on Cloud Computing Technology and Science (CloudCom), pp. 758–
763, November 2011
16. Rak, M., Aversa, R., Venticinque, S., Martino, B.: User centric service level
management in mOSAIC applications. In: Alexander, M., et al. (eds.) Euro-Par
2011. LNCS, vol. 7156, pp. 106–115. Springer, Heidelberg (2012). doi:10.1007/
978-3-642-29740-3 13
17. Rubinstein, A.: Perfect equilibrium in a bargaining model. Econometrica 50(1),
97–109 (1982)
18. Stamou, K., Kantere, V., Morin, J.H., Georgiou, M.: A SLA graph model for
data services. In: Proceedings of the Fifth International Workshop on Cloud
Data Management, pp. 27–34, October 2013
19. Zhang, S., Song, M.: An architecture design of life cycle based SLA
management. In: Proceedings of the 12th International Conference on Advanced
Communication Technology, pp. 1351–1355, February 2010
Queueing Theory
Performance Modeling
Using Queueing Petri Nets


Tomasz Rak( )

The Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, Rzeszów, Poland
[email protected]
http://trak.kia.prz.edu.pl

Abstract. In this paper, a performance model is used for studying distributed web systems (a J2EE web application with an Oracle back-end database). Performance evaluation is done by obtaining load test measurements. The Queueing Petri Nets (QPN) formalism supports modeling and performance analysis of distributed World Wide Web environments. The proposed distributed web systems modeling and design methodology has been applied for the evaluation of several system architectures under different external loads. The experimental analysis is based on a benchmark with a realistic workload. Furthermore, performance analysis is done to determine the system response time.

Keywords: Distributed web system models · Response time analysis · Queueing Petri Nets · Performance modeling

1 Introduction
Distributed Web systems development assumes that the systems consist of a set of distributed nodes. Groups of nodes (clusters) are organized in layers conducting predefined services. This approach makes it possible to easily scale the system. An example of a web system is a stock trading system used by professional traders. In such a system, it may be a requirement for certain positions to be bought or sold when market events occur.
Modeling and design of stock trading systems as distributed web systems develop in two ways. On the one hand, formal models which can be used to analyze performance parameters are proposed. To describe such systems, formal methods like Queueing Nets (QN) and Petri Nets (PN) [7, 10, 15, 16] are used. For example, in [10] a closed queueing model of the SPECjAppServer2002 benchmark, comprising client, application server cluster, database server and production line stations, is described. Sometimes elements of control theory are used to manage the movement of packages in web servers [19]. Experiments are the second way [26]. Applying both experiments and models greatly influences the validity of the systems being developed.
Our approach described in this paper may be treated as an extension of selected solutions summed up in [13, 14], where we proposed QPN [2] models for Internet systems. The final QPN-based model can be executed and used for performance prediction of the modeled system. In our solution we propose QPN models for one kind of distributed web system with all types of quotes [14]. The models have been used as a background for developing a programming tool which is able to map the timed behavior of QN by means of simulation. Subsequently we developed our own method of modeling and analysis of distributed Web systems. The well-known software toolkit Queueing Petri net Modeling Environment (QPME) [9] can be naturally used for the simulation and performance analysis of our models.
The remaining work is organized as follows. Section 2 reviews related work. Section 3 presents the distributed Web system architecture and describes the modeling approach. Section 4 presents the performance analysis results. The final section contains concluding remarks.¹
¹ We assume that the reader is familiar with the Queueing Petri Net formalism [2] and the performance analysis tool [9].
© Springer International Publishing AG 2017
P. Gaj et al. (Eds.): CN 2017, CCIS 718, pp. 321–335, 2017. DOI: 10.1007/978-3-319-59767-6_26

2 Related Work
In this section, we review some related work in the area of web system performance modeling. Several approaches have been proposed for performance analysis. Existing modeling approaches are mostly based on stochastic models such as QN (classical product-form, extended or layered) or stochastic PN. Building such models requires experience in stochastic modeling and analysis. The research community has proposed high-level network modeling approaches that support automatic generation of low-level predictive models. Most existing model-based approaches are based either on black-box statistical models [3] or on highly detailed protocol-level simulation models [23]. Several recent surveys review performance modeling tools, with a focus on model-based prediction [1], evaluation of component-based systems [11], and the analysis of software architectures. An outlook into the future and directions for future research in software performance engineering are given in [24]. The related work can be divided into publications based on the analysis of QN and of PN.

2.1 Queueing Nets Models


Layered Queueing Network (LQN) performance models have been used for studying software and web systems [20]. Cao et al. [6] propose a queueing model of a web server. Kattepur and Nambiar [8] have proposed a theoretical model of the performance of multi-tiered applications and use queueing networks and Mean Value Analysis models. Similarly, Tiwari and Mynampati compare LQN with QPN, using the LQNS and HiQPN tools to model the SPECjAppServer2001 benchmark application [22].

2.2 Petri Nets Models


PN are a first choice for system modeling because they are applicable for the evaluation and prediction of web systems both in terms of theoretical support and quantitative analysis. Many efficient simulation techniques for stochastic PN are available. Regarding stochastic PN such as QPNs, so far they have mainly been used to model specific components at a high level of detail [25]. Additionally, in other work [18], nodes appear as a part of a larger modeling landscape and are typically modeled as a black box. Traditional methods based on mathematical derivation and theoretical explanation fail to cope with the explosive growth of data quantity and calculation amount present in the analytical process of a formalized model of web systems. Some authors use such nets for modeling:
– databases [17],
– web architectures [4],
– grid environments [12],
– cloud systems [5],
– virtualized environments [21].
Performance models are an abstraction of a combined hardware and software system describing its performance-relevant structure and behavior. To substantiate this, here we demonstrate the use of the popular QPN modeling technique. QPNs are very popular and helpful for qualitative and quantitative analysis. In addition to quantitative performance modeling of resources, QPN can also depict the dependency relationships among multilayered systems, considering the characteristics of web systems involving diverse application services, complex service behavior and superimposed hierarchical structures. The modeling approach presented in this paper differs from that of previous work because we model different types of requests. Moreover, our QPN models have a more intuitive structure that maps the infrastructure to the QPN model. This makes the model easier to comprehend by system developers. In the current work we utilize the versatility of QPN to study a web system. System analysis is often needed with respect to both qualitative and quantitative aspects. From a practical point of view, the QPN paradigm provides a number of benefits over conventional modeling paradigms. Using QPNs one can integrate hardware and software aspects of system behavior into the same model. QPN have greater expressive power than QN (quantitative analysis) and PN (qualitative analysis). QNs have a queue and a scheduling discipline and are suitable for modeling competition for equipment (hardware contention). PNs have tokens representing the tasks and are suitable for modeling software. They easily lend themselves to modeling blocking and synchronization. QPNs have the advantages of QNs (e.g., evaluation of the system performance, the network efficiency) and PNs (e.g., logical assessment of the system correctness).

3 Distributed Web System


3.1 Distributed Web System Architecture

Distributed Internet system architecture is made up of several layers:



– The first one presents information (the system offer) to clients in the form of web pages and contains clusters.
– The second one manages transactions (clients' requests) and provides the clustering functionality that allows load balancing.
– The third one controls the transactions, as a single element of this layer or multiple servers with database replication.
– The fourth one is a data storage system.
In our approach the presented architecture has been simplified to two layers:
– The front-end layer is based on the presentation and processing mechanisms.
– The back-end layer keeps the system data.
An architecture composed of these layers is used for e-business systems. The presented double-layer system architecture realizes distributed Web system functions. Access to the system is realized through transactions. A clustering mechanism is used in the front-end layer. These simplifications have no influence on the modeling process, which has been shown repeatedly, e.g. [10].
The characteristic feature of many distributed Web systems is a large number of clients using the Internet services (e.g. a stock trading system) at the same time. The distributed Web system clients have different response time requirements. In the case of the described class of systems, clients are often focused on one event related to the same system offer (the same database resources). Based on these features we used a stock trading system as a benchmark with a two-layered architecture (a cluster in a front-end layer and one database in a back-end layer).

3.2 Modeling of Distributed Web System


Typically, distributed Web systems are composed of layers, where each layer consists of a set of servers - a server cluster. The layers are dedicated to proper tasks and exchange requests with each other. To explain our approach to distributed Web system modeling, a typical structure will be modeled and simulated. The first layer (front-end) is responsible for the presentation and processing of client requests. Nodes of this layer are modeled by Processor Sharing queues. The next layer (back-end) implements system data handling. A node of this layer is modeled by using the First In First Out queue. Requests are sent to the system and then can be processed in both layers. The successfully processed requests are sent back to the client.
Consequently, an executable (in a simulation sense) QPN model is obtained. Tokens generated by the arrival process are transferred in sequence by the models of a front-end layer and of a back-end layer. QPN extend coloured stochastic PN by incorporating queues and scheduling strategies into places, forming queueing places. This very powerful modeling formalism has the synchronization capabilities of PN while also being capable of modeling queueing behaviors. The queue mean service time, the service time probability distribution function and the number of servicing units defined for each queueing system in the model are the main parameters of the modeled system. In the demonstrated model it has been assumed that queues belonging to a front-end layer have identical parameters. A QPN consists of a set of connected queueing places. Each queueing place is described by an arrival process, a waiting room, a service process and additionally a depository. We used several queueing systems (e.g. –/M/PS/∞, –/M/FIFO/∞) most frequently used to represent properties of distributed Web system components. For example, –/M/1/PS/∞ means: exponential service times, a single server, the Processor Sharing service discipline and an unlimited number of arrivals in the system. In the QPME software tool, it is possible to construct QPN with queueing systems having the Processor Sharing and First In First Out disciplines. As mentioned above, the main application of the software tool presented in the paper is the modeling and evaluation of distributed Web systems. To effectively model the Internet requests from the clients, a separate token colour (type) has been used.

4 Response Time Analysis


In our solution we propose a very popular formal method - QPN [2]. This method is based on QN and PN. Queueing theory deals with modeling and optimizing different types of service units. A QN usually consists of a set of connected queueing systems. The various queueing systems represent computer components. QN are very popular for quantitative analysis. To analyze any queueing system it is necessary to determine: the arrival process, the service distribution, the service discipline and the waiting room (scheduling strategies). PN are used to specify and analyze the concurrency in systems. The system dynamics is described by rules of token flow. The net scheme can be subjected to a formal analysis in order to carry out a qualitative analysis based on determining its logical validity. PN are referred to as the connection between the engineering description and the theoretical approach. PN are well-known models used to describe and analyze service units. PN cannot be used for a quantitative analysis due to the lack of time aspects.
The studies focus on incoming load measurements, e.g. measuring the response time, or on the presentation of an overall modeling plan. Queueing Nets - quantitative analysis - have a queue and a scheduling discipline and are suitable for modeling competition for equipment. PN - qualitative analysis - have tokens representing the tasks and are suitable for modeling software. QPN have the advantages of QN and PN. The QPN formalism is a very popular formal method of functional and performance modeling (performance analysis). These nets provide sufficient power to express the modeling and analysis of complex on-line systems. The choice of QPN was caused by the possibility of obtaining information of a different character. The main idea of QPN is to add queueing and timing aspects to the net places. A QN consists of a collection of service stations and clients. The service stations (queues) represent system resources, while the clients represent users or transactions. A service station is composed of one or more servers and a waiting area. Tokens enter the queueing place through the firing of input transitions, as in other PN. When a request arrives at a service station, it is immediately serviced if a free server is available. Otherwise, the request has to wait in the waiting area. Different scheduling strategies can be used to serve the requests waiting in the waiting area. Places (in QPN) are of two types: ordinary and queued. A queued place (resource or state) is composed of a queue (service station) and a depository for tokens that have completed their service at the queue. After being served by the service station, (coloured) tokens are placed in the depository. Input transitions are fired and then tokens are inserted into a queueing place according to the queue's scheduling strategy. Queueing places can have variable scheduling strategies and service distributions (timed queueing places). Tokens in the queue are not available for output transitions, while tokens in a depository are available to all output transitions of the queued place. Immediate queueing places impose a scheduling discipline on arriving tokens without a delay [2].
QPN have recently been applied in the performance evaluation of component-based distributed systems, databases and grid environments, because they are more expressive for representing simultaneous resource possession and blocking. Here, QPN models are used to predict the distributed Web system performance. Among the many Performance Engineering parameters, the response time was chosen for analysis.

4.1 Experiments Parameters


First, we present the results of our experimental analysis. The goal is to check - among others - the service demand parameter for front-end and back-end nodes. The application servers, considered as a front-end tier/layer, are responsible for the method execution. All sensitive data is stored in a database system (back-end tier/layer). When a method has to retrieve or update data, the application server makes the corresponding calls to the database system. Deployment details are as follows: a Gbit LAN network and six nodes (HP ProLiant DL180 G6). The software environment consists of: 64-bit Linux operating systems, a workload generator, the Apache Tomcat Connector (as a load balancer), GlassFish (as an application server - the first one is the Domain Administration Server) and Oracle (as a database server). Distributed Web systems are usually built on middleware platforms such as J2EE. We use the DayTrader [27] performance benchmark, which is available as an open source application. Overall, the DayTrader application is primarily used for performance research on a wide range of software components and platforms. DayTrader is a suite of workloads that allows performance analysis of a J2EE application server. DayTrader is a benchmark application built around the paradigm of an online stock trading system. It drives a trade scenario that allows one to monitor the stock portfolio, inquire about stock quotes, and buy or sell stock. The load generator is implemented as a multi-threaded Java application connected to the DayTrader benchmark (Fig. 1). By client business transactions we mean the stock-broker operations: Buy Quote, Sell Quote, Update Profile, Show Quote, Get Home, Get Portfolio, Show Account and Login/Logout. Each business transaction emulates a specific class (type) of clients [14]. Some of the most important requests are Buy Quote and Sell Quote (Fig. 2), but now we use request classes (types) in our simulations.

Fig. 1. Workload generator schema

Fig. 2. Real response time for: (a) 5 requests per second workload, (b) 10 requests per second workload, (c) 15 requests per second workload, (d) 20 requests per second workload

Experiments (Table 1, Test 1 - one node in the front-end (FE) and one node in the back-end (BE) layer) have shown that the mean number of requests per second for the front-end layer is about 1300. Respectively, the mean measured number of requests per second for the back-end layer is about 7500. We can also see that the delay in request processing is mainly caused by the waiting time for service in the BE node in all cases (Figs. 3(a) and 4(a)), but

Table 1. Parameters of cluster experiments

Server/Parameter                        Test 1             Test 2                  Test 3
Hardware
  Client                                10.10.10.1         10.10.10.1              10.10.10.1
  Load balancer                         No                 10.10.10.3              10.10.10.3
  GlassFish application server nodes^a  10.10.10.3         10.10.10.4, 10.10.10.5  10.10.10.4, 10.10.10.5, 10.10.10.6
  Oracle database node                  10.10.10.2         10.10.10.2              10.10.10.2
Software
  Application server threads pool       30                 2 × 30                  3 × 30
  Database connections pool             40                 2 × 40                  3 × 40
Client workload
  Number of requests per second         15                 15                      15
  Number of clients^b                   30, 120, 210, 300  30, 120, 210, 300       30, 120, 210, 300
Experiments
  Experiment time [s]                   300                300                     300
^a With the domain administration server (10.10.10.3) as a specially designated GlassFish Server instance that hosts administrative applications.
^b Four subtests in all cases.

the main problem is the performance of the system response time (System - one node in the FE layer and one node in the BE layer).
Starting a server cluster in the front-end layer requires a mechanism that allows for an equable distribution of load. It must also be a gateway that transfers requests and responses between a user and an application. In such a scenario, only the gateway is visible from the outside and - on the basis of the request - it determines which part of the system (application server), and how, will be used to perform the request. A built-in load balancer is not available in the free version of the GlassFish server, so the Apache Tomcat Connector has been used as the load balancer. The cluster experiments (Table 1, Test 2 - two or Test 3 - three nodes in the FE and one node in the BE layer) have shown that the mean number of requests per second for the front-end layer is about 2400. The mean measured number of requests per second for the back-end layer is the same as earlier. We can also see that the delay in request processing is mainly caused by the waiting time for service in the BE node in all cases (Figs. 3(b), (c) and 4(b), (c)), but the main problem is still the performance of the system response time (System - two or three nodes in the FE layer and one node in the BE layer). We use the experimental results in our simulations.

Fig. 3. Real response time for one request class (Buy Quote): (a) 1 node in a front-end layer, (b) 2 nodes in a front-end layer, (c) 3 nodes in a front-end layer

Fig. 4. Real response time for all request classes: (a) 1 node in a front-end layer, (b) 2 nodes in a front-end layer, (c) 3 nodes in a front-end layer

4.2 Queueing Petri Net Models


Multiple front-end nodes and one back-end node form the main configuration scenario. QPN models (Fig. 5) are used to predict the system response time. We use the QPME tool [9]. QPME is an open-source tool for stochastic modeling and analysis based on the QPN modeling formalism, used in many works [9, 10].
The software and client workload parameters are the same as in the experimental environment. Client think times are modeled by the Infinite Server scheduling strategy (CLIENTS place). Servers of the front-end layer are modeled using Processor Sharing queueing systems (FE CPU places). The back-end server is modeled by a First In First Out queue (BE I/O place). The FE and BE places are used to hold incoming requests while they await application server threads and database server connections, respectively. Application server threads and database server connections are modeled by the THREADS and CONNECTIONS places, respectively (Fig. 5). The process of request arrivals to the system is modeled by an exponential distribution whose λ parameter (client think time) corresponds to the number of client requests per second. Service in all queueing places is modeled by an exponential distribution. The service demands in the layers are based on the experimental results in Sect. 4.1:
– d_FE CPU = 0.714 [ms],
– d_BE I/O = 0.133 [ms].
The initial marking of the places corresponds to the input parameters of the cluster experiment:
– number of clients (number of tokens in the CLIENTS place),
– application server threads pool (number of tokens in the THREADS place),
– database server connections pool (number of tokens in the CONNECTIONS place).
In these models we have many types² of tokens:
– quotes (Buy Quote, Sell Quote, Update Profile, Show Quote, Get Home, Get Portfolio, Show Account and Login/Logout),
– application server threads,
– connections to the database server.
² A colour specifies a type of tokens that can reside in a place.
The total response time is the sum of all individual response times of the queues and depositories in the simulation model, without the client queue response time (client think time).

4.3 Simulation Results

Many simulations were performed for various input parameters:


– 30 threads for one front-end node, 60 threads for two front-end nodes, 90 threads for three front-end nodes,
– 40 connections for one front-end node, 80 connections for two front-end nodes, 120 connections for three front-end nodes,
– client think time for all types of tokens equal to 66.67 [ms],
– simulation time 300 [s].

Fig. 5. Model of distributed web system with front-end cluster (selected example)
The number of clients was increased in accordance with these values (CLIENTS place). We used scenarios in which we have nine request classes - transactions (Buy Quote, Sell Quote, Update Profile, Show Quote, Get Home, Get Portfolio, Show Account and Login/Logout). The scenarios involve the response time of the entire system (Sys), the response time of the front-end layer plus the back-end layer (FE+BE) and the response time of the back-end layer (BE). The number of application server nodes is 1, 2 and 3. The QPN model was used to predict the performance of the system for the scenarios mentioned above and it was developed using QPME 2.0.³
³ Queueing Petri net Modeling Environment 2.0 does not support timed transitions, so a timed transition was approximated by a serial network consisting of an immediate transition, a timed queueing place and a second immediate transition (black rectangles represent immediate transitions) [9].
We investigated the behavior of the system as the workload intensity increases. As a result, the response time of transactions improves for the cases with a larger number of front-end nodes. Increasing the number of nodes results in a simultaneously increasing number of application server threads and connections to the database.
As we can see (Fig. 6), the overall response time decreases as the number of nodes increases (15 requests per second). The response time of the one-front-end-node architecture is the largest in all cases. The difference in response time between 2 and 3 nodes is much smaller. When more nodes are added to the front-end layer, an analysis of their impact on the other elements of the system should be performed first.

Fig. 6. Mean response time of simulation results (system, front-end and back-end layer, back-end layer) for different numbers of nodes and clients (15 requests per second workload)
Overall system response time increases with increasing workload, even
with a larger number of nodes.
The convergence of the simulation results with the results from the real system confirms the correctness of the modeling method. The validation results show that the model is able to predict the performance with a relative error of about 20% for different numbers of nodes in the front-end layer (Table 2).

Table 2. Modeling response time error for scenario with 300 clients

Number of nodes Model [ms] Measured [ms] Error [%]


1 241.22 211.92 13.8
2 127.76 105.19 21.4
3 69.22 57.50 20.3
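The error column is consistent with a relative error taken against the measured value, e.g. for the one-node scenario:

\delta = \frac{|T_{\mathrm{model}} - T_{\mathrm{measured}}|}{T_{\mathrm{measured}}} \cdot 100\% = \frac{|241.22 - 211.92|}{211.92} \cdot 100\% \approx 13.8\%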

5 Conclusions
We cannot always add new devices to improve performance, because the initial cost and maintenance would become too large. Also, not every system can or should be virtualized or moved to the cloud. Because the overall system capacity is unknown, we propose a combination of benchmarking and modeling. It is still an open issue how to obtain an appropriate distributed Web system. Our earlier works propose Performance Engineering frameworks [15, 16] to evaluate performance during the different phases of the system life cycle. The demonstrated research results are an attempt to apply the QPN formalism to the development of a software tool that can support distributed Web system design. The idea of using QPN was proposed previously in [10, 13, 14]. In the presented approach an alternative implementation of QPN has been proposed. The rules of modeling and analysis of distributed Web systems applying the described net structures were introduced. Our present approach predicts the response time for a distributed Web system, and the benchmark used in our work has a realistic workload. The modeling approach presented in this paper differs from our previous works because all types of tokens (request classes) were not used earlier. This paper deals with the problem of calculating performance values, like the response time, in a distributed Web system environment.
At present, we have demonstrated the potential of modeling distributed Web systems using QPN, understood as a monolithic modeling technique used to assess the performance. We developed a framework that helps to identify performance requirements. Next we analyzed the response time characteristics for several different workloads and configuration scenarios. The study demonstrates the modeling power and shows how the discussed models can be used to represent system behaviour.

References
1. Balsamo, S., Di Marco, A., Inverardi, P., Simeoni, M.: Model-based performance
prediction in software development: a survey. IEEE Trans. Softw. Eng. 30(5),
295– 310 (2004)
2. Bause, F.: Queueing Petri Nets - A Formalism for the Combined Qualitative
and Quantitative Analysis of Systems. IEEE Press, New York (1993)
3. Becker, S., Koziolek, H., Reussner, R.: The palladio component model for model-
driven performance prediction. J. Syst. Softw. 82(1), 3–22 (2009)
4. Brosig, F., Meier, P., Becker, S., Koziolek, A., Koziolek, H., Kounev, S.:
Quantita- tive evaluation of model-driven performance analysis and simulation of
component- based architectures. IEEE Trans. Softw. Eng. 41(2), 157–175 (2015)
5. Cao, Y., Lu, H., Shi, X., Duan, P.: Evaluation model of the cloud systems based
on queuing petri net. In: Wang, G., Zomaya, A., Perez, G.M., Li, K. (eds.)
ICA3PP 2015. LNCS, vol. 9532, pp. 413–423. Springer, Cham (2015).
doi:10.1007/
978-3-319-27161-3 37
6. Cao, J., Andersson, M., Nyberg, C., Kihl, M.: Web server performance modeling
using an M/G/1/K*PS queue. In: International Conference on Telecommunica-
tions, vol. 2 (2003)
7. Chen, X., Ho, C.P., Osman, R., Harrison, P.G., Knottenbelt, W.J.:
Understanding, modelling and improving the performance of web applications in
multi-core vir- tualised environments. In: ACM/SPEC International Conference
on Performance Engineering, pp. 197–207 (2014)
8. Kattepur, A., Nambiar, M.: Performance modeling of multi-tiered web
applications with varying service demands. In: IEEE International Parallel and
Distributed Processing Symposium Workshop, pp. 415–424 (2015)

9. Kounev, S., Spinner, S., Meier, P.: QPME 2.0 - a tool for stochastic modeling
and analysis using queueing petri nets. In: Sachs, K., Petrov, I., Guerrero, P.
(eds.) From Active Data Management to Event-Based Systems and More.
LNCS, vol. 6462, pp. 293–311. Springer, Heidelberg (2010). doi:10.1007/978-3-
642-17226-7 18
10. Kounev, S., Rathfelder, C., Klatt, B.: Modeling of event-based communication
in component-based architectures: state-of-the-art and future directions. J.
Electr. Notes Theor. Comput. Sci. 295, 3–9 (2013)
11. Koziolek, H.: Performance evaluation of component-based software systems: a
sur- vey. Perform. Eval. 67(8), 634–658 (2010)
12. Nou, R., Kounev, S., Julia, F., Torres, J.: Autonomic QoS control in enterprise
grid environments using online simulation. J. Syst. Softw. 82(3), 486–502 (2009)
13. Rak, T.: Response time analysis of distributed web systems using QPNs. Math.
Prob. Eng. 2015, Article ID 490835, 1–10 (2015)
14. Rak, T.: Performance analysis of distributed internet system models using QPN
simulation. IEEE Ann. Comput. Sci. Inf. Syst. 2, 769–774 (2014)
15. Rak, T., Werewka, J.: Performance analysis of interactive internet systems for a
class of systems with dynamically changing offers. In: Szmuc, T., Szpyrka, M.,
Zendulka, J. (eds.) CEE-SET 2009. LNCS, vol. 7054, pp. 109–123. Springer,
Hei- delberg (2012). doi:10.1007/978-3-642-28038-2 9
16. Rak, T., Samolej, S.: Distributed internet systems modeling using TCPNs. In:
IEEE International Multiconference on Computer Science and Information
Tech- nology, vol. 1 and 2, pp. 559–566 (2008)
17. Rygielski, P., Kounev, S.: Data Center network throughput analysis using
queueing petri nets. In: IEEE International Conference on Distributed
Computing Systems Workshops, pp. 100–105 (2014)
18. Rygielski, P., Kounev, S., Zschaler, S.: Model-based throughput prediction in
data center networks. In: IEEE International Workshop on Measurements and
Network- ing, pp. 167–172 (2013)
19. Samolej, S., Szmuc, T.: HTCPNs–based modelling and evaluation of dynamic
com- puter cluster reconfiguration. In: Szmuc, T., Szpyrka, M., Zendulka, J.
(eds.) CEE- SET 2009. LNCS, vol. 7054, pp. 97–108. Springer, Heidelberg
(2012). doi:10.1007/
978-3-642-28038-2 8
20. Shoaib, Y., Das, O.: Web application performance modeling using layered
queueing networks. Electr. Notes Theor. Comput. Sci. 275, 123–142 (2011)
21. Spinner, S., Walter, J., Kounev, S.: A reference architecture for online
performance model extraction in virtualized environments. In: International
Conference on Per- formance Engineering, pp. 57–62 (2016)
22. Tiwari, N., Mynampati, P.: Experiences of using LQN and QPN tools for perfor-
mance modeling of a J2EE application. Comput. Meas. Group Conf. 1, 537–548
(2006)
23. de Wet, N., Kritzinger, P.: Using UML models for the performance analysis of
network systems. Comput. Netw. 49(5), 627–642 (2005)
24. Woodside, M., Franks, G., Petriu, C.D.: The future of software performance
engi- neering. In: Future of Software Engineering, pp. 171–187 (2007)
25. Zaitsev, D.A., Shmeleva, T.R.: A parametric colored petri net model of a
switched network. Netw. Syst. Sci. 4, 65–76 (2011)

26. Zatwarnicki, K.: Operation of cluster-based web system guaranteeing web page
response time. In: Bădică, C., Nguyen, N.T., Brezovan, M. (eds.) ICCCI 2013. LNCS, vol. 8083, pp. 477–486. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40495-5_48
27. DayTrader. https://geronimo.apache.org/GMOxDOC22/daytrader-a-more-complex-application.html
Self-similarity Traffic and AQM Mechanism
Based on Non-integer Order PI^αD^β Controller

Adam Domański¹(✉), Joanna Domańska², Tadeusz Czachórski², and Jerzy Klamka²
¹ Institute of Informatics, Silesian Technical University, Akademicka 16, 44-100 Gliwice, Poland
[email protected]
² Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Baltycka 5, 44-100 Gliwice, Poland
{joanna,tadek,jerzy.klamka}@iitis.gliwice.pl

Abstract. In this paper the performance of a fractional-order PID controller as an AQM mechanism and the impact of traffic self-similarity on network utilization are investigated with the use of discrete event simulation models. The research shows the influence of the selection of the PID parameters and of the degree of traffic self-similarity on queue behavior. During the tests we analyzed the queue length, the number of rejected packets and the waiting times in queues. In particular, the paper uses fractional Gaussian noise as a self-similar traffic source. The quantitative analysis is based on simulation.

1 Introduction
Most AQM mechanisms proposed by the IETF to control network congestion are based on preventive packet dropping. For the best known active mechanisms the number of discarded packets grows with the increase in queue occupancy. The basic active queue management algorithm is the Random Early Detection (RED) algorithm. It was originally proposed in 1993 by Sally Floyd and Van Jacobson [1]. Since that time a number of studies on how to improve the basic algorithm have been proposed. We have also proposed and evaluated a few variants [2–7].
In 2001 the use of the PI controller as an AQM mechanism was proposed by C.V. Hollot, V. Misra and D. Towsley [8]. Based on the first implementation, a number of PI controllers have been proposed later [9–11].
In recent years fractional order calculus has become very popular. The articles [12–14] show that non-integer order controllers may have better performance than classic integer order ones. The first application of the fractional order PI controller as an AQM policy in a fluid flow model of a TCP connection was presented in [15]. The detailed influence of the fractional order PI controller on queue behavior was presented in article [16].
Measurements and statistical analysis (performed already in the 90s) of packet network traffic show that this traffic displays a complex statistical nature.

© Springer International Publishing AG 2017
P. Gaj et al. (Eds.): CN 2017, CCIS 718, pp. 336–350, 2017. DOI: 10.1007/978-3-319-59767-6_27

It is related to such statistical phenomena as self-similarity, long-range dependence and burstiness [17–20].
Self-similarity of a process means that a change of time scale does not influence the statistical characteristics of the process. It results in long-range autocorrelation and makes possible the occurrence of very long periods of high (or low) traffic intensity. These features have a great impact on network performance [21]. They enlarge the mean queue lengths at buffers and increase the probability of packet losses, reducing this way the quality of services provided by a network [22].
As a consequence of this fact, it is necessary to propose new, or to adapt known, types of stochastic processes when modeling these negative phenomena in network traffic. Several models have been introduced for the purpose of modeling self-similar processes in the network traffic area. These traffic models use fractional Brownian Motion [23], chaotic maps [24], fractional Autoregressive Integrated Moving Average (fARIMA) [25], wavelets and multifractals, and processes based on Markov chains: SSMP (Special Semi-Markov Process) [26], MMPP (Markov-Modulated Poisson Process) [27, 28], HMM (Hidden Markov Model) [29].
The main purpose of the paper is to present simulation results for an AQM mechanism in which fractional discrete calculus is used. Section 2 presents the theoretical basis for the PI^αD^β controller used next in the simulation. Section 3 briefly describes the self-similar traffic used in this article and presents the obtained results.

2 An AQM Mechanism Based on PI^αD^β Controller


A proportional-integral-derivative controller (PID controller) is a traditional mechanism used in feedback control systems. The article [12] indicates that the introduction of non-integer order controllers may improve the closed loop control quality. Therefore we propose here to use the PI^αD^β controller (a PID controller with non-integer integral and derivative orders) instead of the RED mechanism to determine the probability of packet drop. Equation (1) is based on our proposition discussed in [16] for the PI^α controller and is extended here to the case of PI^αD^β.
This probability is calculated in the following way:

p = \max\{0, -(K_P e_k + K_I \Delta^{\alpha} e_k + K_D \Delta^{\beta} e_k)\}   (1)


where KP , KI, KD are tuning parameters, ek is the error in current slot ek
=
q − qd, q - actual queue size, qd - desired queue size and ∆αek is defined as
follows:
k
△αek = Σ
(−1)j α ek−1 (2)
j
j=0
338 A. Doman´ski et
al.
where α ∈ R is generally a not-integer fractional order, ek is a
differentiated discrete function and generalized Newton symbol α
is
defined as follows: j

α ⎧
1 for j = 0
= ⎨ α(α − 1)(α − 2)..(α − j + 1) (3)
j ⎩ for j = 1, 2, . . .
j!
This definition unifies the definition of derivative and integral to one
differin- tegral definition. We have the fractional integral of the considered
function ek for α < 0. If the parameter α is positive, we obtain in the same
way a frac- tional derivative and, to distinguish, we denote this parameter
as β. If α = 0 the operation (2) does not influence the function ek.
Figure 1 presents a comparison of the increase of the packet dropping probability in the PI^α and PD^β controllers as a function of the queue length, which grows due to arrivals of packets. Naturally, the response depends on the choice of parameters. As can be seen, the integral order affects the time of the controller reaction (below a certain threshold there is no packet dropping). The derivative order influences the increase of the packet dropping probability.

Fig. 1. Packet dropping probability in PI^α controller (the influence of the integral order α, K_P = 0.00115, K_I = 0.0011) (left), and in PD^β controller (the influence of the derivative order β, K_P = 0.00115, K_D = 0.01) (right)

3 PI^αD^β Controller Under Self-similar Traffic


In this article we use fractional Gaussian noise as an example of an exactly self-similar traffic source. Fractional Gaussian noise (fGn) has been proposed as a model [30] for the long-range dependence postulated to occur in a variety of hydrological and geophysical time series. Nowadays, fGn is one of the most commonly used self-similar processes in network performance evaluation. The fGn process is a stationary Gaussian process that is exactly self-similar [31]. The Hurst parameter H characterizes a process in terms of its degree of self-similarity. The degree of self-similarity increases with the increase of H [32]. A Hurst value smaller than or equal to 0.5 means the lack of long-range dependence.

We use a fast algorithm for generating approximate sample paths of an fGn process, introduced in [33]. We have generated sample traces with the Hurst parameter in the range of 0.5 to 0.90. After each trace generation, the Hurst parameter was estimated. The simulations were done using the SimPy Python simulation package.
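The paper does not state which Hurst estimator was used; the aggregated-variance method below is one common, simple choice and is shown purely as an illustration of the estimation step.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(2, 4, 8, 16, 32, 64, 128)):
    """Estimate H from the slope of log Var(X^(m)) versus log m.

    For a self-similar process Var(X^(m)) ~ m^(2H - 2), so a least-squares
    fit of the log-variance on log m has slope 2H - 2.
    """
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n = len(x) // m
        if n < 2:
            continue
        aggregated = x[:n * m].reshape(n, m).mean(axis=1)   # block means X^(m)
        log_m.append(np.log(m))
        log_v.append(np.log(aggregated.var()))
    slope, _ = np.polyfit(log_m, log_v, 1)
    return 1.0 + slope / 2.0
```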
During the tests we analyzed the following parameters of the transmission with AQM: the queue length, the queue waiting times and the number of rejected packets. The service time represents the time of packet treatment and dispatching. The considered input traffic intensity was λ = 0.5, independently of the Hurst parameter. The distribution of the service time was also geometric; its parameter changed during the tests. A high traffic load was considered for the parameter µ = 0.25, an average traffic load was obtained for µ = 0.5, and a small network traffic was considered for the parameter µ = 0.75.
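A minimal SimPy sketch of this queueing setup is given below. It reproduces only the structure of the experiment: a finite FIFO buffer, geometric service times with parameter µ, and tail drop when the buffer is full. For simplicity it feeds the queue with memoryless geometric inter-arrivals (the H = 0.5 case); in the paper the arrivals come from the generated fGn traces, and all constants and names here are illustrative.

```python
import numpy as np
import simpy

LAM, MU, CAPACITY, SIM_TIME = 0.5, 0.25, 300, 300_000   # time unit: one slot

def run(seed=1):
    rng = np.random.default_rng(seed)
    env = simpy.Environment()
    buffer = simpy.Store(env, capacity=CAPACITY)          # waiting room of the FIFO queue
    stats = {"waits": [], "dropped": 0, "arrived": 0}

    def server():
        while True:
            arrival_time = yield buffer.get()
            stats["waits"].append(env.now - arrival_time)
            yield env.timeout(rng.geometric(MU))          # geometric service time

    def source():
        while True:
            yield env.timeout(rng.geometric(LAM))         # geometric inter-arrival time
            stats["arrived"] += 1
            if len(buffer.items) < CAPACITY:
                buffer.put(env.now)                       # space was checked, put succeeds now
            else:
                stats["dropped"] += 1                     # tail drop on a full buffer

    env.process(server())
    env.process(source())
    env.run(until=SIM_TIME)
    return stats
```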

Table 1. FIFO queue

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 299.099 119.380 249520 49.90%
0.25 0.70 298.118 119.158 249879 49.97%
0.25 0.80 296.878 118.883 250354 50.07%
0.25 0.90 248.553 102.061 256587 51.32%
0.50 0.50 163.7547 32.7147 889 0.17%
0.50 0.70 145.8734 29.8820 13342 2.66%
0.50 0.80 141.3440 29.5832 23828 4.76%
0.50 0.90 133.8558 32.5300 89659 17.93%
0.75 0.50 1.2930 0.1586 0
0.75 0.70 3.0506 0.5101 0
0.75 0.80 6.9882 1.2976 0
0.75 0.90 55.9574 11.5942 21484 4.29%

In order to better demonstrate the influence of the degree of self-similarity on queue behavior, the first experiment focused on the FIFO queue. Figure 2 presents the distribution of the queue length. This figure clearly shows the dependence of the queue occupancy on the degree of traffic self-similarity. The figure shows three situations: the most overloaded network node (ρ = λ/µ = 2), a medium overloaded situation (ρ = 1) and an almost empty buffer (ρ = 2/3). The detailed results obtained during the simulation are presented in Table 1. For an overloaded buffer (µ = 0.25 and µ = 0.50) the number of dropped packets increases with the increasing degree of traffic self-similarity. This effect becomes more evident as the congestion decreases. In the case of an unloaded buffer (µ = 0.75) packet losses occur only in the case of traffic with a high degree of self-similarity (Hurst parameter H = 0.9).

Fig. 2. The influence of the degree of traffic self-similarity on the queue distribution, FIFO queue, queue size = 300, λ = 0.5, µ = 0.25 (left), µ = 0.5 (right), µ = 0.75 (bottom)

The presented results show how models that do not consider self-similar traffic may underestimate the queue occupancy and packet loss in routers.
In the first phase of the research we considered the influence of the PI^α controller on queue behavior. During the simulation the controller parameters were set as follows: KP = 0.00115, KI = 0.0011. The integral order α was changed and received the following values: −0.8, −1.0 and −1.2. For the integral order α = −1 the controller becomes the standard PI control loop feedback mechanism. Tables 2, 3 and 4 present the obtained results. The queue distributions are presented in Figs. 3 and 4 (the queue distribution for the controller with parameter α = −0.8 is similar to the distribution shown in Fig. 3). The controller desired point was set at 100 packets. It should be noted that regardless of the integral order the controller behaved properly.
These studies showed a very interesting controller behavior. In the case of the overloaded FIFO queue, for traffic with a high degree of self-similarity (H = 0.9) the mean queue length decreases rapidly compared to less self-similar traffic (see Table 1). This phenomenon also occurs in the case of standard AQM mechanisms [34]. In the case of PI^α the occurrence of this phenomenon depends on the integral term and becomes less noticeable with the decrease in α. Comparing the mean queue lengths for H = 0.9 and H = 0.8, it can be stated that for α = −1.2 the mean queue length decreases by 19% for ρ = 2 and by 3% for ρ = 1. For α = −1.0 the mean queue length decreases by 8% for ρ = 2 and by 2% for ρ = 1, whereas for α = −0.8 the mean queue length increases by 6% for ρ = 2 and by 2% for ρ = 1. On the other hand, the analysis of the number of discarded packets shows that for traffic with a high degree of self-similarity (H = 0.9 and H = 0.8) the number of dropped packets decreases as the integral order grows (for the standard AQM queue the situation is exactly opposite).
Interesting results were also obtained for the low traffic intensity. The mean queue length grows as the integral order decreases. Figure 1 explains these phenomena. The controller response to the increasing queue depends on the previous moments of the queue. The controller reaction is delayed as the integral order increases.

Table 2. PIα queue, KP = 0.00115, KI = 0.0011, α = −0.8

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 105.2024 42.0998 250646 50.13%
0.25 0.70 112.1730 44.8029 250112 50.02%
0.25 0.80 118.2300 47.1973 249966 49.99%
0.25 0.90 126.4218 53.4780 263974 52.79%
0.50 0.50 53.8331 10.7526 3954 0.79%
0.50 0.70 55.8842 11.6289 23524 4.7%
0.50 0.80 52.0427 11.1830 38757 7.75%
0.50 0.90 54.3019 13.9019 112126 22.43%
0.75 0.50 1.2806 0.1561 0
0.75 0.70 2.9819 0.4962 0
0.75 0.80 6.6740 1.2359 440 0.08%
0.75 0.90 26.1258 5.4831 32047 6.40%

The second phase of the research shows how the derivative term changes the queue occupancy and the packet waiting times. Figures 5, 6 and 7 present the queue distribution for the PI^α D^β controller (α = −1). The results for the PI controller were presented in Fig. 3 and Table 3. A comparison of the figures does not show significant visual differences. The differences in the controllers' responses are shown in Tables 5, 6 and 7. The most interesting results were obtained for the controller with derivative term β = 0.8. For high traffic (µ = 0.25 and µ = 0.5) the controller reduces the mean queue length and at the same time reduces the number of packet losses. A further increase of the derivative order (Tables 6 and 7) reduces the mean queue length and at the same time increases the number of dropped packets. However, these differences are much smoother than in the case of decreasing the integral order α (see Table 4).
The last phase of the simulation evaluates the impact of the derivative term on the PI^α controller. The controller with integral term α = −1.2 is an example of a strong mechanism. For this controller the lowest values of the mean queue length and waiting

Table 3. PIα queue, KP = 0.00115, KI = 0.0011, α = −1.0

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 105.4769 42.0835 249914 49.98%
0.25 0.70 109.6279 43.6825 249538 49.90%
0.25 0.80 112.7636 45.0588 250246 50.04%
0.25 0.90 103.7300 43.9278 264321 52.86%
0.50 0.50 50.8356 10.1493 4024 0.80%
0.50 0.70 51.4054 10.6806 23202 4.64%
0.50 0.80 48.9152 10.5024 38644 7.72%
0.50 0.90 47.8680 12.2617 112735 22.54%
0.75 0.50 1.2892 0.1578 0
0.75 0.70 3.0249 0.5047 0
0.75 0.80 6.4268 1.1868 587 0.11%
0.75 0.90 25.3451 5.3177 32175 6.43%

Fig. 3. The influence of the degree of traffic self-similarity on the queue distribution, PI^α queue, queue size = 300, KP = 0.00115, KI = 0.0011, α = −1.0, λ = 0.5, µ = 0.75 (left), µ = 0.5 (right), µ = 0.25 (bottom)

Table 4. PIα queue, KP = 0.00115, KI = 0.0011, α = −1.2

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 102.9224 41.0802 250026 50.0%
0.25 0.70 103.0052 41.1281 250100 50.01%
0.25 0.80 102.2977 40.8632 250214 50.04%
0.25 0.90 81.8945 34.7584 265013 53.0%
0.50 0.50 49.525675 9.8806 3789 0.75%
0.50 0.70 50.429557 10.4798 23351 4.67%
0.50 0.80 47.869046 10.2771 38708 7.74%
0.50 0.90 46.207823 11.8211 112356 22.47%
0.75 0.50 1.283052 0.1566 0
0.75 0.70 3.016572 0.5031 0
0.75 0.80 6.400407 1.1817 655 0.13%
0.75 0.90 24.920464 5.2269 32173 6.43%

Fig. 4. The influence of the degree of traffic self-similarity on the queue distribution, PI^α queue, queue size = 300, KP = 0.00115, KI = 0.0011, α = −1.2, λ = 0.5, µ = 0.75 (left), µ = 0.5 (right), µ = 0.25 (bottom)

Table 5. PID queue, KP = 0.00115, KI = 0.0011, α = −1.0, KD = 0.01, β = 0.8

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 105.2382 41.9506 249687 49.93%
0.25 0.70 109.0697 43.5297 249941 49.98%
0.25 0.80 111.5187 44.5991 250457 50.0%
0.25 0.90 102.2642 43.2207 263868 52.77%
0.50 0.50 52.9857 10.5912 4385 0.87%
0.50 0.70 51.2877 10.6640 23537 4.70%
0.50 0.80 49.3514 10.6034 38920 7.78%
0.50 0.90 47.5944 12.1854 112550 22.50%
0.75 0.50 1.2878 0.1575 0
0.75 0.70 3.0133 0.5025 0
0.75 0.80 6.3137 1.1643 0
0.75 0.90 25.2454 5.2913 31734 6.34%

Fig. 5. The influence of the degree of traffic self-similarity on the queue distribution, PI^α D^β queue, queue size = 300, KP = 0.00115, KI = 0.0011, α = −1.0, KD = 0.01, β = 0.8, λ = 0.5, µ = 0.75 (left), µ = 0.5 (right), µ = 0.25 (bottom)

Table 6. PID queue, KP = 0.00115, KI = 0.0011, α = −1.0, KD = 0.01, β = 1.0

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 105.1122 41.9762 250138 50.02%
0.25 0.70 109.4128 43.4958 248962 49.79%
0.25 0.80 112.3291 44.8387 249994 49.99%
0.25 0.90 104.6598 44.3687 264588 52.91%
0.50 0.50 52.6690 10.5310 4544 0.90%
0.50 0.70 51.1150 10.6226 23324 4.66%
0.50 0.80 48.9330 10.5084 38738 7.74%
0.50 0.90 47.6431 12.1726 111753 22.35%
0.75 0.50 1.2823 0.1564 0
0.75 0.70 2.9884 0.4975 0
0.75 0.80 6.4988 1.2012 581 0.11%
0.75 0.90 25.2075 5.2839 31798 6.35%

Fig. 6. The influence of the degree of traffic self-similarity on the queue distribution, PI^α D^β queue, queue size = 300, KP = 0.00115, KI = 0.0011, α = −1.0, KD = 0.01, β = 1.0, λ = 0.5, µ = 0.75 (left), µ = 0.5 (right), µ = 0.25 (bottom)

Table 7. PID queue, KP = 0.00115, KI = 0.0011, α = −1.0, KD = 0.01, β = 1.2

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 105.3953 42.0064 249638 49.92%
0.25 0.70 109.8195 43.8410 250000 50.00%
0.25 0.80 112.8646 44.9803 249592 49.91%
0.25 0.90 105.0905 44.5343 264473 52.89%
0.50 0.50 50.5028 10.0713 3482 0.69%
0.50 0.70 50.6084 10.5039 22747 4.54%
0.50 0.80 48.2802 10.3441 37731 7.54%
0.50 0.90 47.9325 12.2640 112284 22.45%
0.75 0.50 1.2882 0.1576 0
0.75 0.70 3.0119 0.5022 0
0.75 0.80 6.3877 1.1789 561 0.11%
0.75 0.90 25.1610 5.2733 31731 6.34%

Fig. 7. The influence of the degree of traffic self-similarity on the queue distribution, PI^α D^β queue, queue size = 300, KP = 0.00115, KI = 0.0011, α = −1.0, KD = 0.01, β = 1.2, λ = 0.5, µ = 0.75 (left), µ = 0.5 (right), µ = 0.25 (bottom)

times were obtained. At the same time, the increase in the number of dropped packets is insignificant. The results for the controller with the derivative term and derivative order β = 0.8 are shown in Fig. 8. Table 8 presents the detailed results. In this case, the response of the controller with the derivative term is softer.

Table 8. PID queue, KP = 0.00115, KI = 0.0011, α = −1.2, KD = 0.01, β = 0.8

µ Hurst parameter Mean queue length Mean waiting time Rejected packets
0.25 0.50 102.8881 41.0135 249696 49.93%
0.25 0.70 102.9803 41.1081 250038 50.00%
0.25 0.80 102.2453 40.8194 250084 50.01%
0.25 0.90 81.9062 34.7162 264699 52.93%
0.50 0.50 52.6182 10.5227 4659 0.93%
0.50 0.70 49.8837 10.3693 23532 4.70%
0.50 0.80 48.0090 10.3122 38918 7.78%
0.50 0.90 46.0250 11.7805 112565 22.51%
0.75 0.50 1.2859 0.1571 0
0.75 0.70 3.0204 0.5039 0
0.75 0.80 6.5480 1.2112 610 0.12%
0.75 0.90 24.9491 5.2310 31991 6.39%

Fig. 8. The influence of the degree of traffic self-similarity on the queue distribution, PI^α D^β queue, queue size = 300, KP = 0.00115, KI = 0.0011, α = −1.2, KD = 0.01, β = 0.8, λ = 0.5, µ = 0.75 (left), µ = 0.5 (right), µ = 0.25 (bottom)
4 Conclusions
Our article presents the impact of the degree of self-similarity (expressed by the Hurst parameter) on the queue length, the queue waiting times and the number of rejected packets. The obtained results are closely related to the degree of self-similarity. The experiments were carried out for four types of traffic (H = 0.5, 0.7, 0.8, 0.9). During the tests we also changed the parameter of the service time distribution. This change allowed us to consider different queue loads.
The article presents an evaluation of the fractional order PI^α D^β controller used as an active queue management mechanism. The effectiveness of the controller as an AQM mechanism depends on the proper selection of its parameters. In the case of a fractional order controller we need to consider two additional parameters: the fractional derivative (β) and integral (α) orders. The controller's behavior was also compared to the FIFO queue.
The results showed the usefulness of the PI^α D^β controller as an AQM mechanism. The proper selection of the controller parameters is important for adaptation to various types of traffic (degree of self-similarity or various intensities).

References
1. Floyd, S., Jacobson, V.: Random early detection gateways for congestion
avoidance. IEEE/ACM Trans. Netw. 1(4), 397–413 (1993)
2. Doman´ska, J., Doman´ski, A., Czach´orski, T., Klamka, J.: Fluid flow
approximation of time-limited TCP/UDP/XCP streams. Bull. Pol. Acad. Sci.
Tech. Sci. 62(2), 217–225 (2014)
3. Doman´ski, A., Doman´ska, J., Czach´orski, T.: Comparison of AQM control
systems with the use of fluid flow approximation. In: Kwiecien´, A., Gaj, P.,
Stera, P. (eds.) CN 2012. CCIS, vol. 291, pp. 82–90. Springer, Heidelberg
(2012). doi:10.1007/
978-3-642-31217-5 9
4. Doman´ska, J., Augustyn, D., Doman´ski, A.: The choice of optimal 3rd order
poly- nomial packet dropping function for NLRED in the presence of self-similar
traffic. Bull. Pol. Acad. Sci. Tech. Sci. 60(4), 779–786 (2012)
5. Augustyn, D.R., Doman´ski, A., Doman´ska, J.: A choice of optimal packet
dropping function for active queue management. In: Kwiecien´, A., Gaj, P.,
Stera, P. (eds.) CN 2010. CCIS, vol. 79, pp. 199–206. Springer, Heidelberg
(2010). doi:10.1007/
978-3-642-13861-4 20
6. Doman´ska, J., Doman´ski, A., Augustyn, D., Klamka, J.: A RED modified
weighted moving average for soft real-time application. Int. J. Appl. Math.
Comput. Sci. 24(3), 697–707 (2014)
7. Doman´ska, J., Doman´ski, A., Czach´orski, T.: The drop-from-front strategy
in AQM. In: Koucheryavy, Y., Harju, J., Sayenko, A. (eds.) NEW2AN 2007.
LNCS, vol. 4712, pp. 61–72. Springer, Heidelberg (2007). doi:10.1007/978-3-540-
74833-5 6
8. Hollot, C., Misra, V., Towsley, D., Gong, W.: On designing improved controllers
for AQM routers supporting TCP flows. In: IEEE/INFOCOM 2001, pp. 1726–
1734 (2001)
9. Michiels, W., Melchor-Aquilar, D., Niculescu, S.: Stability analysis of some
classes of TCP/AQM networks. Int. J. Control 79, 1136–1144 (2006)
10. Melchor-Aquilar, D., Castillo-Tores, V.: Stability analysis of proportional-
integral AQM controllers supporting TCP flows. Computacion y Sistemas 10,
401–414 (2007)
11. Ustebay, D., Ozbay, H.: Switching resilient pi controllers for active queue
manage- ment of TCP flows. In: Proceedings of the 2007 IEEE International
Conference on Networking, Sensing and Control, pp. 574–578 (2007)
12. Podlubny, I.: Fractional order systems and PI λ D µ controllers. IEEE Trans.
Autom. Control 44(1), 208–214 (1999)
13. Chen, Y., Petras, I., Xue, D.: Fractional order control - a tutorial. In: American Control Conference, pp. 1397–1411 (2009)
14. Babiarz, A., Czornik, A., Klamka, J., Niezabitowski, M.: Theory and
Applications of Non-integer Order Systems. Lecture Notes in Electrical
Engineering, vol. 407. Springer, Heidelberg (2017)
15. Krajewski, W., Viaro, U.: On robust fractional order PI controller for TCP packet flow. In: BOS Conference: Systems and Operational Research, Warsaw, Poland, September 2014
16. Domanski, A., Domanska, J., Czachorski, T., Klamka, J.: Use of a non integer
order PI controller with an active queue management mechanism. Int. J. Appl.
Math. Comput. Sci. 26, 777–789 (2016)
17. Crovella, M., Bestavros, A.: Self-similarity in world wide web traffic: evidence
and possible causes. IEEE/ACM Trans. Netw. 5, 835–846 (1997)
18. Doman´ski, A., Doman´ska, J., Czach´orski, T.: The impact of self-similarity on
traffic shaping in wireless LAN. In: Balandin, S., Moltchanov, D., Koucheryavy,
Y. (eds.) NEW2AN 2008. LNCS, vol. 5174, pp. 156–168. Springer, Heidelberg
(2008). doi:10.
1007/978-3-540-85500-2 14
19. Doman´ska, J., Doman´ska, A., Czach´orski, T.: A few investigations of long-
range dependence in network traffic. In: Czach´orski, T., Gelenbe, E., Lent, R.
(eds.) Infor- mation Sciences and Systems 2014, pp. 137–144. Springer, Cham
(2014). doi:10. 1007/978-3-319-09465-6 15
20. Doman´ska, J., Doman´ski, A., Czach´orski, T.: Estimating the intensity of long-
range dependence in real and synthetic traffic traces. In: Gaj, P., Kwiecien´, A.,
Stera, P. (eds.) CN 2015. CCIS, vol. 522, pp. 11–22. Springer, Cham (2015).
doi:10.1007/
978-3-319-19419-6 2
21. Doman´ska, J., Doman´ski, A.: The influence of traffic self-similarity on QoS
mech- anism. In: Proceedings of the International Symposium on Applications
and the Internet, SAINT, Trento, Italy, pp. 300–303 (2005)
22. Stallings, W.: High-Speed Networks: TCP/IP and ATM Design Principles.
Prentice-Hall, New York (1998)
23. Norros, I.: On the use of fractional brownian motion in the theory of
connectionless networks. IEEE J. Sel. Areas Commun. 13(6), 953–962 (1995)
24. Erramilli, A., Singh, R., Pruthi, P.: An application of deterministic chaotic maps
to model packet traffic. Queueing Syst. 20(1–2), 171–206 (1995)
25. Harmantzis, F., Hatzinakos, D.: Heavy network traffic modeling and simulation
using stable farima processes. In: 19th International Teletraffic Congress,
Beijing, China, pp. 300–303 (2005)
26. Robert, S., Boudec, J.: New models for pseudo self-similar traffic. Perform. Eval.
30(1–2), 57–68 (1997)
27. Andersen, A.T., Nielsen, B.F.: A Markovian approach for modeling packet
traffic with long-range dependence. IEEE J. Sel. Areas Commun. 16(5), 719–732
(1998)
28. Doman´ska, J., Doman´ski, A., Czach´orski, T.: Modeling packet traffic with
the use of superpositions of two-state MMPPs. In: Kwiecien´, A., Gaj, P.,
Stera, P. (eds.) CN 2014. CCIS, vol. 431, pp. 24–36. Springer, Cham (2014).
doi:10.1007/
978-3-319-07941-7 3
29. Doman´ska, J., Doman´ski, A., Czach´orski, T.: Internet traffic source
based on hidden Markov model. In: Balandin, S., Koucheryavy, Y., Hu, H.
(eds.) NEW2AN/ruSMART -2011. LNCS, vol. 6869, pp. 395–404. Springer,
Heidelberg
(2011). doi:10.1007/978-3-642-22875-9 36
30. Mandelbrot, B., Ness, J.: Fractional brownian motions, fractional noises and
appli- cations. SIAM Rev. 10, 422–437 (1968)
31. Samorodnitsky, G., Taqqu, M.: Stable Non-Gaussian Random Processes:
Stochastic Models with Infinite Variance. Chapman and Hall, New York (1994)
32. Rutka, G.: Neural network models for internet traffic prediction. Electron.
Electr. Eng. 4(68), 55–58 (2006)
33. Paxson, V.: Fast, approximate synthesis of fractional Gaussian noise for
generating self-similar network traffic. ACM SIGCOMM Comput. Commun.
Rev. 27(5), 5–18 (1997)
34. Doman´ski, A., Doman´ska, J., Czach´orski, T.: The impact of the degree of
self- similarity on the NLREDwM mechanism with drop from front strategy. In:
Gaj, P., Kwiecien´, A., Stera, P. (eds.) CN 2016. CCIS, vol. 608, pp. 192–203.
Springer, Cham (2016). doi:10.1007/978-3-319-39207-3 17
350 A. Doman´ski et
al.
Stability Analysis of a Basic Collaboration
System via Fluid Limits

Rosario Delgado1(✉) and Evsey Morozov2


1
Departament de Matem`atiques, Universitat Aut`onoma de Barcelona,
Edifici C- Campus de la UAB., Av. de l’Eix Central s/n.,
08193 Bellaterra (Cerdanyola del Vall`es), Barcelona,
Spain [email protected]
2
Institute of Applied Mathematical Research, Russian Academy of Sciences,
Petrozavodsk State University, Petrozavodsk, Russia
[email protected]

Abstract. In this work, the fluid limit approach methodology is applied to find a sufficient and necessary stability condition for the Basic Collaboration (BC) system with feedback allowed, which is a generalization of the so-called W-model. In this queueing system, some customer classes need the cooperation of a subset of (non-overlapping) servers. We assume that each customer class arrives to the system following a renewal input with general i.i.d. inter-arrival times, and general i.i.d. service times are also assumed. Priority is given to customer classes that cannot be served by a single server but need cooperation.
Keywords: Stability · Fluid limit approach · Skorokhod problem · Workload · BC system · W-model

1 Introduction
In this paper, we study a generalization of the so-called queueing W-model which, in the simplest setting, consists of two single-server stations, 1, 2, and three infinite-capacity buffers, 1, 2, 3, with independent renewal inputs of class-k customers, respectively, k = 1, 2, 3. Server i processes class-i customers, i = 1, 2, but both servers are required to process class-3 customers, which have preemptive-resume priority. (For a more detailed description see [9, 15].)
We generalize the W-model, which is in turn a particular case of the so-called sparsely connected model [15]. More exactly, we consider a Basic Collaboration (BC) system with J infinite buffer servers and K ≥ J customer classes. Each customer class needs the cooperation of a subset of (non-overlapping) servers (this is called concurrent service). At the same time, there may be customer classes that
R. Delgado—Supported by Ministerio de Econom´ıa y Competitividad, Gobierno
de Espan˜a, project ref. MTM2015 67802-P (MINECO/FEDER, UE).
E. Morozov—Supported by Russian Foundation for Basic Research, projects 15-07-
02341, 15-07-02354, 15-07-02360.
Oc Springer International Publishing AG 2017
P. Gaj et al. (Eds.): CN 2017, CCIS 718, pp. 351–365, 2017.
DOI: 10.1007/978-3-319-59767-6 28

only need one server to be served. Concurrent customer classes at a server can only occur between a class that needs its cooperation with (an)other server(s), and a class that only needs this server to be served, without cooperation. In this setting, to keep a work-conserving service discipline, we assume the mentioned priority of the customers requiring cooperation. We assume i.i.d. general inter-arrival and i.i.d. service times. Such a system is also called a joint service model [10], or concurrent server release [2]. Queueing systems with concurrent service have been considered in a number of works; pioneering ones are [2, 8, 13, 18]. For the buffer-less (loss) concurrent service systems, the performance analysis has been developed in a number of works [1, 11, 12, 16–18]. However, the analysis of the buffered concurrent service system is much more challenging.
A comprehensive study of the concurrent service system has been developed in [13], where the author used the matrix analytic method to deduce a stability condition. However, this condition requires solving a matrix equation of a large dimension, and moreover, the corresponding matrices are not explicitly defined. The authors of [14] study a multi-server system in which each customer requires a random number of servers simultaneously and a random but identical service time at all occupied servers, which describes the dynamics of modern high performance clusters. They assume exponential distributions and an arbitrary number of servers. In [14], a modification of the matrix-analytic method is developed to obtain the stability criterion of the simultaneous service model in an explicit form. (See also [14] for a broad bibliographic review on the subject including previous references.) Note that the paper [15] considers various sparsely connected models assuming a saturated regime, while, in the present research, we seek the stability conditions.
In the model presented in this work, a restricted type of feedback is allowed. Indeed, feedback from each customer class to itself is permitted, as well as feedback from each customer class needing the cooperation of some (more than one) servers for processing to any of the concurrent customer classes at any of these servers. No other feedback is allowed. From now on, we call this type of feedback “non-crossing”.
Motivation. BC systems model real situations in which different agents
are able to work together to solve complex problems. Consider the following
scenario introduced in [19]. A user wishes to determine the best package
price for a ski trip given the following criteria: a resort in the Alps, for a
week in February, with slope-side lodging, and the lowest price for all
expenses. To solve this problem, an agent obtains a list of appropriate ski
resorts from a database before spawning other agents to query travel
databases, possibly in different formats, for package prices at those resorts
in February. Agents can perform this task more efficiently when they can
correlate their results and adjust their computations based on the outcome
of a collaboration. Suppose the agents visit local travel agencies and then
share their intermediate results and collaborate before migrating to another
travel agency. If an agent determines that a particular resort does not have
any available lodging meeting the user’s criteria, the agents may determine
to drop queries about trips to that destination. As more information is
gathered,

agents may also make other decisions. As this example demonstrates, agents can perform complex distributed computations more effectively if they base them on the combined results. To do so, they can divide a complex task into smaller pieces and delegate them to agents that migrate throughout the network to accomplish them. These agents perform computations, synchronously share results, and collaboratively determine any changes to future actions, giving service to the user.
Another example is medical centers and hospitals, in which different types of patients have different requirements concerning technical equipment, facilities, doctors and nurses, which can be considered as the servers.
We give a brief summary of the research. The main contribution of this work is that, in contrast to previous works on W-models and concurrent service systems in general, we obtain the stability condition following the fluid stability analysis developed in [3]. Indeed, our model is more general than the Generalized Jackson network in [3] (Sect. 5), in which only one class of customers is served at each single-server station. Instead, in our model each server can serve more than one class: at most one customer class requires cooperation with other servers (multiserver customers), while there is no limit on the number of customer classes that do not need cooperation (single-server customers). Note that multi-class customers in a single-station network have been considered in [3] (Sect. 6) as well. In Theorem 1 we first establish the stability of the fluid limit model associated with the BC system under the sufficient condition. The fluid limit model, which allows us to transform the initial stochastic problem into a (related) deterministic one, is introduced in Proposition 1. Stability of the fluid limit model means that the fluid limit of the queue-size process reaches zero in a finite time interval and stays there. Then, using stability of the fluid limit model and Theorem 4.2 [3], we deduce positive Harris recurrence of the basic Markov process describing the network. Similarly to [3], functional laws of large numbers for the renewal processes or, in other words, the hydrodynamic scaling by the increasing value of the initial state, are used to obtain the stability of the fluid limit model via the solution of a Skorokhod problem. At that, the choice of an appropriate Lyapunov function is the key point of the analysis. By the same approach, we show that if the necessary condition is violated, then the fluid limit model is weakly unstable. This means that, if the process starts at zero, then there exists a time at which the fluid limit of the queue-size process becomes positive. As a result, by Theorem 3.2 [4], the queueing network is unstable: the queue size grows infinitely with probability (w.p.) 1 as time increases.
In the paper [7], the fluid approach methodology [3] has been applied to the stability analysis of a cascade network. In this network, known as the N-model, a waiting customer from the preceding queue jumps to the following server, when it is free, to be served there immediately. That is, in this model each free server helps the previous one by serving some of its customers. It is shown that the sufficient and necessary conditions match if the network is composed of two stations, but not in general. In the current work, we use the same methodology to develop the stability analysis of a completely different queueing model, which is a generalization of

the known W-model [15, 19]. We deal with a BC system in which some customer classes need the cooperation of a subset of the servers working together. Moreover, we allow some kind of feedback. Although the methodology is the same, the stability conditions of these two different models turn out to be different as well. In particular, in the current work we find a stability criterion. To the best of our knowledge, it is the first time the fluid approach has been used for this type of collaboration model. In addition, the way this methodology is applied presents some interesting differences. The main difference is that in the present work we deal with a modification of the fluid limit of the workload process that takes feedback into account, whereas the fluid limit of the queue length process was used in [7]. In particular, a key point in proving sufficiency is that we show that this process is part of a solution of the continuous dynamic complementarity problem (DCP), also known as the deterministic Skorokhod problem.
The paper is organized as follows. In Sect. 2, we give notation and describe the BC system in more detail, introducing the associated queueing network equations. Section 3 contains the fluid stability analysis: in Sect. 3.1 the fluid limit model is constructed, and the proof of the stability condition is given in Sect. 3.2 (Theorem 1).

2 Notation and Description of the BC with “Non-crossing” Feedback
We first give basic notation. Vectors are column vectors and (in)equalities are interpreted component-wise. v^T denotes the transpose of a vector (or a matrix). For any integer d ≥ 1, let R^d_+ = {v ∈ R^d : v ≥ 0} and Z^d_+ = {v = (v_1, . . . , v_d)^T ∈ R^d : v_i ∈ Z_+}. For a vector v = (v_1, . . . , v_d)^T ∈ R^d, let |v| = Σ_{i=1}^d |v_i|. We denote by diag(v) the diagonal matrix with diagonal entries being the components of the vector v, and I is the d-dimensional identity matrix. We say that a sequence of vectors {v_n}_{n≥1} converges to a vector v as n → ∞ if |v_n − v| → 0, and denote it as lim_{n→∞} v_n = v. (This convergence is equivalent to the component-wise convergence.) For n ≥ 1, let φ_n : [0, ∞) → R^d be right continuous functions having limits on the left on (0, ∞), and let the function φ : [0, ∞) → R^d be continuous. We say that φ_n converges to φ as n → ∞ uniformly on compacts (u.o.c.) if for any T ≥ 0,

||φ_n − φ||_T := sup_{t∈[0,T]} |φ_n(t) − φ(t)| → 0 as n → ∞, (1)

and write it as lim_{n→∞} φ_n = φ. If the function φ is differentiable at a point s ∈ (0, ∞), then s is a regular point of φ, and we denote the derivative by φ˙(s).
Recall that we consider a BC system with J infinite buffer servers and K ≥ J customer classes. In what follows, we use index k to denote the quantities related to class-k customers, k ∈ {1, 2, . . . , K}. Let s(k) ⊂ {1, . . . , J} be the set of servers that need to work together to service a class-k customer. Note that the capacity #s(k) ≥ 1 and that, if #s(k) = 1, then server collaboration is not required.

Evidently, ∪_{k=1}^K s(k) = {1, . . . , J}, and we assume the non-overlapping property: for each two classes k ≠ k′,

s(k) ∩ s(k′) = ∅ if min{#s(k), #s(k′)} > 1. (2)
Define the customer classes C(j) = {k = 1, . . . , K : j ∈ s(k)} served by server j ∈ {1, . . . , J}, and assume that, for each j, the capacity

#{k ∈ C(j) : #s(k) > 1} ≤ 1. (3)

In other words, at most one class may capture a given server for cooperation. To ensure a work-conserving (or non-idling) discipline, in addition to the non-overlapping property, we also assume that multiserver customers have preemptive-resume priority.
Let ξ_k(i), i ≥ 2, be the independent identically distributed (i.i.d.) inter-arrival times of the ith class-k customers arriving from outside the system after instant 0, and let η_k(i), i ≥ 2, be the i.i.d. service times of the ith class-k customers finishing service after instant 0 (this is the time required by any server in the set s(k)). All sequences are assumed to be mutually independent. We denote the generic elements of these sequences by ξ_k and η_k, respectively. The residual arrival time ξ_k(1) of the first class-k customer entering the network after instant 0 is independent of {ξ_k(i), i ≥ 2}. Also the residual service time η_k(1) of a class-k customer initially being served, if any, is independent of {η_k(i), i ≥ 2}, and η_k(1) =_st η_k if class k is initially empty.
For each k = 1, . . . , K, we impose the following standard conditions on inter-arrival and service times, both with general distributions (see [3]):

E η_k < ∞, (4)
E ξ_k < ∞, (5)
P(ξ_k ≥ x) > 0 for any x ∈ [0, ∞). (6)

Then, in particular, the arrival rate α_k := 1/Eξ_k ∈ (0, ∞) and the service rate μ_k := 1/Eη_k > 0, and we denote α = (α_1, . . . , α_K)^T and μ = (μ_1, . . . , μ_K)^T. Also we assume that the inter-arrival times are spread out, that is, for some integer r > 1 and functions f_k ≥ 0 with ∫_0^∞ f_k(y) dy > 0,

P( a ≤ Σ_{i=2}^r ξ_k(i) ≤ b ) ≥ ∫_a^b f_k(y) dy, for any 0 ≤ a < b. (7)

A restricted type of feedback (we name it “non-crossing”) is allowed in our model: a class-k customer, when it finishes service, re-enters the system and becomes a class-k customer with probability P_kk ∈ [0, 1), and a class-k customer needing the cooperation of servers s(k) with #s(k) > 1, when it finishes service, becomes a class-ℓ customer with probability P_kℓ ∈ [0, 1) if #s(ℓ) = 1 and s(ℓ) ∈ s(k), that is, if ℓ and k are concurrent customer classes at server s(ℓ). Then,

with probability 1 − Σ_{ℓ=1}^K P_kℓ ≥ 0, a class-k customer leaves the system upon

service. Thus, P := (P_kℓ)_{k,ℓ=1}^K is the (sub-stochastic) routing (or flow) matrix of the network. It is assumed that the spectral radius of P is strictly less than 1, and hence the inverse matrix Q = (I − P^T)^{−1} is well defined.
Define the vector λ = (λ_1, . . . , λ_K)^T as (the unique) solution to the traffic equation

λ = α + P^T λ, equivalently, λ = Q α, (8)

where λ_k can be interpreted as the potential long-run arrival rate of class-k customers into the system. Let ρ_j = Σ_{k∈C(j)} λ_k/μ_k be the traffic intensity for server j, and ρ := (ρ_1, . . . , ρ_J)^T.
Now we introduce the following primitive processes describing the dynamics of the queueing network: the exogenous arrival process E = {E(t) := (E_1(t), . . . , E_K(t))^T, t ≥ 0}, where

E_k(t) = max{ n ≥ 1 : Σ_{i=1}^n ξ_k(i) ≤ t } (9)

is the total number of class-k arrivals from outside to the system in the interval [0, t]. We also introduce the process S = {S(t) := (S_1(t), . . . , S_K(t)), t ≥ 0}, where the renewal process

S_k(t) = max{ n ≥ 1 : Σ_{i=1}^n η_k(i) ≤ t } (10)

is the total number of class-k customers that would be served in the interval [0, t], provided all servers from s(k) devote all their time to class-k customers. (By definition, E(0) = S(0) = 0.) The routing process Φ = {Φ(n)}_{n∈N} is defined as follows:


Φ_k(n) = Σ_{i=1}^n φ_k(i), (11)

where, for each i ∈ N, the K-dimensional vectors φ_k(i) = {φ_k^ℓ(i), ℓ = 1, . . . , K} are i.i.d. (independent of the inter-arrival and service time processes), with at most one component equal to 1 and the remaining components equal to 0. If φ_k^j(i) = 1 then the ith class-k customer becomes class-j, while φ_k(i) = 0 means departure from the network.
Now we introduce the descriptive processes to measure the performance of the network. For any t ≥ 0 and k, let A_k(t) be the number of class-k arrivals (from outside and by feedback) by time t, D_k(t) be the number of class-k departures (to other classes or outside the system), and let Z_k(t) be the number of class-k customers being served at time t, so Z_k(t) ∈ {0, 1}. Also let T_k(t) be the total service time devoted to class-k customers in the interval [0, t]. Denote by Y_j(t) the idle time of server j in [0, t], and let Q_j(t) be the number of customers in the buffer of station j at time t, j ∈ {1, . . . , J}. In an evident notation, the processes D, T and Y are non-decreasing and satisfy the initial conditions D(0) = T(0) = Y(0) = 0. We

note that A(0) = 0, and assume that Z(0) and Q(0) are mutually independent and independent of all the above given quantities.
For each t and k, we define the remaining time U_k(t) until the next exogenous class-k arrival, and the remaining service time V_k(t) of the class-k customer being served at time t, if any. We introduce (in an evident notation) the processes U and V, assume that they are right-continuous, and define V_k(t) = 0 if Z_k(t) = 0. Note that U_k(0) = ξ_k(1), while V_k(0) = η_k(1) if Z_k(0) = 1. Now we define the process X = {X(t), t ≥ 0} describing the dynamics of the network, where X(t) := (Q(t), Z(t), U(t), V(t))^T, with the state space X = Z_+^K × {0, 1}^K × R_+^K × R_+^K. The process X is a piecewise-deterministic Markov process which satisfies Assumption 3.1 [5], and is a strong Markov process (p. 58, [3]).
We define the workload process W = {W(t) := (W_1(t), . . . , W_J(t))^T, t ≥ 0}, where W_j(t) is the (workload) time needed to complete the service of all class-k customers present in the system at time t, for any k ∈ C(j). We introduce the cumulative service time process

Υ = {Υ(n) := (Υ_1(n_1), . . . , Υ_K(n_K))^T, n = (n_1, . . . , n_K) ∈ N^K}, (12)

where Υ_k(n_k) is the total amount of service time of the first n_k class-k customers (including the remaining service time at time 0 for the first one), by any of the servers in the set s(k). Note that this time is the same for each server from s(k), and that Υ_k(0) = 0.
The following queueing network equations, which are easy to verify, hold for all t ≥ 0, k = 1, . . . , K and j = 1, . . . , J:

A(t) = E(t) + Σ_{k=1}^K Φ_k(D_k(t)), (13)
D_k(t) = S_k(T_k(t)), (14)
Q_k(t) = Q_k(0) + A_k(t) − D_k(t) + Z_k(t), (15)
Σ_{k∈C(j)} T_k(t) + Y_j(t) = t, (16)
∫_0^∞ W_j(t) dY_j(t) = 0, (17)
W(t) = C [ Υ(Q(0) + A(t)) − T(t) ], (18)

where e = (1, . . . , 1)^T ∈ R^d and C is the J × K matrix defined by

C_jk = 1 if j ∈ s(k) (equivalently, if and only if k ∈ C(j)), and C_jk = 0 otherwise. (19)

Note that Eq. (17) reflects the work-conserving property introduced above.
Also we note that Eq. (16) can be written as C T (t) + Y (t) = t e.
We assume that the service discipline is head-of-the-line (HL): only the oldest
customer of each class can receive service. It gives the additional equation:

Υ (D(t)) ≤ T (t) < Υ (D(t) + e). (20)


3 Stability Analysis of the BC System
By definition, a queueing network is stable if its associated underlying Markov process X is positive Harris recurrent, that is, it has a unique invariant probability measure. To prove stability of the network it is enough to establish stability of the associated fluid limit model [3].

3.1 The Fluid Limit Model


Now we present, without proof, an analogue of Theorem 4.1 [3] (see also Proposition 1 [7]). If X(0) = (Q(0), Z(0), U(0), V(0))^T = x, then we denote X as X_x (and analogously for the processes E, S, D, T, Y, W).

Proposition 1. Consider the BC system. Then, for almost all sample paths and any sequence of initial states {x_n}_{n≥1} ⊂ X with lim_{n→∞} |x_n| = ∞, there exists a subsequence {x_{n_r}}_{r≥1} ⊆ {x_n}_{n≥1} with lim_{r→∞} |x_{n_r}| = ∞ such that the following limit exists,

lim_{r→∞} (1/|x_{n_r}|) X_{x_{n_r}}(0) := X̄(0), (21)

and moreover the following u.o.c. limit exists for each t ≥ 0,

lim_{r→∞} (1/|x_{n_r}|) ( X_{x_{n_r}}(|x_{n_r}| t), D_{x_{n_r}}(|x_{n_r}| t), T_{x_{n_r}}(|x_{n_r}| t), Y_{x_{n_r}}(|x_{n_r}| t), W_{x_{n_r}}(|x_{n_r}| t) )
  := ( X̄(t), D̄(t), T̄(t), Ȳ(t), W̄(t) ), (22)

where (in evident notation)

X̄(t) := ( Q̄(t), Z̄(t), Ū(t), V̄(t) )^T, (23)

and the components of the vectors Ū(t), V̄(t) have, respectively, the form

Ū_k(t) = (Ū_k(0) − t)^+, V̄_k(t) = (V̄_k(0) − t)^+, k = 1, . . . , K. (24)


Furthermore, the following equations are satisfied for any t ≥ 0, k = 1, . . . , K and j = 1, . . . , J:

Ā(t) = t α + P^T D̄(t), (25)
D̄(t) = M^{−1} T̄(t), (26)
Z̄_k(t) = 0, (27)
Q̄(t) = Q̄(0) + Ā(t) − D̄(t) = Q̄(0) + t α − (I − P^T) D̄(t), (28)
C T̄(t) + Ȳ(t) = t e, (29)
∫_0^∞ W̄_j(t) dȲ_j(t) = 0, (30)
W̄(t) = C [ M (Q̄(0) + Ā(t)) − T̄(t) ] = C M Q̄(t), (31)

where the diagonal matrix M is defined as

M = diag((1/μ_1, . . . , 1/μ_K)^T). (32)

We note that

ρ = C M λ. (33)

Any limit (X̄, D̄, T̄, Ȳ, W̄) in (21), (22) is called a fluid limit associated with the BC system [3]. Thus, Proposition 1 states that any fluid limit associated with the BC system satisfies the fluid model Eqs. (25)–(31).

Remark 1. By Lemma 5.3 in [3], hereinafter we assume without loss of generality that Ū(0) = V̄(0) = 0, which, by (24), implies Ū(t) = V̄(t) = 0 for all t > 0. We denote this by Ū = V̄ = 0 and identify X̄ with Q̄.

Definition 1. The fluid limit (Q̄, D̄, T̄, Ȳ, W̄) associated with a queueing network is stable if there exists t_1 ≥ 0 (depending only on the input and service rates) such that if |Q̄(0)| = 1, then

Q̄(t) = 0 for all t ≥ t_1. (34)


3.2 The Stability Criterion
Now we are ready to introduce and prove the stability criterion of the BC system with “non-crossing” feedback, following Theorem 5.1 [3]. We prove this result under a technical Assumption (A) concerning the routing matrix P, which is equivalent to the “non-crossing” feedback assumption. We first introduce a process W̃ by

W̃(t) = C M Q Q̄(t), t ≥ 0. (35)

Remark 2. The process W̃ has a key role in the proof of Theorem 1 and deserves a few words. Intuitively, from (31), the fluid limit of the workload process W̄ is

related to the fluid limit of the queue length process Q̄ by W̄(t) = C M Q̄(t). Therefore, W̃ defined in this way is a correction of W̄ that takes feedback into account. Note that W̃ = W̄ in the case of no feedback, since in this case P = 0 and then Q = I. Consider a non-trivial example of the W-model with J = 2 and K = 3 (see Introduction) in which only feedback from class 3 to the other classes is allowed. Then,

P = [ 0 0 0 ; 0 0 0 ; p_31 p_32 0 ]. (36)

Since C = [ 1 0 1 ; 0 1 1 ] for this model, we obtain

W̃_1(t) = (1/μ_1) Q̄_1(t) + ( p_31/μ_1 + 1/μ_3 ) Q̄_3(t), (37)
W̃_2(t) = (1/μ_2) Q̄_2(t) + ( p_32/μ_2 + 1/μ_3 ) Q̄_3(t). (38)

Because

W̄_1(t) = (1/μ_1) Q̄_1(t) + (1/μ_3) Q̄_3(t), W̄_2(t) = (1/μ_2) Q̄_2(t) + (1/μ_3) Q̄_3(t), (39)

we then have that

W̃_1(t) = W̄_1(t) + (p_31/μ_1) Q̄_3(t), W̃_2(t) = W̄_2(t) + (p_32/μ_2) Q̄_3(t), (40)

expressions from which it is evident that the differences between W̃ and W̄ are due to the feedback from class-3 customers to the other classes.
Assumption (A) (“non-crossing” feedback). The matrix P is such that for any t ≥ 0 and j = 1, . . . , J,

W̄_j(t) = 0 if and only if W̃_j(t) = 0. (41)
Remark 3. Assumption (A) has a key role in proving property (c) in the proof of sufficiency in Theorem 1. It is trivially fulfilled if no feedback is allowed, since in this case W̃ = W̄. We can illustrate this assumption by introducing a simple example. Consider the example introduced in Remark 2 but with (possible) feedback from any customer class to itself (that is, in the more general “non-crossing” feedback setting). Then,

P = [ p_11 0 0 ; 0 p_22 0 ; p_31 p_32 p_33 ] (42)

and

W̃_1(t) = Q̄_1(t) / (μ_1(1 − p_11)) + [ p_31 / (μ_1(1 − p_11)(1 − p_33)) + 1 / (μ_3(1 − p_33)) ] Q̄_3(t), (43)
W̃_2(t) = Q̄_2(t) / (μ_2(1 − p_22)) + [ p_32 / (μ_2(1 − p_22)(1 − p_33)) + 1 / (μ_3(1 − p_33)) ] Q̄_3(t). (44)

Because

W̄_1(t) = (1/μ_1) Q̄_1(t) + (1/μ_3) Q̄_3(t), W̄_2(t) = (1/μ_2) Q̄_2(t) + (1/μ_3) Q̄_3(t), (45)

and all coefficients are positive, it is immediate to check that Assumption (A) holds.
Analogously to [6], the crucial fact in the proof of the stability criterion (Theorem 1) is that the fluid limit W̃ turns out to be a part of a solution of a linear Skorokhod problem, while the fluid limit process Q̄ is used instead in [3]. We note that in some settings the workload is better adapted to the use of the methodology of Skorokhod problems than the queue-size process. On the other hand, a key point in the proof is the adequate choice of the Lyapunov function.
Theorem 1. If the non-overlapping BC system with “non-crossing” feedback, given by the fluid model Eqs. (25)–(31) and Assumption (A), satisfies conditions (4)–(7), then a sufficient stability condition is

max_{j=1,...,J} ρ_j < 1, (46)

while max_{j=1,...,J} ρ_j ≤ 1 is the necessary stability condition.
Proof. Sufficiency: By Eqs. (25)–(31),

Ā(t) = t α + P^T [ Q̄(0) + Ā(t) − Q̄(t) ], (47)

implying

(I − P^T) Ā(t) = t α + P^T [ Q̄(0) − Q̄(t) ]. (48)

This in turn implies

Ā(t) = t λ + Q P^T [ Q̄(0) − Q̄(t) ]. (49)

By (49) we obtain

W̄(t) = C M Q̄(0) + C M Ā(t) − C T̄(t)
     = C M Q̄(0) + C M [ t λ + Q P^T ( Q̄(0) − Q̄(t) ) ] − t e + Ȳ(t)
     = C M Q̄(0) + (ρ − e) t + C M Q P^T [ Q̄(0) − Q̄(t) ] + Ȳ(t)
     = C M Q Q̄(0) + (ρ − e) t − C M Q P^T Q̄(t) + Ȳ(t). (50)


Since W̄(t) = C M Q̄(t), it then follows from (50) that

C M Q Q̄(t) = C M Q Q̄(0) + (ρ − e) t + Ȳ(t), (51)

or, denoting X̃(t) = C M Q Q̄(0) + (ρ − e) t,

W̃(t) = X̃(t) + Ȳ(t). (52)
It is easy to check that the following properties hold:

(a) X̃(·) has continuous paths with X̃(0) ≥ 0,
(b) W̃(t) ≥ 0 for all t ≥ 0,
(c) Ȳ(·) has nondecreasing paths, Ȳ(0) = 0, and Ȳ_j(·) increases only at times t such that W̄_j(t) = 0, j = 1, . . . , J (see (30)). By Assumption (A), Ȳ_j(·) increases only when W̃_j(t) = 0, j = 1, . . . , J.

It follows that the paths of the processes (W̃, Ȳ) are solutions of the continuous dynamic complementarity problem (DCP) for X̃ (see Definition 5.1 [3]), also known as the deterministic Skorokhod problem. Moreover, it is easy to check that condition (5.1) in [3],

W̃(s) + X̃(t + s) − X̃(s) ≥ θ t for all t, s ≥ 0, (53)

is satisfied with θ := ρ − e. Therefore, by Lemma 5.1 [3],

Ȳ˙(s) ≤ (e − ρ), if s ≥ 0 is a regular point of Ȳ(·). (54)
Define the function f as

f(t) = |W̃(t)| = e^T W̃(t). (55)

It follows that

f(t) = e^T ( X̃(t) + Ȳ(t) ) = f(0) + e^T ( (ρ − e) t + Ȳ(t) )
     = f(0) + Σ_{j=1}^J [ (ρ_j − 1) t + Ȳ_j(t) ]. (56)
Assume that t > 0 is a regular point for W̃ (equivalently, for Ȳ). If f(t) > 0, then there exists j_0 ∈ {1, . . . , J} such that W̃_{j_0}(t) > 0. By the definition of the process W̃, there exists some k_0 with Q̄_{k_0}(t) > 0, which implies that for any j_1 ∈ s(k_0), W̄_{j_1}(t) > 0, which in turn implies that Ȳ˙_{j_1}(t) = 0. Fix a j_1 ∈ s(k_0). Hence, by (56) and (54),

f˙(t) = Σ_{j=1}^J [ (ρ_j − 1) + Ȳ˙_j(t) ] = (ρ_{j_1} − 1) + Σ_{j ≠ j_1} [ (ρ_j − 1) + Ȳ˙_j(t) ]
      ≤ ρ_{j_1} − 1 ≤ max_{j=1,...,J} ρ_j − 1 = −κ, (57)

where κ = 1 − max_{j=1,...,J} ρ_j > 0 by assumption.
As f is a nonnegative function that is absolutely continuous and, for almost surely all regular points t, f˙(t) ≤ −κ whenever f(t) > 0, then, by Lemma 5.2 [3], f is non-increasing and f(t) = 0 for t ≥ f(0)/κ. That is,

W̃(t) = 0, t ≥ δ := |W̄(0)| / (1 − max_{j=1,...,J} ρ_j). (58)

Finally, by the definition of the process W̃, W̃ = 0 if and only if Q̄ = 0. Moreover, since

|W̃(t)| = |C M Q Q̄(t)| = Σ_{j=1}^J Σ_{k∈C(j)} a_{kj} Q̄_k(t), (59)

where a_{kj} depends on μ and on the matrix Q = (q_{kℓ})_{k,ℓ=1,...,K}. More exactly, 0 ≤ a_{kj} ≤ M_K for any k = 1, . . . , K and j = 1, . . . , J, where

M_K = max_{k=1,...,K} max_{ℓ=1,...,K} q_{kℓ} · max_{k=1,...,K} (1/μ_k) > 0. (60)

Then,

|W̃(t)| ≤ J M_K |Q̄(t)|, (61)

and we obtain

Q̄(t) = 0, t ≥ J M_K |Q̄(0)| / (1 − max_{j=1,...,J} ρ_j) ≥ 0. (62)

This means that the fluid model is stable (by Definition 1), and Theorem 4.2 [3] ensures the stability of the queueing network.
Necessity: To prove the necessity of the condition max_{j=1,...,J} ρ_j ≤ 1, we assume ρ_{j_0} > 1 for some j_0 ∈ {1, . . . , J}. Consider the non-negative function

g(t) = W̃_{j_0}(t) = g(0) + (ρ_{j_0} − 1) t + Ȳ_{j_0}(t) ≥ (ρ_{j_0} − 1) t > 0, t > 0. (63)

Then W̃_{j_0}(t) > 0, which implies Q̄(t) ≠ 0 since W̃_{j_0}(t) = Σ_{k∈C(j_0)} a_{kj_0} Q̄_k(t) with a_{kj_0} ≥ 0, finishing the proof.
Remark 4. We note that in practice condition (46) can be treated as a stability criterion which, for the W-model in Remark 3, becomes

max{ρ_1, ρ_2} < 1, (64)

where

ρ_1 = α_1 / (μ_1(1 − p_11)) + α_3 [ p_31 / (μ_1(1 − p_11)(1 − p_33)) + 1 / (μ_3(1 − p_33)) ], (65)
ρ_2 = α_2 / (μ_2(1 − p_22)) + α_3 [ p_32 / (μ_2(1 − p_22)(1 − p_33)) + 1 / (μ_3(1 − p_33)) ]. (66)

This can be easily seen since ρ = C M Q α,

C M = [ 1/μ_1  0  1/μ_3 ; 0  1/μ_2  1/μ_3 ], (67)

and

Q = (I − P^T)^{−1} = [ 1/(1−p_11)  0  p_31/((1−p_11)(1−p_33)) ; 0  1/(1−p_22)  p_32/((1−p_22)(1−p_33)) ; 0  0  1/(1−p_33) ]. (68)

If the model does not allow feedback, then p_ij = 0 for all i, j = 1, 2, 3, and

ρ_1 = α_1/μ_1 + α_3/μ_3, ρ_2 = α_2/μ_2 + α_3/μ_3. (69)
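As a quick numerical illustration (ours, not part of the paper), the criterion of Remark 4 can be evaluated directly from the matrices above; the arrival and service rates and the feedback probabilities in the sketch below are arbitrary example values.

```python
# Numerical check of the stability criterion (46) for the W-model of Remark 3:
# rho = C M Q alpha with Q = (I - P^T)^{-1}. All numeric values are examples only.
import numpy as np

alpha = np.array([0.3, 0.4, 0.2])          # exogenous arrival rates alpha_k
mu    = np.array([1.0, 1.2, 0.8])          # service rates mu_k
P = np.array([[0.1, 0.0, 0.0],             # "non-crossing" routing matrix as in (42)
              [0.0, 0.2, 0.0],
              [0.3, 0.2, 0.1]])
C = np.array([[1, 0, 1],                   # server/class incidence matrix, Eq. (19)
              [0, 1, 1]])

Q   = np.linalg.inv(np.eye(3) - P.T)       # Q = (I - P^T)^{-1}, cf. Eq. (68)
lam = Q @ alpha                            # traffic equation (8): lambda = Q alpha
rho = C @ np.diag(1.0 / mu) @ lam          # rho = C M Q alpha, cf. Eq. (33)

print("lambda =", lam)
print("rho    =", rho)
print("stable (sufficient condition (46)):", bool(np.max(rho) < 1))
```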

4 Conclusion
We consider a non-overlapping Basic Collaboration queueing system, which
is a multi-class queueing system with “non-crossing” feedback, that
generalizes the so-called W -model [15]. In the system, some customer classes
cooperate to be processed by a subset of non-overlapping servers, and
feedback is allowed from each customer class to itself, and also from each
customer class needing cooperation of some servers to any of the concurrent
customer classes at any of these servers. We apply the fluid limit approach
methodology [3] to find the stability condition of the system.

Acknowledgement. The authors wish to thank the anonymous referees for careful
reading and helpful comments that resulted in an overall improvement of the paper.

References
1. Arthurs, E., Kaufman, J.S.: Sizing a message store subject to blocking criteria.
In: Proceedings of the Third International Symposium on Modelling and
Performance Evaluation of Computer Systems: Performance of Computer
Systems, pp. 547–564. North-Holland Publishing Co., Amsterdam (1979)
2. Brill, P., Green, L.: Queues in which customers receive simultaneous service
from a random number of servers: a system point approach. Manage. Sci. 30(1),
51–68 (1984)
3. Dai, J.G.: On positive Harris recurrence of multiclass queueing networks: a
unified approach via fluid limit models. Ann. Appl. Prob. 5(1), 49–77 (1995)
4. Dai, J.G.: A fluid limit model criterion for unstability of multiclass queueing
net- works. Ann. Appl. Prob. 6(3), 751–757 (1996)
5. Davis, M.H.A.: Piecewise deterministic Markov processes: a general class of non-
diffusion stochastic models. J. Roy. Statist. Soc. Ser. B. 46, 353–388 (1984)

6. Delgado, R.: State space collapse and stability of queueing networks. Math.
Meth. Oper. Res. 72, 477–499 (2010)
7. Delgado, R., Morozov, E.: Stability analysis of cascade networks via fluid
models. Perform. Eval. 82, 39–54 (2014)
8. Fletcher, G.Y., Perros, H., Stewart, W.: A queueing system where customers
require a random number of servers simultaneously. Eur. J. Oper. Res. 23, 331–
342 (1986)
9. Garnet, O., Mandelbaum, A.: An introduction to Skills-Based Routing and its
operational complexities. http://iew3.technion.ac.il/serveng/Lectures/SBR.pdf
10. Green, L.: Comparing operating characteristics of queues in which customers
require a random number of servers. Manage. Sci. 27(1), 65–74 (1980)
11. Kaufman, J.: Blocking in a shared resource environment. IEEE Trans. Commun.
29(10), 1474–1481 (1981)
12. Kelly, F.P.: Loss networks. Ann. Appl. Prob. 1(3), 319–378 (1991)
13. Kim, S.: M/M/s queueing system where customers demand multiple server use.
Ph.D. thesis, Southern Methodist University (1979)
14. Rumyantsev, A., Morozov, E.: Stability criterion of a multiserver model with
simultaneous service. Ann. Oper. Res. 252(1), 29–39 (2015). doi:10.1007/
s10479-015-1917-2
15. Talreja, R., Whitt, W.: Fluid models for overloaded multiclass many-server
queue- ing systems with first-come, first-served routing. Manage. Sci. 54, 1513–
1527 (2008)
16. Tikhonenko, O.: Generalized erlang problem for service systems with finite total
capacity. Probl. Inf. Transm. 41(3), 243–253 (2005)
17. Van Dijk, N.M.: Blocking of finite source inputs which require simultaneous
servers with general think and holding times. Oper. Res. Lett. 8(1), 45–52 (1989)
18. Whitt, W.: Blocking when service is required from several facilities
simultaneously. AT&T Tech. J. 64(8), 1807–1856 (1985)
19. Wong, D., Paciorek, N., Walsh, T., DiCelie, J., Young, M., Peet, B.: Concordia:
an infrastructure for collaborating mobile agents. In: Rothermel, K., Popescu-
Zeletin,
R. (eds.) MA 1997. LNCS, vol. 1219, pp. 86–97. Springer, Heidelberg (1997). doi:10.
1007/3-540-62803-7 26
Erlang Service System with Limited Memory Space Under Control of AQM Mechanism

Oleg Tikhonenko1(✉) and Wojciech M. Kempa2


1
Faculty of Mathematics and Natural Sciences, College of Sciences,
Cardinal Stefan Wyszynski University in Warsaw,
Ul. Woycickiego 1/3, 01-938 Warsaw, Poland
[email protected]
2
Institute of Mathematics, Silesian University of Technology,
Ul. Kaszubska 23, 44-100 Gliwice, Poland
[email protected]

Abstract. We investigate the M/G/n ≤ ∞/(0, V)-type Erlang loss service system with n ≤ ∞ independent service stations and a Poisson arrival stream, in which the volumes of entering demands and their processing times are generally distributed and, in general, are dependent random variables. Moreover, the total volume of all demands present simultaneously in the system is bounded by a non-random value V (system memory capacity). The enqueueing process is controlled by an AQM-type non-increasing accepting function. Two different acceptance rules are considered, in which the probability of acceptance either depends or does not depend on the volume of the arriving demand. The stationary queue-size distribution and the loss probability are found for both scenarios of the system behavior. Besides, some special cases are discussed. Numerical examples are attached as well.

Keywords: Active Queue Management (AQM) · Erlang service system · Loss probability · Queue-size distribution · Supplementary variables' technique

1 Introduction
Queueing systems with finite buffer capacities are commonly used, especially in the design and performance evaluation of telecommunication and computer networks. They are good models for describing different types of phenomena appearing in network nodes (e.g., IP routers). In packet-oriented networks, problems of buffer overflows and packet losses are typical. The classical Tail Drop (TD) mechanism rejects the entering packet only when the buffer is completely saturated and hence, due to the frequently complex nature of the traffic (e.g., in the Internet), it can give some negative consequences, like, e.g., losing packets in series, too long queueing delays or TCP synchronization.
The idea of Active Queue Management (AQM) is to implement a mechanism of entering-packet rejection that is possible even when the buffer
Oc Springer International Publishing AG 2017
P. Gaj et al. (Eds.): CN 2017, CCIS 718, pp. 366–379, 2017.
DOI: 10.1007/978-3-319-59767-6 29

is not saturated. Hence, the use of AQM can physically decrease the queue length at the node and allows for avoiding the hazard of buffer overflow by reducing the intensity of arrivals (as a consequence of packet dropping). The first AQM scheme, called RED (Random Early Detection), was proposed in [1], where a linear dropping function was presented. Some modifications of the original RED mechanism can be found, e.g., in [3, 10], where exponential and quadratic dropping functions were applied, respectively. An analysis of the stationary characteristics of the M/M/1/N queue with packet dropping was carried out, e.g., in [2].
In the paper we study the queue-size distribution in the M/G/n ≤ ∞/(0, V)-type Erlang loss service system with n ≤ ∞ independent service stations and a Poisson arrival stream, in which the volumes of entering demands and their processing times are generally distributed and, in general, are dependent random variables. Moreover, the total volume of all demands present simultaneously in the system is bounded by a non-random value V (system memory capacity). We define an AQM-type accepting function which qualifies the arriving packet for service with a probability depending on its volume and the total capacity of the packets present in the system at the pre-arrival epoch. The basics of the theory of queueing models with randomly distributed packet sizes and bounded system capacity can be found in [4, 5]. The representations for the steady-state queue-size distributions in such systems with Poisson arrivals were obtained in [7, 8], where exponentially and generally distributed service times were considered, respectively. Similar results for the multi-server model were derived in [9].
The paper is organized as follows. Section 2 contains a description of the queueing model and the necessary notation. In Sect. 3 we define a Markovian process describing the evolution of the system. In Sect. 4 we build a system of partial differential equations for the transient system characteristics. Section 5 contains the formulae for the stationary queue-size distribution and the loss probability, and in Sect. 6 we analyze some special cases of the original model. Section 7 contains examples of numerical results illustrating the theoretical formulae, and the last Sect. 8 presents conclusions and final remarks.
2 The Model and Notation
We consider the Erlang M/G/n ≤ ∞/(0, V)-type service system and denote by a the parameter (intensity) of the flow of entering demands. Let ζ and ξ be a demand volume and its service time, respectively, and let F(x, t) = P{ζ < x, ξ < t} be the joint distribution function of the non-negative random variables ζ and ξ. Then L(x) = F(x, ∞) and B(t) = F(∞, t) are the distribution functions of the random variables ζ and ξ, respectively. Denote by β_1 = Eξ = ∫_0^∞ t dB(t) the mean service time.
Let σ(t) be the total volume of demands present in the system at time instant t. The values of the process σ(t) are bounded by a constant positive value V (system memory capacity). Let η(t) be the number of demands present in the system at time instant t.
Consider the right-continuous non-increasing (accepting) function r(x) defined on the segment [0; V] such that r(V) ≥ 0, r(0) ≤ 1.
Assume that at the epoch t a demand of volume x arrives to the system when the total volume of the other demands present in it is equal to y. In what follows, we analyze two scenarios of system behavior: (1) the arriving demand is accepted for service with probability r(x + y) and removed from the system with probability 1 − r(x + y); (2) the arriving demand is accepted for service with probability r(y) and removed from the system with probability 1 − r(y). In both cases, the arriving demand is also removed from the system if x + y > V or η(t−) = n (if n < ∞).
If the customer of volume x arriving at the epoch t is not accepted to the system, then η(t) = η(t−) and σ(t) = σ(t−). If it is accepted to the system, then η(t) = η(t−) + 1 and σ(t) = σ(t−) + x. If servicing of the demand of volume x is completed at the epoch τ, then we get η(τ) = η(τ−) − 1 and σ(τ) = σ(τ−) − x.
Denote by σ_j(t) the volume of the jth demand present in the system at the time instant t (j = 1, . . . , k if η(t) = k, 1 ≤ k ≤ n). We agree to assume that the customers present in the system at an arbitrary time instant are numerated randomly, that is, if at the time instant t there are k demands in the system, then any of the possible k! numerations can be used with the same probability 1/k!. Denote by ξ_j^*(t) the remaining service time of the jth demand present in the system at time instant t.
Later on, the following notation for vectors will be used to shorten the relations:

Y_k = (y_1, …, y_k),  Y_k^j = (y_1, …, y_{j−1}, y_{j+1}, …, y_k),  (Y_k, u) = (y_1, …, y_k, u).   (1)
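For illustration, the admission rule of the two scenarios can be stated compactly in code. The following sketch is our own illustration (it is not part of the original model description); the function and parameter names are chosen freely, and the example accepting function anticipates the linear r(x) = 1 − x/V used later in Sect. 7.

```python
import random

def admit(x, y, k, n, V, r, scenario=1):
    """Decide whether a demand of volume x is accepted when the total volume
    of the other demands equals y and k of the n servers are busy.
    Scenario 1 uses acceptance probability r(x + y); scenario 2 uses r(y).
    In both scenarios the demand is rejected outright if x + y > V or k = n."""
    if k >= n or x + y > V:
        return False
    p = r(x + y) if scenario == 1 else r(y)
    return random.random() < p

# Example with an assumed linear accepting function r(z) = 1 - z/V and V = 8.
V = 8.0
r = lambda z: max(0.0, 1.0 - z / V)
print(admit(x=1.5, y=3.0, k=2, n=5, V=V, r=r, scenario=1))  # accepted w.p. r(4.5)
```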

3 Random Process and Functions Describing System Behavior

Behavior of the system under consideration we shall describe by the following Markovian process:

(η(t); σ_j(t), ξ_j^*(t), j = 1, …, η(t)),   (2)

where the components σ_j(t) and ξ_j^*(t) are absent if η(t) = 0; in this case, obviously, σ(t) = 0. Otherwise, we have σ(t) = ∑_{j=1}^{η(t)} σ_j(t).
Process (2) we shall characterize by functions having the following probability sense:

P_0(t) = P{η(t) = 0};   (3)

G_k(x, Y_k, t) dx = P{η(t) = k, σ(t) ∈ [x; x + dx), ξ_j^*(t) < y_j, j = 1, …, k};   (4)

Θ_k(Y_k, t) = P{η(t) = k, ξ_j^*(t) < y_j, j = 1, …, k} = ∫_0^V G_k(x, Y_k, t) dx;   (5)

P_k(t) = P{η(t) = k} = ∫_0^V G_k(x, ∞_k, t) dx = Θ_k(∞_k, t),  k = 1, …, n,   (6)
where ∞_k = (∞, …, ∞) (k components).
It follows from the random numeration of demands that the functions G_k(x, Y_k, t) and Θ_k(Y_k, t) are symmetrical with respect to permutations of components of the vector Y_k.
4 Equations for Introduced Functions and Their Solution

Consider a system whose behavior corresponds to the first scenario described in Sect. 2.
It can be shown by the supplementary variables technique [6], taking into consideration the aforementioned symmetry of the functions G_k(x, Y_k, t) and Θ_k(Y_k, t), that the partial differential equations for the functions (3)–(5) have the form:
P_0'(t) = −aP_0(t)∫_0^V r(v) dL(v) + ∂Θ_1(y,t)/∂y |_{y=0};   (7)

∂Θ_1(y,t)/∂t − ∂Θ_1(y,t)/∂y + ∂Θ_1(y,t)/∂y |_{y=0}
  = aP_0(t)∫_0^V r(v) d_v F(v,y) − a∫_0^V G_1(x,y,t)[∫_0^{V−x} r(x+v) dL(v)] dx + 2 ∂Θ_2(y,u,t)/∂u |_{u=0};   (8)

∂Θ_k(Y_k,t)/∂t − ∑_{j=1}^{k} [∂Θ_k(Y_k,t)/∂y_j − ∂Θ_k(Y_k,t)/∂y_j |_{y_j=0}]
  = (a/k) ∑_{j=1}^{k} ∫_0^V G_{k−1}(x,Y_k^j,t)[∫_0^{V−x} r(x+v) d_v F(v,y_j)] dx
    − a∫_0^V G_k(x,Y_k,t)[∫_0^{V−x} r(x+v) dL(v)] dx + (k+1) ∂Θ_{k+1}(Y_k,u,t)/∂u |_{u=0},  k = 2, …, n−1;   (9)

∂Θ_n(Y_n,t)/∂t − ∑_{j=1}^{n} [∂Θ_n(Y_n,t)/∂y_j − ∂Θ_n(Y_n,t)/∂y_j |_{y_j=0}]
  = (a/n) ∑_{j=1}^{n} ∫_0^V G_{n−1}(x,Y_n^j,t)[∫_0^{V−x} r(x+v) d_v F(v,y_j)] dx.   (10)
We say that the system is empty at time t if there are no demands in it at this time.
To explain (7)–(10), we first clarify the probability sense of some integrals appearing in these equations:
– ∫_0^V r(v) dL(v) is the probability that an arbitrary demand will be accepted to the system if it arrives at a moment when the system is empty;
– ∫_0^V r(v) d_v F(v,y) is the probability of the same event, but for a demand with service time less than y;
– ∫_0^V G_k(x,Y_k,t)[∫_0^{V−x} r(x+v) dL(v)] dx is the probability that a demand arriving at time t will be accepted if there are k other demands in the system at this moment and their remaining service times are less than y_1, …, y_k, respectively;
– ∫_0^V G_{k−1}(x,Y_k^j,t)[∫_0^{V−x} r(x+v) d_v F(v,y_j)] dx is the probability of the same event for an arriving demand with service time less than y_j, when the remaining service times of the other demands present in the system are less than y_1, …, y_{j−1}, y_{j+1}, …, y_k, respectively.
Suppose for simplicity that n = 2. Then the analysis of the Markovian process (2) leads to the following difference equations for the functions (3)–(5):

P_0(t+Δt) = P_0(t)[1 − aΔt ∫_0^V r(v) dL(v)] + Θ_1(Δt, t) + o(Δt);

Θ_1(y, t+Δt) = aΔt P_0(t)∫_0^V r(v) d_v F(v, y+Δt) + Θ_1(y+Δt, t) − Θ_1(Δt, t)
  − aΔt ∫_0^V G_1(x, y+Δt, t)[∫_0^{V−x} r(x+v) dL(v)] dx
  + Θ_2(Δt, y+Δt, t) + Θ_2(y+Δt, Δt, t) + o(Δt);

Θ_2(y_1, y_2, t+Δt) = (aΔt/2) ∫_0^V G_1(x, y_1+Δt, t)[∫_0^{V−x} r(x+v) d_v F(v, y_2+Δt)] dx
  + (aΔt/2) ∫_0^V G_1(x, y_2+Δt, t)[∫_0^{V−x} r(x+v) d_v F(v, y_1+Δt)] dx
  + Θ_2(y_1+Δt, y_2+Δt, t) − Θ_2(y_1+Δt, Δt, t) − Θ_2(Δt, y_2+Δt, t)
  − aΔt ∫_0^V G_2(x, y_1+Δt, y_2+Δt, t)[∫_0^{V−x} r(x+v) dL(v)] dx
  + Θ_3(Δt, y_1+Δt, y_2+Δt, t) + Θ_3(y_1+Δt, Δt, y_2+Δt, t) + Θ_3(y_1+Δt, y_2+Δt, Δt, t) + o(Δt).   (11)

From these difference equations, using the standard technique and taking into consideration the symmetry of the functions G_2(x, y_1, y_2, t), Θ_2(y_1, y_2, t) and Θ_3(y_1, y_2, y_3, t), we obtain the following partial differential equations:
P_0'(t) = −aP_0(t)∫_0^V r(v) dL(v) + ∂Θ_1(y,t)/∂y |_{y=0};

∂Θ_1(y,t)/∂t − ∂Θ_1(y,t)/∂y + ∂Θ_1(y,t)/∂y |_{y=0}
  = aP_0(t)∫_0^V r(v) d_v F(v,y) − a∫_0^V G_1(x,y,t)[∫_0^{V−x} r(x+v) dL(v)] dx + 2 ∂Θ_2(y,u,t)/∂u |_{u=0};

∂Θ_2(y_1,y_2,t)/∂t − ∂Θ_2(y_1,y_2,t)/∂y_1 + ∂Θ_2(y_1,y_2,t)/∂y_1 |_{y_1=0} − ∂Θ_2(y_1,y_2,t)/∂y_2 + ∂Θ_2(y_1,y_2,t)/∂y_2 |_{y_2=0}
  = (a/2) ∫_0^V G_1(x,y_1,t)[∫_0^{V−x} r(x+v) d_v F(v,y_2)] dx + (a/2) ∫_0^V G_1(x,y_2,t)[∫_0^{V−x} r(x+v) d_v F(v,y_1)] dx
  − a∫_0^V G_2(x,y_1,y_2,t)[∫_0^{V−x} r(x+v) dL(v)] dx + 3 ∂Θ_3(y_1,y_2,u,t)/∂u |_{u=0}.   (12)

For arbitrary k, this technique gives us Eqs. (7)–(10).
Let us introduce the notation

R(z, y) = ∫_0^z r(V−z+v) d_v F(v, y).   (13)
Then Eqs. (7)–(10) can be rewritten as

P_0'(t) = −aP_0(t)R(V, ∞) + ∂Θ_1(y,t)/∂y |_{y=0};   (14)

∂Θ_1(y,t)/∂t − ∂Θ_1(y,t)/∂y + ∂Θ_1(y,t)/∂y |_{y=0}
  = aP_0(t)R(V, y) − a∫_0^V G_1(x,y,t)R(V−x, ∞) dx + 2 ∂Θ_2(y,u,t)/∂u |_{u=0};   (15)

∂Θ_k(Y_k,t)/∂t − ∑_{j=1}^{k} [∂Θ_k(Y_k,t)/∂y_j − ∂Θ_k(Y_k,t)/∂y_j |_{y_j=0}]
  = (a/k) ∑_{j=1}^{k} ∫_0^V G_{k−1}(x,Y_k^j,t)R(V−x, y_j) dx − a∫_0^V G_k(x,Y_k,t)R(V−x, ∞) dx
  + (k+1) ∂Θ_{k+1}(Y_k,u,t)/∂u |_{u=0},  k = 2, …, n−1;   (16)

∂Θ_n(Y_n,t)/∂t − ∑_{j=1}^{n} [∂Θ_n(Y_n,t)/∂y_j − ∂Θ_n(Y_n,t)/∂y_j |_{y_j=0}]
  = (a/n) ∑_{j=1}^{n} ∫_0^V G_{n−1}(x,Y_n^j,t)R(V−x, y_j) dx.   (17)

If the inequality ρ = aβ_1 < ∞ takes place, the steady state exists for the system under consideration, and the following limits exist in the sense of weak convergence: η(t) ⇒ η, σ(t) ⇒ σ, ξ_j^*(t) ⇒ ξ_j^*, j = 1, …, η, where η, σ, ξ_j^* are the appropriate steady-state characteristics. Then, the following finite limits exist:

p_0 = lim_{t→∞} P_0(t) = P{η = 0};   (18)

g_k(x, Y_k) = lim_{t→∞} G_k(x, Y_k, t),
g_k(x, Y_k) dx = P{η = k; σ ∈ [x, x + dx); ξ_j^* < y_j, j = 1, …, k};   (19)

θ_k(Y_k) = lim_{t→∞} Θ_k(Y_k, t) = P{η = k; ξ_j^* < y_j, j = 1, …, k} = ∫_0^V g_k(x, Y_k) dx;   (20)

p_k = lim_{t→∞} P_k(t) = P{η = k} = ∫_0^V g_k(x, ∞_k) dx = θ_k(∞_k),   (21)

where k = 1, …, n.
It is clear that the steady-state functions g_k(x, Y_k) and θ_k(Y_k) are also symmetrical with respect to permutations of components of the vector Y_k.
In the steady state, for the value (18) and the functions (19), (20), we obtain the following equations that follow from Eqs. (14)–(17):

0 = −a p_0 R(V, ∞) + ∂θ_1(y)/∂y |_{y=0};   (22)

−∂θ_1(y)/∂y + ∂θ_1(y)/∂y |_{y=0} = a p_0 R(V, y) − a∫_0^V g_1(x, y)R(V−x, ∞) dx + 2 ∂θ_2(y,u)/∂u |_{u=0};   (23)

−∑_{j=1}^{k} [∂θ_k(Y_k)/∂y_j − ∂θ_k(Y_k)/∂y_j |_{y_j=0}]
  = (a/k) ∑_{j=1}^{k} ∫_0^V g_{k−1}(x, Y_k^j)R(V−x, y_j) dx − a∫_0^V g_k(x, Y_k)R(V−x, ∞) dx
  + (k+1) ∂θ_{k+1}(Y_k, u)/∂u |_{u=0},  k = 2, …, n−1;   (24)

−∑_{j=1}^{n} [∂θ_n(Y_n)/∂y_j − ∂θ_n(Y_n)/∂y_j |_{y_j=0}] = (a/n) ∑_{j=1}^{n} ∫_0^V g_{n−1}(x, Y_n^j)R(V−x, y_j) dx.   (25)
In the steady state, the following boundary conditions (or equilibrium equations) hold:

a∫_0^V g_k(x, Y_k)R(V−x, ∞) dx = (k+1) ∂θ_{k+1}(Y_k, u)/∂u |_{u=0},  k = 1, …, n−1.   (26)

It is clear that R(z, y) is the probability that an arriving demand with service time less than y will be accepted for service if immediately before its arrival the system had z units of free memory space. Then, obviously, the function R(z, ∞) represents the probability that an arbitrary demand is accepted for service under the condition that immediately before its arrival there were z units of free memory space in the system.
Consider the function H(z, y) = R(z, ∞) − R(z, y), which defines the probability that a demand with service time greater than or equal to y is accepted for service under the same condition. Let us present the distribution function F(x, t) as F(x, t) = L(x)B(t | ζ < x), where B(t | ζ < x) = P{ξ < t | ζ < x} is the conditional distribution function of the random variable ξ under the condition ζ < x. Then, we have

R(z, y) = ∫_0^z r(V−z+v) B(y | ζ = v) dL(v)   (27)

and, consequently,

H(z, y) = ∫_0^z r(V−z+v) [1 − B(y | ζ = v)] dL(v).   (28)

Let us introduce the function

Φ_y(z) = ∫_0^y H(z, u) du = ∫_0^z r(V−z+v) [∫_0^y (1 − B(u | ζ = v)) du] dL(v)   (29)

and the notation

S(z) = R(z, ∞) = ∂Φ_u(z)/∂u |_{u=0} = ∫_0^z r(V−z+v) dL(v),   (30)

assuming that B(0 | ζ = v) = 0. We also introduce the following notation for the Stieltjes convolution:

F_1 ∗ ⋯ ∗ F_k(x) = ∗_{j=1}^{k} F_j(x).   (31)

The kth order Stieltjes convolution of the function D(x) we shall denote by D^{(∗k)}(x).
Taking into consideration the aforementioned symmetry of the functions θ_k(Y_k) and the boundary conditions (26), one can easily show by direct substitution that the solution of the equation system (22)–(25) has the form:

g_k(x, Y_k) dx = C (a^k/k!) d_x[ ∗_{j=1}^{k} Φ_{y_j}(x) ],  k = 1, …, n,   (32)

where C is a constant to be determined later from the normalization condition. It follows from the last relation and (20) that

θ_k(Y_k) = C (a^k/k!) ∗_{j=1}^{k} Φ_{y_j}(V),  k = 1, …, n.   (33)

5 Steady-State Demands Number Distribution and Loss Probability

Let us introduce the function

A(z) = lim_{y→∞} Φ_y(z) = ∫_0^∞ H(z, u) du = ∫_0^z r(V−z+v) [∫_0^∞ (1 − B(u | ζ = v)) du] dL(v).   (34)

Here the integral ∫_0^∞ (1 − B(u | ζ = v)) du = E(ξ | ζ = v) is the conditional expectation of the demand service time under the condition ζ = v.
We establish from (21) that

p_k = C (a^k/k!) A^{(∗k)}(V),  k = 1, …, n,   (35)

where the constant C is determined from the normalization condition as

C = p_0 = [1 + ∑_{k=1}^{n} (a^k/k!) A^{(∗k)}(V)]^{−1}.   (36)

The relation for the loss probability P_loss follows from the equilibrium condition

a(1 − P_loss) = ∑_{k=1}^{n} k ∂θ_k(∞_{k−1}, u)/∂u |_{u=0},   (37)

according to which the mean number of customers accepted to the system during a time unit in the steady state is equal to the mean number of customers completing their service during the same time. After simple computations we get

P_loss = 1 − p_0 [S(V) + ∑_{k=1}^{n−1} (a^k/k!) S ∗ A^{(∗k)}(V)].   (38)
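The quantities in (35), (36) and (38) reduce to Stieltjes convolution powers of A and S evaluated at V, which can be approximated on a grid. The sketch below is a rough numerical illustration under our own assumptions (grid step, rectangle-rule convolution, and example functions taken from the exponential-volume special case (46) of Sect. 6.3); it is not code from the paper and no claim is made that it reproduces the figures of Sect. 7.

```python
import numpy as np
from math import factorial

def stieltjes_conv(F, G, h):
    """(F * G)(x) = int_0^x G(x - u) dF(u); F, G sampled on a grid of step h."""
    dF = np.gradient(F, h)                     # numerical "density" of F
    return np.convolve(dF, G)[: len(F)] * h

def distribution_and_loss(A, S, a, n, h):
    """Evaluate (35), (36) and (38) from grid samples of A(z) and S(z) on [0, V]."""
    conv_k = A.copy()                          # current convolution power A^{(*k)}
    AkV, SAkV = [], []
    for _ in range(n):
        AkV.append(conv_k[-1])                           # A^{(*k)}(V)
        SAkV.append(stieltjes_conv(S, conv_k, h)[-1])    # (S * A^{(*k)})(V)
        conv_k = stieltjes_conv(A, conv_k, h)            # A^{(*(k+1))}
    w = [a ** k / factorial(k) for k in range(1, n + 1)]
    p0 = 1.0 / (1.0 + sum(wk * ak for wk, ak in zip(w, AkV)))
    pk = [p0 * wk * ak for wk, ak in zip(w, AkV)]
    loss = 1.0 - p0 * (S[-1] + sum(w[k - 1] * SAkV[k - 1] for k in range(1, n)))
    return p0, pk, loss

# Example: the closed forms (46) with assumed values c = 1, f = 1, V = 8, n = 5, a = 0.7.
V, f, c, n, a, h = 8.0, 1.0, 1.0, 5, 0.7, 1e-3
z = np.arange(0.0, V + h, h)
S = (f * z + np.exp(-f * z) - 1.0) / (f * V)
A = c * (f * z - 2.0 + (2.0 + f * z) * np.exp(-f * z)) / (f ** 2 * V)
print(distribution_and_loss(A, S, a, n, h))
```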
We recall that all the above relations refer to the first scenario of system behavior (see Sect. 2). To study the second scenario, one needs to define the function R(z, y) by the following relation differing from (13): R(z, y) = r(V − z)F(z, y). It can be easily shown that Eqs. (22)–(26) stay the same for this function and, in this case, the functions Φ_y(z), A(z) and S(z) take the following forms:

Φ_y(z) = r(V−z)L(z) ∫_0^y [1 − B(u | ζ < z)] du,

A(z) = r(V−z)L(z) ∫_0^∞ [1 − B(u | ζ < z)] du = r(V−z) ∫_0^z E(ξ | ζ = v) dL(v),

S(z) = r(V−z)L(z).   (39)

The probability sense of the function S(z) remains the same for both scenarios of system behavior.

6 Analysis of Some Special Cases

6.1 Demand Service Time Does Not Depend on Its Volume

In this case we have F(x, t) = L(x)B(t). Then, for both scenarios of system behavior, we obtain that A(z) = β_1 S(z). Therefore, the demands number distribution and the loss probability do not depend on the form of the distribution function B(t); these characteristics depend on the first moment β_1 of the service time only. The steady-state demands number distribution in this case has the form:

p_0 = [1 + ∑_{k=1}^{n} (ρ^k/k!) S^{(∗k)}(V)]^{−1},   (40)

p_k = p_0 (ρ^k/k!) S^{(∗k)}(V),  k = 1, …, n,   (41)

where ρ = aβ_1 and S(z) is defined by (30) for the first scenario and by (39) for the second one. The relation for the loss probability, as follows from (38), takes the form:

P_loss = 1 − p_0 [S(V) + ∑_{k=1}^{n−1} (ρ^k/k!) S^{(∗(k+1))}(V)].   (42)
6.2 Classical Erlang System with AQM

Let us consider the system under the second scenario with demands of volume equal to 1. It is clear that, in this case, we can assume that the service time does not depend on the demand volume, and the demands number characteristics (including the loss probability) depend on the first moment of the service time only. In this case, we denote the memory capacity of the system by N, N ≤ n.


Denote by r_i = r(i) the probability that an arriving demand will be accepted to the system if immediately before its arrival there were i other demands in it, i = 0, …, N. Obviously, r_i = 1 − d_i, where d_i is the classical dropping function [2]. Then, we obtain from (39)–(42) that

p_0 = [1 + ∑_{j=1}^{N} (ρ^j/j!) ∏_{i=0}^{j−1} r_i]^{−1};  p_k = p_0 (ρ^k/k!) ∏_{i=0}^{k−1} r_i,  k = 1, …, N;   (43)

P_loss = 1 − p_0 ∑_{k=0}^{N−1} (ρ^k/k!) ∏_{i=0}^{k} r_i.
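Formula (43) is elementary to evaluate. The following sketch (our own illustration; the linear dropping function d_i = i/N is only an assumed example) computes p_0, p_k and P_loss for a given vector of acceptance probabilities r_0, …, r_N. With all r_i = 1 it reduces to the classical Erlang loss (Tail Drop) system.

```python
from math import factorial

def erlang_aqm(rho, r):
    """r = [r_0, ..., r_N]: acceptance probabilities; returns (p_0, [p_1..p_N], P_loss)."""
    N = len(r) - 1
    prod, terms = 1.0, []
    for k in range(1, N + 1):
        prod *= r[k - 1]                           # prod_{i=0}^{k-1} r_i
        terms.append(rho ** k / factorial(k) * prod)
    p0 = 1.0 / (1.0 + sum(terms))
    pk = [p0 * t for t in terms]
    acc, prod = 0.0, 1.0
    for k in range(N):                             # sum_{k=0}^{N-1} rho^k/k! prod_{i=0}^{k} r_i
        prod *= r[k]
        acc += rho ** k / factorial(k) * prod
    return p0, pk, 1.0 - p0 * acc

# Example: N = 5, rho = 0.7 and the assumed linear dropping function d_i = i/N.
N, rho = 5, 0.7
print(erlang_aqm(rho, [1.0 - i / N for i in range(N + 1)]))
```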
6.3 Service Time Is Proportional to Demand Volume

Let us assume that ξ = cζ, c > 0. Then we obviously have

E(ξ | ζ = v) = ∫_0^∞ [1 − B(y | ζ = v)] dy = cv,   (44)

and, for the first scenario of system behavior, we get

A(z) = c ∫_0^z v r(V−z+v) dL(v),   (45)

while the function S(z) has the form (30). For example, if the demand volume has an exponential distribution L(x) = 1 − e^{−fx}, f > 0, and r(x) = 1 − x/V for x ∈ [0; V], we obtain, taking into consideration (30), that

S(z) = (fz + e^{−fz} − 1)/(fV),  A(z) = (c/(f^2 V)) [fz − 2 + (2 + fz)e^{−fz}].   (46)

Thus, all components of formulas (35)–(38) are determined.
For the system under the second scenario, we get

A(z) = c r(V−z) ∫_0^z v dL(v),   (47)

and S(z) is determined by (39). For the exponentially distributed demand volume and the same function r(x) as above, we obtain

S(z) = (z/V)(1 − e^{−fz}),  A(z) = (cz/(fV)) [1 − (1 + fz)e^{−fz}].   (48)
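The closed forms (46) and (48) can be checked against the defining integrals by numerical quadrature. The sketch below is our own verification under assumed parameter values (f = 1, V = 8, c = 1, z = 5); it uses scipy.integrate.quad for the integrals in (30), (45), (39) and (47).

```python
from math import exp
from scipy.integrate import quad

f, V, c, z = 1.0, 8.0, 1.0, 5.0
r = lambda x: 1.0 - x / V
dL = lambda v: f * exp(-f * v)          # density of L(x) = 1 - exp(-f x)

# First scenario, Eqs. (30) and (45) vs. the closed forms (46):
S1_num = quad(lambda v: r(V - z + v) * dL(v), 0.0, z)[0]
A1_num = quad(lambda v: c * v * r(V - z + v) * dL(v), 0.0, z)[0]
S1 = (f * z + exp(-f * z) - 1.0) / (f * V)
A1 = c * (f * z - 2.0 + (2.0 + f * z) * exp(-f * z)) / (f ** 2 * V)
print(abs(S1 - S1_num) < 1e-7, abs(A1 - A1_num) < 1e-7)

# Second scenario, Eqs. (39) and (47) vs. the closed forms (48):
L = lambda x: 1.0 - exp(-f * x)
S2_num = r(V - z) * L(z)
A2_num = c * r(V - z) * quad(lambda v: v * dL(v), 0.0, z)[0]
S2 = (z / V) * (1.0 - exp(-f * z))
A2 = c * z * (1.0 - (1.0 + f * z) * exp(-f * z)) / (f * V)
print(abs(S2 - S2_num) < 1e-7, abs(A2 - A2_num) < 1e-7)
```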
7 Numerical Examples

In this section we present numerical examples illustrating the theoretical results. Let us consider, firstly, the case of the demand service time which does not depend on the volume. Assume that the volumes of entering demands are exponentially distributed with parameter f = 1 and that the accepting function is a linear one, i.e. r(x) = 1 − x/V, x ∈ [0, V]. For V = 8, n = 5 and two different values of the traffic load, ρ = 0.70 and ρ = 1.00, we obtain the stationary queue-size distributions shown in Figs. 1 and 2 (for the first and second scenarios of the system's operation). The values of the loss probability equal 0.350704 (ρ = 0.70) and 0.409832 (ρ = 1.00) for the first scenario, and 0.252966 and 0.318422 for the second scenario, respectively.
In Figs. 3 and 4 we present similar results for the case V = 15 and n = ∞ (we give the successive probabilities from 0 to 10). For the case of an infinite number of independent servers we get the loss probability equal to 0.302237 (for ρ = 0.70) and 0.363743 (for ρ = 1.00) in the case of the first scenario of the system operation. Similarly, for the second scenario, we have 0.243615 and 0.307163, respectively.
Moreover, in Figs. 5 and 6 the case of the service time being proportional to the demand volume is presented, where the coefficient of proportionality c = 1, a = 0.70 and 1.00, and the remaining system parameters are the same as in Figs. 3 and 4. For the first scenario we get loss probabilities equal to 0.339437 (a = 0.7) and 0.396688 (a = 1). Similarly, for the second scenario of the system's operation we obtain 0.269370 and 0.338323, respectively.
Lastly, in Figs. 7 and 8 the stationary queue-size distribution is shown for the case of c = 1.5, two different values of the arrival intensity, namely a = 2 and a = 1, and, as in Figs. 5 and 6, taking V = 8, f = 1 and n = 5. The obtained values of the loss probability are 0.593461 (first scenario) and 0.228903 (second scenario) for a = 2 and, respectively, 0.468284 and 0.0074844 for a = 1.

Fig. 1. 5 servers, first scenario (independent)
Fig. 2. 5 servers, second scenario (independent)
Fig. 3. Infinite number of servers, first scenario (independent)
Fig. 4. Infinite number of servers, second scenario (independent)
Fig. 5. 5 servers, first scenario (proportional), c = 1
Fig. 6. 5 servers, second scenario (proportional), c = 1
Fig. 7. 5 servers, first scenario (proportional), c = 1.5
Fig. 8. 5 servers, second scenario (proportional), c = 1.5

8 Conclusions

In the paper a multi-server Erlang loss queueing system with Poisson arrival stream, finite memory capacity and n ≤ ∞ independent service stations is considered, in which the volumes of the arriving demands and their service times are dependent random variables. An AQM-type non-increasing accepting function is implemented for controlling the incoming flow of demands. Two possible acceptance rules are considered: in the first one the arriving demand is accepted for service with probability depending on its volume and on the volume of all demands being processed at the arrival epoch; in the second one the accepting probability is independent of the volume of the arriving demand. Using the supplementary variables technique, the stationary queue-size distribution and the loss probability are obtained for both scenarios of the system behavior. Three different special cases are discussed and illustrated via numerical examples. The results obtained in the paper can be used for estimating the total demand capacity characteristics in the nodes of computer and telecommunication networks.

References

1. Floyd, S., Jacobson, V.: Random early detection gateways for congestion avoidance. IEEE/ACM Trans. Netw. 1(4), 397–412 (1993)
2. Kempa, W.M.: On main characteristics of the M/M/1/N queue with single and batch arrivals and the queue size controlled by AQM algorithms. Kybernetika 47(6), 930–943 (2011)
3. Liu, S., Basar, T., Srikant, R.: Exponential RED: a stabilizing AQM scheme for low- and high-speed TCP protocols. IEEE/ACM Trans. Netw. 13, 1068–1081 (2005)
4. Tikhonenko, O.M.: Generalized Erlang problem for service systems with finite total capacity. Probl. Inf. Transm. 41(3), 243–253 (2005)
5. Tikhonenko, O.M.: Queueing systems of a random length demands with restrictions. Autom. Remote Control 52(10), pt. 2, 1431–1437 (1991)
6. Tikhonenko, O.: Computer Systems Probability Analysis. Akademicka Oficyna Wydawnicza EXIT, Warsaw (2006). (in Polish)
7. Tikhonenko, O., Kempa, W.M.: The generalization of AQM algorithms for queueing systems with bounded capacity. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds.) PPAM 2011. LNCS, vol. 7204, pp. 242–251. Springer, Heidelberg (2012). doi:10.1007/978-3-642-31500-8_25
8. Tikhonenko, O., Kempa, W.M.: Queue-size distribution in M/G/1-type system with bounded capacity and packet dropping. In: Dudin, A., Klimenok, V., Tsarenkov, G., Dudin, S. (eds.) BWWQT 2013. CCIS, vol. 356, pp. 177–186. Springer, Heidelberg (2013). doi:10.1007/978-3-642-35980-4_20
9. Tikhonenko, O., Kempa, W.M.: On the queue-size distribution in the multiserver system with bounded capacity and packet dropping. Kybernetika 49(6), 855–867 (2013)
10. Zhou, K., Yeung, K.L., Li, V.O.K.: Nonlinear RED: a simple yet efficient active queue management scheme. Comput. Netw. 50(18), 3784–3794 (2006)
Queueing Systems with Demands
of Random Space Requirement
and Limited Queueing or Sojourn Time

Oleg Tikhonenko1(✉) and Pawel Zajac2


1 Faculty of Mathematics and Natural Sciences, College of Sciences, Cardinal Stefan Wyszyński University in Warsaw, Ul. Wóycickiego 1/3, 01-938 Warsaw, Poland
[email protected]
2 Institute of Mathematics, Czestochowa University of Technology, Al. Armii Krajowej 21, 42-200 Czestochowa, Poland
pawel [email protected]

Abstract. We investigate queueing systems with demands of random space requirements and limited buffer space, in which the queueing or sojourn time is limited by some constant value. For such systems, in the case of exponentially distributed service time and Poisson entry, we obtain the steady-state demands number distribution and the demand loss probability.

Keywords: Queueing system · Buffer space capacity · Demand volume · Demands total volume · Queueing time · Sojourn time

1 Introduction

We consider queueing systems with demands of a random space requirement (volume) and a limited buffer space capacity. This means that each demand is characterized by some non-negative random indication named the demand space requirement or demand volume ζ. The total sum σ(t) of the volumes of all demands present in the system at an arbitrary time instant t is limited by some constant value V, which is named the buffer space capacity of the system. Such systems have been used to model and solve various practical problems occurring in the design of computer and communication systems. They were widely studied in the literature (see, e.g., [3, 5–12]).
In our work, we study queueing systems in which demands are also "impatient". In other words, they can leave the system during their waiting in the queue, or even during their servicing.
Such systems are models of some real processes. E.g., systems of information transmission often deal with the problem of message information becoming outdated: the outdated messages can be removed from the system before their transmission. In this case, we have a system with limited queueing time. The typical

example of the system with limited sojourn time is a radar for airplanes
supervi- sion. This installation is characterized by limited area of servicing.
The problem is to keeping up with the service of all airplanes that are in
this area during some constant or random amount of time.
In the classical queueing theory (for demands without volume), systems
with limited queueing or sojourn time were investigated, e.g., in [2]. We
consider systems with queueing or sojourn time limited by some constant
value τ . In the systems under consideration, the buffer space and the
number of waiting places in the queue can be also limited.
For the systems under consideration, we obtain the steady-state
demands number distribution and the probability that a demand is lost or
leaves the system because of queueing or sojourn time limitation.
This work is organized as follows. In Sect. 2, we give the mathematical
description of the models and introduce some necessary notations. In Sect.
3, we investigate the system with limited queueing time. In Sect. 4, we
investigate the system with limited sojourn time. Section 5 contains
concluding remarks.

2 Models Description

Consider the M/M/n/m-type queueing system with identical servers and the FIFO service discipline. Let a be the intensity of the demands entrance flow and µ be the parameter of the service time. Each demand has some random volume ζ which does not depend on the volumes of other demands nor on the demand arriving epoch. Let L(x) = P{ζ < x} be the demand volume distribution function and σ(t) be the sum of volumes of all demands present in the system at time instant t. The values of the process σ(t) are limited by the constant value V (buffer space capacity). Let us denote by η(t) the number of demands present in the system at time t.
Let a demand having the volume x arrive to the system at epoch t. Then it will be accepted to the system if η(t−) < n + m and σ(t−) + x ≤ V. In this case, we have η(t) = η(t−) + 1, σ(t) = σ(t−) + x. In the opposite case, the demand will be lost and η(t) = η(t−), σ(t) = σ(t−). If t is the epoch when a demand of volume x leaves the system, we have η(t) = η(t−) − 1, σ(t) = σ(t−) − x.
Assume that the demand service time ξ does not depend on its volume ζ. In the system with limited queueing time, a waiting demand leaves the system immediately if its queueing time achieves the value τ. In this system, a demand on service can never be lost. In the system with limited sojourn time, a demand will be lost if its sojourn time achieves the value τ.

3 Queueing System with Limited Buffer Space


and Limited Queueing Time
3.1 Process and Characteristics
Let ξj(t) be the length of time interval from the moment t to the moment
when the jth demand leaves the queue (starts its service or is
lost),
j = n + 1, …, η(t). Let σ_j(t) be the volume of the jth demand. It is clear that σ(t) = ∑_{j=1}^{η(t)} σ_j(t).
The system behavior is described by the following Markov process:

(η(t); σ_j(t), j = 1, …, η(t)),  1 ≤ η(t) ≤ n,
(η(t); σ_j(t), j = 1, …, η(t); ξ_l(t), l = n+1, …, η(t)),  n < η(t) ≤ n + m.   (1)
Let us introduce the functions characterizing the process (1). First, we introduce the functions

P_k(t) = P{η(t) = k},  k = 0, 1, …, n + m.   (2)

For k = 1, 2, …, n + m, we introduce the functions

G_k(t, x) = P{η(t) = k, σ(t) < x}.   (3)

It is clear that, for such k, we have P_k(t) = G_k(t, V).
For n + 1 ≤ k ≤ n + m, we introduce the functions R_k(t, x, y_{n+1}, …, y_k) and H_k(t, y_{n+1}, …, y_k) = R_k(t, V, y_{n+1}, …, y_k) with the following probability sense:

R_k(t, x, y_{n+1}, …, y_k) dy_{n+1} ⋯ dy_k = P{η(t) = k; σ(t) < x; ξ_j(t) ∈ [y_j; y_j + dy_j), j = n+1, …, k},   (4)

H_k(t, y_{n+1}, …, y_k) dy_{n+1} ⋯ dy_k = P{η(t) = k; ξ_j(t) ∈ [y_j; y_j + dy_j), j = n+1, …, k}.   (5)


3.2 Steady-State Demands Number Distribution

It is easy to show that, for the functions (2)–(5), the following differential equations hold:

P_0'(t) = −aP_0(t)L(V) + µP_1(t);   (6)

P_1'(t) = aP_0(t)L(V) − a∫_0^V G_1(t, V−x) dL(x) − µP_1(t) + 2µP_2(t);   (7)

P_k'(t) = a∫_0^V G_{k−1}(t, V−x) dL(x) − a∫_0^V G_k(t, V−x) dL(x) − kµP_k(t) + (k+1)µP_{k+1}(t),  k = 2, …, n−1;   (8)

P_n'(t) = a∫_0^V G_{n−1}(t, V−x) dL(x) − a∫_0^V G_n(t, V−x) dL(x) − nµP_n(t) + H_{n+1}(t, 0);   (9)

P_k'(t) = a∫_0^V G_{k−1}(t, V−x) dL(x) − a∫_0^V G_k(t, V−x) dL(x)
  − ∫_0^τ∫_0^{y_k}⋯∫_0^{y_{n+3}} H_k(t, 0, y_{n+2}, …, y_k) dy_k ⋯ dy_{n+2}
  + ∫_0^τ∫_0^{y_{k+1}}⋯∫_0^{y_{n+3}} H_{k+1}(t, 0, y_{n+2}, …, y_{k+1}) dy_{k+1} ⋯ dy_{n+2},  k = n+1, …, n+m−1;   (10)

P_{n+m}'(t) = a∫_0^V G_{n+m−1}(t, V−x) dL(x)
  − ∫_0^τ∫_0^{y_{n+m}}⋯∫_0^{y_{n+3}} H_{n+m}(t, 0, y_{n+2}, …, y_{n+m}) dy_{n+m} ⋯ dy_{n+2}.   (11)

Assume that at least one of the values V and m is finite. Then, for ρ = a/(nµ) < ∞, the steady state exists for the system under consideration, i.e. η(t) ⇒ η and σ(t) ⇒ σ in the sense of weak convergence, where η and σ are the steady-state number of demands present in the system and their steady-state total volume, respectively. Hence, the following finite limits exist:

p_k = lim_{t→∞} P_k(t) = P{η = k},  k = 0, 1, …, n + m;   (12)

g_k(x) = lim_{t→∞} G_k(t, x) = P{η = k, σ < x},  k = 1, 2, …, n + m;   (13)

r_k(x, y_{n+1}, …, y_k) = lim_{t→∞} R_k(t, x, y_{n+1}, …, y_k),  k = n+1, …, n+m;   (14)

h_k(y_{n+1}, …, y_k) = lim_{t→∞} H_k(t, y_{n+1}, …, y_k),  k = n+1, …, n+m.   (15)

Then, from (6)–(11), we obtain the following equations for the steady-state functions (12)–(15):

0 = −ap_0 L(V) + µp_1;   (16)

0 = ap_0 L(V) − a∫_0^V g_1(V−x) dL(x) − µp_1 + 2µp_2;   (17)

0 = a∫_0^V g_{k−1}(V−x) dL(x) − a∫_0^V g_k(V−x) dL(x) − kµp_k + (k+1)µp_{k+1},  k = 2, …, n−1;   (18)

0 = a∫_0^V g_{n−1}(V−x) dL(x) − a∫_0^V g_n(V−x) dL(x) − nµp_n + h_{n+1}(0);   (19)

0 = a∫_0^V g_{k−1}(V−x) dL(x) − a∫_0^V g_k(V−x) dL(x)
  − ∫_0^τ∫_0^{y_k}⋯∫_0^{y_{n+3}} h_k(0, y_{n+2}, …, y_k) dy_k ⋯ dy_{n+2}
  + ∫_0^τ∫_0^{y_{k+1}}⋯∫_0^{y_{n+3}} h_{k+1}(0, y_{n+2}, …, y_{k+1}) dy_{k+1} ⋯ dy_{n+2},  k = n+1, …, n+m−1;   (20)

0 = a∫_0^V g_{n+m−1}(V−x) dL(x) − ∫_0^τ∫_0^{y_{n+m}}⋯∫_0^{y_{n+3}} h_{n+m}(0, y_{n+2}, …, y_{n+m}) dy_{n+m} ⋯ dy_{n+2}.   (21)

In the steady state, the following boundary conditions hold:

a∫_0^V g_n(V−x) dL(x) = h_{n+1}(0);   (22)

a∫_0^V g_k(V−x) dL(x) = ∫_0^τ∫_0^{y_{k+1}}⋯∫_0^{y_{n+3}} h_{k+1}(0, y_{n+2}, …, y_{k+1}) dy_{k+1} ⋯ dy_{n+2},  k = n+1, …, n+m−1.   (23)

It can be easily shown by direct substitution that the following functions are the solution of Eqs. (16)–(21) for which the boundary conditions (22) and (23) hold:

r_k(x, y_{n+1}, …, y_k) = p_0 (µ^{k−n}(nρ)^k/n!) e^{−nµy_k} L^{(∗k)}(x),  k = n+1, …, n+m,   (24)

whence we get

h_k(y_{n+1}, …, y_k) = p_0 (µ^{k−n}(nρ)^k/n!) e^{−nµy_k} L^{(∗k)}(V),  k = n+1, …, n+m.

For the functions g_k(x), we have

g_k(x) = p_0 ((nρ)^k/k!) L^{(∗k)}(x),  k = 1, 2, …, n;   (25)

g_k(x) = ∫_0^τ∫_0^{y_k}⋯∫_0^{y_{n+2}} r_k(x, y_{n+1}, …, y_k) dy_{n+1} ⋯ dy_k
  = p_0 (µ^{k−n}(nρ)^k/n!) L^{(∗k)}(x) ∫_0^τ e^{−nµy_k} dy_k ∫_0^{y_k} dy_{k−1} ⋯ ∫_0^{y_{n+2}} dy_{n+1}
  = p_0 (n^n ρ^k/n!) [1 − e^{−nµτ} ∑_{j=0}^{k−n−1} (nµτ)^j/j!] L^{(∗k)}(x),  k = n+1, …, n+m.   (26)
Finally, we obtain:

p_k = p_0 ((nρ)^k/k!) L^{(∗k)}(V),  k = 1, …, n;
p_k = p_0 (n^n ρ^k/n!) [1 − e^{−nµτ} ∑_{j=0}^{k−n−1} (nµτ)^j/j!] L^{(∗k)}(V),  k = n+1, …, n+m.

From the normalization condition ∑_{k=0}^{n+m} p_k = 1, we have (with L^{(∗0)}(V) = 1):

p_0 = {∑_{k=0}^{n} ((nρ)^k/k!) L^{(∗k)}(V) + (n^n/n!) ∑_{k=n+1}^{n+m} ρ^k [1 − e^{−nµτ} ∑_{j=0}^{k−n−1} (nµτ)^j/j!] L^{(∗k)}(V)}^{−1}.
3.3 Loss Probability

Let A be the event that an arbitrary arriving demand is accepted to the system and served completely. The probability of this event can be calculated as follows:

P{A} = p_0 L(V) + ∑_{k=1}^{n−1} ∫_0^V g_k(V−x) dL(x) + (1 − e^{−nµτ}) ∫_0^V g_n(V−x) dL(x)
  + ∑_{k=n+1}^{n+m−1} ∫_0^V∫_0^τ∫_0^{y_k}⋯∫_0^{y_{n+2}} r_k(V−x, y_{n+1}, …, y_k) (1 − e^{−nµ(τ−y_k)}) dL(x) dy_k ⋯ dy_{n+1},

whence, taking into consideration formulae (24) and (25), we obtain:

P{A} = p_0 {∑_{k=0}^{n−1} ((nρ)^k/k!) L^{(∗(k+1))}(V) + (n^n/n!) ∑_{k=n}^{n+m−1} ρ^k [1 − e^{−nµτ} ∑_{j=0}^{k−n} (nµτ)^j/j!] L^{(∗(k+1))}(V)}.

It is clear that the loss probability (i.e. the probability that a demand is lost at its arriving epoch or is not served completely) can be determined as

P_loss = 1 − P{A}.
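For exponentially distributed demand volumes, L(x) = 1 − e^{−fx}, the convolution L^{(∗k)} is the Erlang-k (gamma) distribution function, so the above formulae for p_k, p_0 and P_loss can be evaluated directly. The sketch below is our own evaluation under assumed parameter values; it follows the formulae of Sects. 3.2 and 3.3 as reconstructed here and uses scipy.stats.gamma for L^{(∗k)}.

```python
from math import exp, factorial
from scipy.stats import gamma

def Lk(x, k, f):
    """k-fold convolution of L(x) = 1 - exp(-f x): the Erlang-k CDF; L^{(*0)} = 1."""
    return 1.0 if k == 0 else gamma.cdf(x, a=k, scale=1.0 / f)

def poisson_head(k, z):
    """exp(-z) * sum_{j=0}^{k} z^j / j! (the truncation factor in the formulae)."""
    return exp(-z) * sum(z ** j / factorial(j) for j in range(k + 1))

def queueing_time_limited(a, mu, n, m, V, f, tau):
    rho = a / (n * mu)
    coef = {k: (n * rho) ** k / factorial(k) * Lk(V, k, f) for k in range(n + 1)}
    for k in range(n + 1, n + m + 1):
        coef[k] = (n ** n / factorial(n)) * rho ** k \
                  * (1.0 - poisson_head(k - n - 1, n * mu * tau)) * Lk(V, k, f)
    p0 = 1.0 / sum(coef.values())
    pk = {k: p0 * coef[k] for k in range(1, n + m + 1)}
    p_acc = sum((n * rho) ** k / factorial(k) * Lk(V, k + 1, f) for k in range(n))
    p_acc += (n ** n / factorial(n)) * sum(
        rho ** k * (1.0 - poisson_head(k - n, n * mu * tau)) * Lk(V, k + 1, f)
        for k in range(n, n + m))
    return p0, pk, 1.0 - p0 * p_acc   # (p_0, {p_k}, P_loss)

print(queueing_time_limited(a=1.0, mu=0.5, n=2, m=3, V=5.0, f=1.0, tau=2.0))
```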
4 Queueing System with Limited Buffer Space and Limited Sojourn Time

4.1 Process and Characteristics

Let us assume that demands are served according to the FIFO discipline. Then, accepted demands can leave the system only during their servicing (see, e.g., [2]). For this system, we denote by γ_l(t) the length of the time interval from the moment t to the moment when the lth server is released from servicing the demands accepted to the system before the moment t, if this server is busy at this moment. We assume that γ_l(t) = 0 if the lth server is free at this moment, l = 1, …, n. Later on, we shall numerate the busy (at time instant t) servers according to the appropriate values γ_l(t).
Then, the system behavior is described by the following Markov process:

(η(t); σ_j(t), j = 1, …, η(t); γ_l(t), l = 1, …, min(η(t), n)).   (27)

We shall characterize the process (27) by the following functions:

P_k(t) = P{η(t) = k},  k = 0, 1, …, n + m;   (28)

R_k(t, x, y_1, …, y_k) dy_1 ⋯ dy_k = P{η(t) = k; σ(t) < x; γ_l(t) ∈ [y_l; y_l + dy_l), l = 1, …, k},  k = 1, …, n;   (29)

R_k(t, x, y_1, …, y_n) dy_1 ⋯ dy_n = P{η(t) = k; σ(t) < x; γ_l(t) ∈ [y_l; y_l + dy_l), l = 1, …, n},  k = n+1, …, n+m;   (30)

H_k(t, y_1, …, y_k) dy_1 ⋯ dy_k = P{η(t) = k; γ_l(t) ∈ [y_l; y_l + dy_l), l = 1, …, k},  k = 1, …, n;   (31)

H_k(t, y_1, …, y_n) dy_1 ⋯ dy_n = P{η(t) = k; γ_l(t) ∈ [y_l; y_l + dy_l), l = 1, …, n},  k = n+1, …, n+m.   (32)

It is clear that H_k(t, y_1, …, y_k) = R_k(t, V, y_1, …, y_k) for 1 ≤ k ≤ n and H_k(t, y_1, …, y_n) = R_k(t, V, y_1, …, y_n) for n + 1 ≤ k ≤ n + m.
Taking into consideration the established way of numerating the servers, let us introduce the functions

G_k(t, x) = P{η(t) = k, σ(t) < x} = ∫_0^τ∫_0^{y_k}⋯∫_0^{y_2} R_k(t, x, y_1, …, y_k) dy_k ⋯ dy_1,  k = 1, …, n,   (33)

G_k(t, x) = P{η(t) = k, σ(t) < x} = ∫_0^τ∫_0^{y_n}⋯∫_0^{y_2} R_k(t, x, y_1, …, y_n) dy_n ⋯ dy_1,  k = n+1, …, n+m.   (34)

It is clear that, for all k = 1, …, n + m, we have P_k(t) = G_k(t, V).


4.2 Steady-State Demands Number Distribution

For the introduced functions, we can write the following equations:

P_0'(t) = −aP_0(t)L(V) + H_1(t, 0);   (35)

P_1'(t) = aP_0(t)L(V) − a∫_0^V G_1(t, V−x) dL(x) − H_1(t, 0) + ∫_0^τ H_2(t, 0, y_2) dy_2;   (36)

P_k'(t) = a∫_0^V G_{k−1}(t, V−x) dL(x) − a∫_0^V G_k(t, V−x) dL(x)
  − ∫_0^τ∫_0^{y_k}⋯∫_0^{y_3} H_k(t, 0, y_2, …, y_k) dy_k ⋯ dy_2
  + ∫_0^τ∫_0^{y_{k+1}}⋯∫_0^{y_3} H_{k+1}(t, 0, y_2, …, y_{k+1}) dy_{k+1} ⋯ dy_2,  k = 2, …, n−1;   (37)

P_n'(t) = a∫_0^V G_{n−1}(t, V−x) dL(x) − a∫_0^V G_n(t, V−x) dL(x)
  − ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} H_n(t, 0, y_2, …, y_n) dy_n ⋯ dy_2
  + ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} H_{n+1}(t, 0, y_2, …, y_n) dy_n ⋯ dy_2;   (38)

P_k'(t) = a∫_0^V G_{k−1}(t, V−x) dL(x) − a∫_0^V G_k(t, V−x) dL(x)
  − ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} H_k(t, 0, y_2, …, y_n) dy_n ⋯ dy_2
  + ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} H_{k+1}(t, 0, y_2, …, y_n) dy_n ⋯ dy_2,  k = n+1, …, n+m−1;   (39)

P_{n+m}'(t) = a∫_0^V G_{n+m−1}(t, V−x) dL(x) − ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} H_{n+m}(t, 0, y_2, …, y_n) dy_n ⋯ dy_2.   (40)
In the same way as in Sect. 3, we can write the steady-state equations for the number p_0 = P{η = 0} and the functions r_k, h_k and g_k (these functions do not depend on t) that are the steady-state analogues of the functions R_k, H_k and G_k, respectively. It follows from Eqs. (35)–(40) that the equations for the steady-state functions have the following form:

0 = −ap_0 L(V) + h_1(0);   (41)


0 = ap_0 L(V) − a∫_0^V g_1(V−x) dL(x) − h_1(0) + ∫_0^τ h_2(0, y_2) dy_2;   (42)

0 = a∫_0^V g_{k−1}(V−x) dL(x) − a∫_0^V g_k(V−x) dL(x)
  − ∫_0^τ∫_0^{y_k}⋯∫_0^{y_3} h_k(0, y_2, …, y_k) dy_k ⋯ dy_2
  + ∫_0^τ∫_0^{y_{k+1}}⋯∫_0^{y_3} h_{k+1}(0, y_2, …, y_{k+1}) dy_{k+1} ⋯ dy_2,  k = 2, …, n−1;   (43)

0 = a∫_0^V g_{n−1}(V−x) dL(x) − a∫_0^V g_n(V−x) dL(x)
  − ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} h_n(0, y_2, …, y_n) dy_n ⋯ dy_2
  + ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} h_{n+1}(0, y_2, …, y_n) dy_n ⋯ dy_2;   (44)

0 = a∫_0^V g_{k−1}(V−x) dL(x) − a∫_0^V g_k(V−x) dL(x)
  − ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} h_k(0, y_2, …, y_n) dy_n ⋯ dy_2
  + ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} h_{k+1}(0, y_2, …, y_n) dy_n ⋯ dy_2,  k = n+1, …, n+m−1;   (45)

0 = a∫_0^V g_{n+m−1}(V−x) dL(x) − ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} h_{n+m}(0, y_2, …, y_n) dy_n ⋯ dy_2.   (46)
In the steady state, the following boundary conditions hold:

a∫_0^V g_1(V−x) dL(x) = ∫_0^τ h_2(0, y_2) dy_2;   (47)

a∫_0^V g_k(V−x) dL(x) = ∫_0^τ∫_0^{y_{k+1}}⋯∫_0^{y_3} h_{k+1}(0, y_2, …, y_{k+1}) dy_{k+1} ⋯ dy_2,  k = 2, …, n−1;   (48)

a∫_0^V g_k(V−x) dL(x) = ∫_0^τ∫_0^{y_n}⋯∫_0^{y_3} h_{k+1}(0, y_2, …, y_n) dy_n ⋯ dy_2,  k = n, …, n+m−1.   (49)
It can be shown by direct substitution that the following functions satisfy Eqs. (41)–(43) and the boundary conditions (47) and (48):

r_k(x, y_1, …, y_k) = p_0 (nµρ)^k e^{−µ(y_1+⋯+y_k)} L^{(∗k)}(x),
h_k(y_1, …, y_k) = r_k(V, y_1, …, y_k) = p_0 (nµρ)^k e^{−µ(y_1+⋯+y_k)} L^{(∗k)}(V),
g_k(x) = ∫_0^τ∫_0^{y_k}⋯∫_0^{y_2} r_k(x, y_1, …, y_k) dy_k ⋯ dy_1 = p_0 ((nρ)^k/k!) (1 − e^{−µτ})^k L^{(∗k)}(x),

where ρ = a/(nµ), k = 1, …, n−1.
The following functions satisfy Eqs. (45) and (46) and the boundary conditions (49):

r_k(x, y_1, …, y_n) = p_0 (nµ)^n ρ^k (1 − e^{−µτ})^{k−n} e^{−µ(y_1+⋯+y_n)} L^{(∗k)}(x),
h_k(y_1, …, y_n) = r_k(V, y_1, …, y_n) = p_0 (nµ)^n ρ^k (1 − e^{−µτ})^{k−n} e^{−µ(y_1+⋯+y_n)} L^{(∗k)}(V),
g_k(x) = ∫_0^τ∫_0^{y_n}⋯∫_0^{y_2} r_k(x, y_1, …, y_n) dy_n ⋯ dy_1 = p_0 (n^n ρ^k/n!) (1 − e^{−µτ})^k L^{(∗k)}(x),

where k = n, …, n+m.
Then, for the demands number distribution, we obtain the following relation:

p_k = g_k(V) = p_0 ((nρ)^k/k!) (1 − e^{−µτ})^k L^{(∗k)}(V),  k = 1, …, n−1;
p_k = g_k(V) = p_0 (n^n ρ^k/n!) (1 − e^{−µτ})^k L^{(∗k)}(V),  k = n, …, n+m,

and, from the normalization condition ∑_{k=0}^{n+m} p_k = 1, we have

p_0 = [∑_{k=0}^{n} ((nρ)^k/k!) (1 − e^{−µτ})^k L^{(∗k)}(V) + (n^n/n!) ∑_{k=n+1}^{n+m} ρ^k (1 − e^{−µτ})^k L^{(∗k)}(V)]^{−1}.
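For exponential demand volumes the formulae above are straightforward to evaluate; the following short sketch (our own, with assumed parameters) does so using the Erlang-k form of L^{(∗k)}.

```python
from math import exp, factorial
from scipy.stats import gamma

def Lk(x, k, f):
    return 1.0 if k == 0 else gamma.cdf(x, a=k, scale=1.0 / f)   # Erlang-k CDF

def sojourn_time_limited(a, mu, n, m, V, f, tau):
    rho, q = a / (n * mu), 1.0 - exp(-mu * tau)
    coef = {k: (n * rho) ** k / factorial(k) * q ** k * Lk(V, k, f) for k in range(n + 1)}
    for k in range(n + 1, n + m + 1):
        coef[k] = (n ** n / factorial(n)) * rho ** k * q ** k * Lk(V, k, f)
    p0 = 1.0 / sum(coef.values())
    return p0, {k: p0 * coef[k] for k in range(1, n + m + 1)}

print(sojourn_time_limited(a=1.0, mu=0.5, n=2, m=3, V=5.0, f=1.0, tau=2.0))
```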

4.3 Loss Probability

Let A be the event that an arbitrary arriving demand is accepted to the system and served completely. The probability P{A} of this event is determined as follows:

P{A} = p_0 L(V)(1 − e^{−µτ})
  + (1 − e^{−µτ}) ∑_{k=1}^{n−1} ∫_0^V∫_0^τ⋯∫_0^{y_2} r_k(V−x, y_1, …, y_k) dL(x) dy_k ⋯ dy_1
  + ∑_{k=n}^{n+m−1} ∫_0^V∫_0^τ⋯∫_0^{y_2} r_k(V−x, y_1, …, y_n) (1 − e^{−µ(τ−y_1)}) dL(x) dy_n ⋯ dy_1.
If we substitute into this relation the obtained formulae for the functions r_k, we get:

P{A} = p_0 {∑_{k=0}^{n−1} ((nρ)^k/k!) (1 − e^{−µτ})^{k+1} L^{(∗(k+1))}(V) + (n^n/n!) ∑_{k=n}^{n+m−1} ρ^k (1 − e^{−µτ})^k L^{(∗(k+1))}(V)
  − (nµ)^n e^{−µτ} [∫_0^τ⋯∫_0^{y_3} y_2 e^{−µ(y_2+⋯+y_n)} dy_n ⋯ dy_2] ∑_{k=n}^{n+m−1} ρ^k (1 − e^{−µτ})^{k−n} L^{(∗(k+1))}(V)}.
For example, for n = 1, we have:

P{A} = p_0 [(1 − e^{−µτ})L(V) + ∑_{k=1}^{m} ∫_0^V∫_0^τ r_k(V−x, y) (1 − e^{−µ(τ−y)}) dL(x) dy]
  = p_0 {(1 − e^{−µτ})L(V) + [1 − (1 + µτ)e^{−µτ}] ∑_{k=1}^{m} ρ^k (1 − e^{−µτ})^{k−1} L^{(∗(k+1))}(V)},

and, for n = 2, we obtain:

P{A} = p_0 [(1 − e^{−µτ})L(V) + 2ρ(1 − e^{−µτ})^2 L^{(∗2)}(V)]
  + 2p_0 ∑_{k=2}^{m+1} ρ^k (1 − e^{−µτ})^k L^{(∗(k+1))}(V)
  − 4p_0 e^{−µτ} [1 − (1 + µτ)e^{−µτ}] ∑_{k=2}^{m+1} ρ^k (1 − e^{−µτ})^{k−2} L^{(∗(k+1))}(V).

Then, the loss probability is equal to P_loss = 1 − P{A}.


5 Conclusions

In this paper, we investigate queueing systems with constant limitations on the demands' total volume and on the queueing or sojourn time. We obtain formulae for the demands number distribution and the loss probability for the systems under consideration. The obtained formulae are not generally convenient for precise calculation, but the calculation is possible in some special cases (e.g., when the demand volume has a gamma or uniform distribution). In other cases, we can use the numerical inversion of the Laplace transform [1, 4].
The results obtained in the paper can be used for estimating message number characteristics and the loss probability in information systems with a limited area of servicing and limited buffer space.
References

1. Gaver, D.P.: Observing stochastic processes, and approximate transform inversion. Oper. Res. 14(3), 444–459 (1966)
2. Gnedenko, B.V., Kovalenko, I.N.: Introduction to Queueing Theory. Birkhäuser, Boston (1989)
3. Morozov, E., Nekrasova, R., Potakhina, L., Tikhonenko, O.: Asymptotic analysis of queueing systems with finite buffer space. In: Kwiecień, A., Gaj, P., Stera, P. (eds.) CN 2014. CCIS, vol. 431, pp. 223–232. Springer, Cham (2014). doi:10.1007/978-3-319-07941-7_23
4. Stehfest, H.: Algorithm 368: numeric inversion of Laplace transform. Commun. ACM 13(1), 47–49 (1970)
5. Tikhonenko, O.: Computer Systems Probability Analysis. Akademicka Oficyna Wydawnicza EXIT, Warsaw (2006). (in Polish)
6. Tikhonenko, O.: Determination of loss characteristics in queueing systems with demands of random space requirement. In: Dudin, A., Nazarov, A., Yakupov, R. (eds.) ITMM 2015. CCIS, vol. 564, pp. 209–215. Springer, Cham (2015). doi:10.1007/978-3-319-25861-4_18
7. Tikhonenko, O.: Districted capacity queueing systems: determination of their characteristics. Autom. Remote Control 58(6), pt. 1, 969–972 (1997)
8. Tikhonenko, O.M.: Generalized Erlang problem for service systems with finite total capacity. Probl. Inf. Transm. 41(3), 243–253 (2005)
9. Tikhonenko, O.M.: Queueing Models in Information Systems. Universitetskoe, Minsk (1990). (in Russian)
10. Tikhonenko, O.M.: Queueing systems of a random length demands with restrictions. Autom. Remote Control 52(10), pt. 2, 1431–1437 (1991)
11. Tikhonenko, O.: Queueing systems with common buffer: a theoretical treatment. In: Kwiecień, A., Gaj, P., Stera, P. (eds.) CN 2011. CCIS, vol. 160, pp. 61–69. Springer, Heidelberg (2011). doi:10.1007/978-3-642-21771-5_8
12. Tikhonenko, O.M.: Queuing systems with processor sharing and limited resources. Autom. Remote Control 71(5), 803–815 (2010)
Innovative Applications
Approaches for In-vehicle Communication –
An Analysis and Outlook

Arne Neumann1(✉), Martin Jan Mytych1, Derk Wesemann2, Lukasz Wisniewski1, and Jürgen Jasperneite3

1 inIT - Institute Industrial IT, OWL University of Applied Sciences, 32657 Lemgo, Germany
{arne.neumann,martin.mytych,lukasz.wisniewski}@hs-owl.de
2 OWITA GmbH, 32657 Lemgo, Germany
[email protected]
3 Fraunhofer Application Center Industrial Automation (IOSB-INA), 32657 Lemgo, Germany
[email protected]

Abstract. Electrical and electronic systems have been gaining importance for innovations in the automotive industry. Networking issues are a key factor in this process since they enable distributed control functions and user interaction, bringing together nodes from different vendors. This paper analyses available and emerging network technologies for in-vehicle communication from a requirements-driven perspective. It reviews successful network technologies from other application areas regarding a possible deployment in vehicular communication and distinguishes the passenger car and commercial vehicle sectors as far as possible. This contribution is oriented to the OSI reference model, showing the state of the art and future opportunities at the level of the several communication layers, with a focus on physical layer issues and medium access protocols and including information modeling aspects.

Keywords: In-vehicle networks · Controller Area Network · SAE J1939 · Isobus · BroadR-Reach · Reduced Twisted Pair Gigabit Ethernet · Time Sensitive Networking · OPC UA

1 Introduction

Automotive systems became complex systems with a considerable number of distributed electronic control units (ECUs) and even more sensors and actuators attached. In passenger cars the number of ECUs reached 70, processing about 2500 signal points, already ten years ago [1], and their numbers are still growing. From the late 1980s on, standardized serial communication protocols have been used to interconnect the ECUs and signals. This approach provides several advantages, including the following. Subsystems of different vendors become able to interact with each other, sensor data can be shared by different functions, and the number of wires in a vehicle can be reduced in comparison to parallel
wiring of sensors, actuators and ECUs, which results in lower costs for material and assembly, less weight and hence lower fuel consumption of the vehicle.
There are many and various application functions utilizing the communication infrastructure of vehicles, and new functions are evolving. For example, driver assistance systems improve towards autonomous driving, with truck platooning as a use case being deployed soon [2]. These functions impose requirements on both in-vehicle and car-to-X communication, where this paper focuses on in-vehicle networks. The application functions require different characteristics of the communication systems. For example, functions for driver assistance have a priority on determinism and functional safety, whereas other functionality, such as infotainment, has a priority on data throughput. As a consequence the communication structure consists of interconnected subsystems of heterogeneous technologies, which will be analyzed in this paper, and opportunities for improvements will be discussed.
In other industries Ethernet-based technologies have been introduced successfully. For example, in the industrial automation domain, Ethernet-based real-time protocols have been standardized and deployed in applications where fieldbus protocols were used before. This development was primarily motivated by better capabilities for network management, maintainability and communication performance. In IEEE there are currently activities to specify extensions to the lower layers of the Ethernet protocol which may support its utilization in in-vehicle networks. The paper also aims to analyze where and under which conditions Ethernet-based technologies, including their currently developed extensions, can support in-vehicle communication and how a migration could be done.
This paper reviews the requirements for communication networks in the domains of passenger cars and of commercial vehicles. It gives an overview of available technologies and discusses their applicability in commercial vehicles. The focus of the paper will be on network technologies which enable a broad range of vehicular applications, while technologies for specialized applications will be dealt with only briefly.

2 In-vehicle Networks

2.1 Automotive Networks and Topologies

Vehicular networks started with the controller area network (CAN, ISO 11898-2), developed in 1983 and presented in 1987, defining layers 1 and 2 of the OSI reference model [3]. It basically offers a linear bus topology, which greatly reduces the wiring efforts in cars. In addition to CAN as a universal solution, other vehicular communication systems have been developed for more specialized applications. The local interconnect network (LIN, ISO 17987-1 to -7) focuses on small networks, mainly for discrete I/O signals with low bandwidth requirements. LIN implements a master-slave topology offering a low-cost, single-wire solution compared to CAN-enabled devices. In the other direction, FlexRay was introduced in 2000, offering benefits over CAN in terms of bandwidth, real-time capability, redundancy and functional safety. The driving aspect was the advent of X-by-wire technologies, which needed a higher reliability and safety rating. FlexRay offers a redundant connection between nodes and supports both star and bus topologies. The ability to support time-critical closed-loop control applications, in conjunction with the resulting higher cost and complexity of the components, has confined FlexRay's usage to engine, steering and advanced driver assistance systems (ADAS). Media Oriented Systems Transport (MOST) was developed exclusively for telematics and multimedia applications and is utilized only in the infotainment system. Comprehensive surveys about the outlined network technologies can be found in the literature [3–6].
These core standards are still a subject of improvements. For example, for CAN there are SAE J2284/3 (High-Speed CAN for Vehicle Applications at 500 kbit/s), aiming at a high transmission rate and a higher allowable node count, and SAE J2411 (Single Wire CAN Network for Vehicle Applications), providing a simplified variant for low requirements regarding bit rate, bus length and robustness. As a disadvantage, compatibility issues sometimes arise, such as for CAN FD (flexible data-rate) [7].
Upon these communication layers, a number of protocols and standards have been developed for network control and data exchange. For CAN, this includes general purpose protocols like ISO 11898-4 (TTCAN, Time-Triggered Communication on CAN), industry-specific protocols like CANopen, SAE J1939 and ISOBUS [3, 4], and protocols for special purpose vehicles, mainly derived from CANopen, like EnergyBus (pedelecs, e-bikes), CleANopen (municipal vehicles) and FireCAN (DIN 14700, for external firefighter equipment). LIN does not comprise diverse higher layer protocols, but is most often terminated with a gateway to connect to an overlying CAN network. FlexRay, as a safety-critical subsystem, allows a diagnostic function via gateway, but also includes no diverse higher layer protocols. An outstanding application layer protocol is On-board Diagnostics (OBD), specifying self-diagnostic and reporting capabilities to assist the vehicle owner and repair technician. The development of OBD began in the 1980s, driven by legal requirements for continuous emission surveillance during the entire lifetime of a vehicle. There are several standards for OBD, some of them containing both protocol and data object definitions. At the beginning, ISO 14230 (Road vehicles - Diagnostic communication over K-Line, DoK-Line) gained importance, also known as KWP2000 and referring to ISO 15031-5 (Road vehicles - Communication between vehicle and external equipment for emissions-related diagnostics). Its CAN-based version ISO 15765-3 (Road vehicles - Diagnostic communication over Controller Area Network, DoCAN) has been widely implemented but never released by ISO. The most recent standard for OBD is ISO 14229 (Road vehicles - Unified diagnostic services, UDS). It focuses on application data and services, decoupling them from the lower layers. UDS provides data and services with the same semantics as the OBD standards based on KWP2000 and extends them, but the representation is not compatible. This collection is not complete; there are additional standards about OBD, such as definitions by SAE or about communication to external equipment.
The different areas of preferred application for each bus system have led to a heterogeneous network structure, so far with only few needs for interconnection; each segment is mainly designed to work standalone, exchanging mainly status information with other networks. Nowadays the bus segments are usually connected by a centralized gateway. A typical network architecture is shown in Fig. 1, while additional topology examples can be found in [3]. Other approaches focus on the introduction of backbone networks for different application areas and different positions in the vehicle, as described in [5, 8].

Fig. 1. Typical vehicular network architecture

Almost all available literature about vehicular network architectures aims at passenger cars, while the commercial vehicle sector is inadequately represented in the related work.

2.2 Communication Requirements and Applications

With more driver assistance, autonomous driving and other integral functionality, there are various applications demanding information exchange between the ECUs, sensors and actuators. These applications also require diverse qualities of service, mainly determined by the update rate of information. In [9] update interval requirements for commercial vehicles are given, such as tire pressure: 10 s, battery current: 1 s, cruise control information: 100 ms and electronic transmission controller information: 10 ms. This fact of variable requirements, in conjunction with the large number of communication nodes and the limited bus capacities, led to a functional separation of bus segments into subsystems. The most typical subsystems comprise the power train bus (engine and gear control), the chassis system (e.g. anti-lock brake system) and driver assistance
(e.g. electronic stability program), the body and comfort electronics (e.g. air conditioning), and the infotainment (audio and navigation). The power train bus needs to be generally accessible for diagnostics, including emission monitoring, to fulfill legal regulations, while the diagnostics of all other bus systems depends on manufacturer-specific tools.
The requirements imposed by the applications on the communication network comprise determinism, fault tolerance, data throughput and functional safety. Security requirements have to be managed for components which grant access from outside. This becomes a growing issue since car-to-X communication and infotainment connectivity pose growing challenges. In [10] an overview of the subsystems and their priorities of communication requirements is given. Table 1, extracted from this source, summarizes the assignment of the different requirements to the subsystems.

Table 1. Automotive subsystems and their major requirements acc. to [10]

Subsystem                 | Fault tolerance | Determinism | Bandwidth | Flexibility | Security
Chassis                   | Yes             | Yes         | Some      | No          | No
Airbag                    | Yes             | Yes         | Some      | No          | No
Powertrain                | Some            | Yes         | Yes       | Some        | No
Body and comfort          | No              | Some        | Some      | Yes         | No
X-by-wire                 | Yes             | Yes         | Some      | No          | No
Multimedia / Infotainment | No              | Some        | Yes       | Yes         | No
Wireless / Telematics     | No              | Some        | Some      | Yes         | Yes
Diagnostics               | No              | Some        | No        | Yes         | Yes

An advantage of the segmented topology is that the dependability of the communication for critical applications can be achieved by taking into account only a small number of components interconnected by a single segment. Additionally, every single bus segment can be configured in a way that exactly matches the specific application requirements. On the other hand, the application functions of the vehicles become more complex and require information exchange across several bus segments. The communication paths necessary for this involve segment transitions, which lead to an additional, resource-demanding load for those ECUs acting as gateways between the bus segments. The number of cross-segment functions and gateways influences the efficiency of the overall network topology.

2.3 Influences of Upcoming Power Concepts on In-vehicle Networks

Vehicles like heavy duty road trains, buses and equipment for forestry and agriculture are characterized by a large number of auxiliary aggregates. These
auxiliaries comprise compressors, fans, hydraulic pumps for servo-assisted steering, lifts etc. Nowadays they are usually driven directly by the combustion engine. The available power budget is coupled to the speed of the engine and cannot be steered on demand. Therefore the aggregates have to be scaled in a way that they can be operated at low engine speed. As a consequence, the weight and size of the aggregates rise, decreasing the efficiency of the vehicle and resulting in a higher fuel consumption. In contrast to this, electrically powered drives allow a flexible power supply management which can adjust the power to operate an aggregate depending on the individual demand. Hence, the introduction of electric drives for the auxiliaries has a high potential to increase their efficiency. An accompanying effect of this concept is a significant increase of the number of communication nodes and signal points of the in-vehicle network, since the power management will require information exchange among the electrically powered auxiliaries and between the auxiliaries and other vehicle equipment. In contrast to passenger cars with a mostly static configuration, in the context of commercial vehicles the communication topology is more dynamic due to the frequently changing truck/trailer or tractor/implement combinations. Especially upon the initial composition of such a combination, the exchange of device descriptions of the auxiliaries can become necessary, which will cause a high amount of data to be transferred. Even though this scenario does not happen very often, it is a procedure lasting some minutes when realized by conventional in-vehicle networks. Additionally, there is the challenge of introducing many instances of the same or at least of a similar device type to the vehicle network. This opportunity will become important especially for modular devices, and it shows a lack of scalability of the current communication standards. Requirements coming from this use case may exceed not only the number of physical nodes but also the number of logical addresses of a network segment when the modules shall be addressed individually. Another issue is the information model. The standardized information models for commercial vehicles, for example SAE J1939 [9], describe only the commonly available data objects and do not allow a dynamic management of the object pool. Currently, additional objects can only be described in a proprietary way, which increases the engineering effort for the information exchange.

3 Physical Layer Aspects


In this chapter, the state of the art technologies which are most widespread in the automotive domain will be described. These are CAN (ISO 11898-2) for the passenger car sector and SAE J1939 as a CAN-based adaptation for the commercial vehicle sector. In contrast to this, a state of the art technology which is widespread in other domains will be introduced, Ethernet 100BASE-TX. Relevant criteria are robustness, bit rate, number of nodes, network extension and topology in order to fulfill application requirements on fault tolerance, bandwidth and scalability. With Ethernet 100BASE-T1 and Ethernet 1000BASE-T1, two emerging technologies will be described which are promising candidates to enable Ethernet based protocols on a physical layer that is as simple and reliable as today's solutions.
Currently used in-vehicle networks have different physical layer characteristics because of their design and application area. Upcoming technologies and concepts like ADAS or in-vehicle power concepts require network systems with a higher bandwidth to handle the amount of data. Ethernet is generally regarded as the next in-vehicle network for future development. A comparison of physical layer characteristics of CAN and Ethernet is shown in Table 2.

Table 2. Physical layer characteristics of CAN and Ethernet based communication

                      CAN 2.0       SAE J1939      Ethernet        BroadR-Reach     Ethernet
                                                   100BASE-TX      (100BASE-T1)     1000BASE-T1
Standardization       ISO 11898-2   SAE J1939      IEEE 802.3      IEEE 802.3bw     IEEE 802.3bp
                                                   Clause 25
Possible topologies   Bus           Bus            Star            Star             Star
Max. transfer speed   1 Mbit/s      250 kbit/s     100 Mbit/s      100 Mbit/s       1 Gbit/s
Max. cable length     40 m for      40 m           100 m           15 m             15 m
                      1 Mbit/s
Transmission media    Copper,       Copper,        Copper, 2       Copper, single   Copper, single
                      twisted       shielded       unshielded      unshielded       unshielded
                      pair          twisted pair   twisted pairs   twisted pair     twisted pair

Typical CAN applications range from engine control and diagnostics to comfort electronics, with different bandwidths being employed. Typically, the range below 125 kbit/s is regarded as low-speed CAN, or CAN B, and the range from 125 kbit/s up to 1 Mbit/s is regarded as high-speed CAN, or CAN C. The CAN A class defines a bandwidth of 10 kbit/s or lower, historically used for diagnostic purposes. The maximum line length depends on the chosen bandwidth. This limitation arises from the propagation time of the signal on the medium combined with the need for CAN to sample the received data exactly bit-synchronously. All bus nodes need to see the same bit value at the same point in time. Fault tolerance is achieved by the use of differential signaling and the insertion of stuff bits after 5 consecutive identical bit values, guaranteeing a state transition for synchronization. For commercial vehicles, the Society of Automotive Engineers (SAE) defines a communication protocol standard named J1939. It uses CAN as physical layer and is widely in use. Compared to CAN 2.0, SAE J1939 sets some limitations for the physical layer. The standard defines a maximum transfer speed of 250 kbit/s with a maximum cable length of 40 m, below the allowed rating of up to 1 Mbit/s over 40 m for CAN, and a bus topology with a maximum number of 30 physical nodes.
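To make the stuff-bit rule concrete, the following Python sketch inserts a complementary bit after every five consecutive identical bits, as a CAN controller does on the stuffed portion of a frame. It is only an illustrative model of the rule, not a complete CAN frame encoder.

```python
def stuff_bits(bits):
    """Insert a complementary stuff bit after five consecutive identical bits,
    mimicking the CAN bit-stuffing rule on the stuffed portion of a frame."""
    out = []
    run_value, run_length = None, 0
    for b in bits:
        out.append(b)
        if b == run_value:
            run_length += 1
        else:
            run_value, run_length = b, 1
        if run_length == 5:
            out.append(1 - b)                 # complementary stuff bit forces a transition
            run_value, run_length = 1 - b, 1  # the stuff bit starts a new run
    return out

# Ten identical bits require two stuff bits: 00000 1 00000 1
print(stuff_bits([0] * 10))
```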
However, new technologies and concepts need an enhancement of the physical layer. Nowadays, Ethernet is a widely used point-to-point communication technology. With 100BASE-TX it is possible to transfer data at 100 Mbit/s over a maximum cable length of 100 m. Due to the requirements on electromagnetic interference (EMI) and radio frequency interference (RFI) in the automotive market, Ethernet 100BASE-TX could not be used as an in-vehicle communication network. In addition to that limitation, 100BASE-TX uses 2 unshielded twisted pair cables, which would increase the overall cable weight and cost. To compensate for the disadvantages of Ethernet at the physical layer in automotive use, new PHYs are ready for operation or in development. BroadR-Reach supports a 100 Mbit/s transfer speed over a single unshielded twisted pair cable, meets automotive EMI requirements [11] and is standardized as 100BASE-T1 in IEEE 802.3bw-2015 [12]. BroadR-Reach has been used in a real-time Ethernet in-car backbone project, where it became part of the Ethernet backbone system [13]. Also the applicability of BroadR-Reach for use with an industrial Ethernet protocol has been confirmed in [14]. But future challenges like uncompressed video for ADAS will need more bandwidth [15]. Hence, the next generation of Ethernet for the automotive field is under development. The standardization of a 1000BASE-T1 PHY in IEEE 802.3bp is currently in progress [16]. The 1000BASE-T1 PHY supports a maximum transfer speed of 1 Gbit/s in full duplex mode over a single unshielded twisted pair cable with a maximum length of 15 m. First PHYs on the basis of the IEEE 802.3bp draft have been introduced [17]. Another point in favor of the trend towards Ethernet as in-vehicle communication system is the possibility to supply voltage and current over a single twisted pair Ethernet link. Currently the 1-Pair Power over Data Lines (PoDL) Task Force defines a standard for that feature under IEEE 802.3bu [18]. The deployment of these Ethernet based physical layers in the commercial vehicle sector is more challenging in comparison to passenger cars. A main reason is the topology extent beyond 15 m, which requires components for signal refreshing. Beside this, the harsher environment induces higher requirements for ingress protection and overall robustness of connectors and may necessitate signal refreshing too.

4 Medium Access Control Aspects


Medium access control is most relevant to fulfill application requirements on determinism, transmission latency and data throughput. Here, the data link layer methods for medium access control of CAN and Ethernet as state of the art technologies will be briefly discussed. Time sensitive networking (TSN) targets the real-time capability of Ethernet. Relevant TSN specifications will be described, as they can contribute to cover a broad range of vehicular requirements.

4.1 State of the Art Protocols


CAN specifies an asynchronous, event based medium access protocol. The communication is message-oriented, with a given identifier being assigned to a certain piece of information, but not to a specific device. The number of devices on a bus is theoretically not limited, while the number of possible message identifiers depends on their length. Two types of identifiers, 11 bit and 29 bit, are available. CAN follows the Carrier Sense Multiple Access / Collision Resolution (CSMA/CR) scheme, where each network node is allowed to send data when it detects an idle state on the medium. The messages are prioritized by their identifier, i.e. in case of a conflict an arbitration occurs and the message with the higher-priority identifier is sent successfully. The arbitration reduces the number of retries and avoids a stop of data transfer due to congestion [3]. Although CSMA/CR is a non-deterministic method, determinism can be reached for messages holding the highest priority. To fulfill application requirements on transmission latency, a serious engineering effort regarding the assignment of priorities and update intervals of data objects is necessary, and simulation and test of the network configuration are recommended. A detailed analysis of schedulability in CAN networks is provided in [19].
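The bitwise arbitration sketched above can be illustrated in a few lines of Python: every contending node sends its identifier bit by bit, the bus behaves like a wired-AND (a 0 bit is dominant), and a node drops out as soon as the bus level differs from the bit it sent. The identifiers below are arbitrary example values.

```python
def arbitrate(identifiers, id_bits=11):
    """Return the identifier that wins CAN bus arbitration.
    A 0 bit is dominant; a node sending a recessive 1 while the bus is
    dominant loses arbitration and stops transmitting."""
    contenders = set(identifiers)
    for bit in range(id_bits - 1, -1, -1):                            # MSB is sent first
        bus_level = min((ident >> bit) & 1 for ident in contenders)   # wired-AND of all senders
        contenders = {i for i in contenders if (i >> bit) & 1 == bus_level}
        if len(contenders) == 1:
            break
    return contenders.pop()

# The lowest identifier value has the highest priority and wins:
print(hex(arbitrate([0x65A, 0x123, 0x3FF])))   # -> 0x123
```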
The IEEE 802.3 Ethernet standard utilizes Carrier Sense Multiple Access with Collision Detection (CSMA/CD) for medium access. It was originally not designed to transport time sensitive traffic and hence does not provide determinism. After the introduction of IEEE 802.1Q, which provides the possibility to assign a defined priority level to a particular message by using the Virtual LAN (VLAN) field, many proprietary industrial protocols were developed, e.g. Ethernet/IP, PROFINET RT, SERCOS and many others, which were built upon this feature. Due to the limits of priority based communication, additional functionalities to further improve the real-time efficiency were introduced. These are: TDMA based communication (e.g. PROFINET IRT), polling based communication (Powerlink) or summation frame communication (EtherCAT). All mentioned approaches allow to achieve high real-time performance, however they require modifications of the original IEEE Ethernet MAC [20]. Beside this development of industrial protocols for Ethernet to transfer time sensitive traffic, a group of companies from the automotive area, like BMW and Daimler AG, developed an in-vehicle communication protocol, known as FlexRay, to handle requirements like real-time communication.

4.2 Time Sensitive Networking


Due to the rapid evolution of IT technology, especially in the entertainment sector with high quality audio and video streaming, demands for real-time communication led to the establishment of a new IEEE working group, Audio Video Bridging (AVB). The aim was to further enhance the real-time capabilities of the Ethernet standard. The suitability of AVB for particular vehicular use cases has already been proven by simulation [21]. Due to the high interest of the industry in this activity, the focus of the group has been broadened by including industrial application requirements [22]. At the same time, the name of the working group has been changed to the more generic Time Sensitive Networking (TSN). An important aspect of this activity was to offer low-cost devices which require a minimal configuration effort to achieve plug-and-play functionality [23]. In the case of in-vehicle communication, the plug-and-play functionality is not of major importance. This is due to the fact that, in contrast to industrial automation, the installed in-vehicle network infrastructure in cars remains unchanged. More important is the spectrum of traffic classes that can be supported by the TSN technology. It allows to satisfy demands in terms of the high throughput required by multimedia or infotainment systems, but also provides high determinism and availability, thus enabling support of control loops and safety critical functions. Having one system supporting different traffic flows would help to significantly reduce the complexity of the current in-car communication infrastructures and open the possibility for future functionalities such as highly sophisticated ADAS. The focus of TSN is very broad, therefore multiple sub-groups have been established to deal with particular aspects. The most relevant for in-car communication are listed below:

– Timing and synchronization aspects:
  • timing and synchronization IEEE 802.1AS
– Quality of service aspects and resource reservation:
  • stream reservation protocol IEEE 802.1Qat and its further extension IEEE 802.1Qcc
  • path control and reservation mechanisms IEEE 802.1Qca
– Forwarding and queuing mechanisms:
  • forwarding and queuing enhancements for time-sensitive streams IEEE 802.1Qav
  • deterministic communication through the time aware shaper IEEE 802.1Qbv and the cyclic queuing and forwarding shaper IEEE 802.1Qch
  • frame preemption IEEE 802.1Qbu
– Reliability:
  • seamless redundancy IEEE 802.1CB
  • redundancy mechanisms included in IEEE 802.1Qca
There are several papers currently available which evaluate some of the TSN amendments in the in-car communication context. In [24], the authors investigated the worst-case behavior of three different shapers, namely the Burst Limiting Shaper (BLS), the Time Aware Shaper (TAS) and the Peristaltic Shaper (PS), using analytical calculation and simulation. According to the authors of [24], the TAS showed the best performance in terms of latency and latency jitter, however it requires a lot of configuration effort. The BLS offers a compromise between performance and configuration effort. The PS offers the easiest configuration, but the worst performance compared to the other shapers. An additional in-depth investigation of the worst-case latency provided by the BLS in a typical automotive setup was conducted in [25]. The authors showed that in some cases it is better to use IEEE 802.1Q than TSN + BLS. In order to use the BLS efficiently, some additional filtering functionality is required. The same authors analysed the effect of TSN with frame preemption (IEEE 802.3br) on the worst-case end-to-end latency in [26]. Their experiments in a typical automotive setup show that latency guarantees for time-critical traffic can be significantly improved while preemptable traffic only slightly degrades. In [27], the authors investigated the bandwidth allocation ratio
for the scheduled traffic (IEEE 802.1Qbv) while adjusting the Maximum Transmission Unit (MTU). They have shown that, using two time sensitive flows, it is possible to achieve cycle times of 250 µs for an MTU size of 109 bytes. A survey in [28] provides a broad overview of Ethernet-based communication with a focus on IEEE AVB. It discusses especially the scheduled traffic and presents simulation results where offset scheduling and TAS were combined to achieve a temporal isolation from other kinds of traffic. The fault-tolerance aspects of TSN were investigated in [29]. The authors compared two different approaches aiming to guarantee seamless redundancy. They pointed out that the current seamless redundancy mechanisms provided by TSN lack flexibility in terms of stream reconfiguration and mechanisms for automatic stream reservation. Despite all advantages, TSN increases the configuration overhead of a network. In [30] an ontology-based approach to support automatic network configuration of TSN is presented. The authors demonstrated the approach by modeling the TAS and came to the conclusion that the expressiveness of the ontology has to be further investigated. These papers demonstrate that TSN is currently in focus, but they also show a gap in the field of implementations to simulate the behavior of TSN. An easily accessible implementation of the single protocols would be a benefit to gain insight into TSN and its performance. After all, it can be concluded that TSN is a prominent candidate for in-vehicle communication to handle future requirements. It supports different real-time classes, offers determinism and provides high reliability via seamless redundancy.
As a wrap-up of this chapter, Table 3 gives a summary of the access methods of the discussed communication technologies.

Table 3. Summary on medium access methods

                        CAN                          Ethernet   TSN
Basic access protocol   CSMA/CR                      CSMA/CD    CSMA/CD
Additional measures     Priority based arbitration   –          Scheduling
Determinism             Restricted                   No         Yes

5 Transport Protocol and Efficiency Aspects


In this chapter, the considerations are mainly driven by the application requirement of bandwidth. The state-of-the-art technologies are compared regarding their performance in transferring different qualities of user data. For this communication layer no emerging technology is discussed, but new mappings to established protocols at higher layers open future opportunities.
The standard ISO 11898 for CAN does not specify higher protocol layers of the OSI reference model. CAN is limited to a maximum data object length of 8 octets and provides message oriented broadcasting without address information about the sender and receiver. This simplicity enables a small protocol overhead and allows short transmission times. Consequently, user data rates of approximately 7.5 KB/s for 1 octet payload and approximately 28 KB/s for 8 octets payload are possible, supposing a bus workload of 100%, according to [3, 31]. Although this throughput statement is not very impressive, it is sufficient for many applications regarding the update intervals of the required number of communication objects. In the domain of commercial vehicles, the widespread standard SAE J1939 defines transport protocols for segmented transport of both message-oriented broadcasts and node-oriented unicasts on top of the CAN layers. The transport protocols define an initial frame to announce the transmission, and the user data length of a data frame is reduced to 7 octets because the first octet of the CAN payload is taken for protocol information. The protocols shall not strongly interfere with the plain message exchange, therefore the standard defines a low CAN priority and a minimum frame gap of 50 ms. All these measures reduce the user data rate to below 140 byte/s and limit the application range to very low demanding functions.
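The last figure follows directly from the two protocol parameters stated above. The rough sketch below computes the resulting upper bound on the user data rate of a segmented transfer, neglecting the transmission time of the CAN frame itself.

```python
# Rough upper bound on the user data rate of a SAE J1939 segmented transfer,
# using only the two parameters quoted above; the CAN frame transmission time
# itself (a few hundred microseconds) is neglected.
user_octets_per_frame = 7       # first octet of the CAN payload carries protocol information
min_frame_gap_s = 0.050         # minimum gap between consecutive transport frames

rate_bytes_per_s = user_octets_per_frame / min_frame_gap_s
print(f"max user data rate: {rate_bytes_per_s:.0f} byte/s "
      f"({rate_bytes_per_s * 8:.0f} bit/s)")   # -> 140 byte/s (1120 bit/s)
```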
While CAN based protocols show constraints regarding the upper limit of the payload size, Ethernet based protocols show a lack of efficiency considering the lower limit of the payload size. The payload size of an Ethernet frame is defined from 42 to 1500 octets if the VLAN tag is used. When transferring control data of sensors and actuators, the user data length will often be between 1 and 4 octets, and the remaining payload size needs to be filled by padding octets. Considering the overall Ethernet protocol overhead and the inter frame gap, the ratio of net to gross data becomes 1:84 for a single octet. When using an Ethernet bit rate of 100 Mbit/s, the net data rate in this worst case is still above 1 Mbit/s, again supposing a network load of 100%, which is significantly higher than with CAN communication.
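The 1:84 ratio can be reproduced from the frame format: a minimum-size VLAN-tagged Ethernet frame occupies 84 octet times on the wire (including preamble, start-of-frame delimiter and inter-frame gap), of which only one octet is user data in the worst case considered above. A short sketch of the calculation:

```python
# Worst-case efficiency of a minimum-size, VLAN-tagged Ethernet frame
# carrying a single octet of user data (all sizes in octets).
preamble_sfd    = 8
mac_addresses   = 12     # destination + source
vlan_tag        = 4
ethertype       = 2
min_payload     = 42     # minimum payload when a VLAN tag is present
fcs             = 4
inter_frame_gap = 12

gross = preamble_sfd + mac_addresses + vlan_tag + ethertype + min_payload + fcs + inter_frame_gap
user_octets = 1                                    # the remaining 41 payload octets are padding
print(f"gross octets on the wire: {gross}")        # -> 84, i.e. a 1:84 ratio
print(f"net rate at 100 Mbit/s: {100e6 * user_octets / gross / 1e6:.2f} Mbit/s")  # ~1.19 Mbit/s
```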
Consequently, the substitution of CAN based protocols by Ethernet based protocols can overcome bandwidth limitations of in-vehicle communication when transferring big data objects. In [32] an approach is published where the SAE J1939 application protocols are mapped on top of a TCP/UDP stack. The authors claim applicability for the power train segment in heavy duty vehicle networks, which still needs further investigation. Nevertheless, this contribution shows that a changeover to more powerful network technologies is possible without essential modifications at the application interface.

6 Information Model Aspects


In this section, a well established information model of the commercial vehicle domain is discussed. To enable the easy integration of future application functions, a possible extension of this model, which preserves the existing application interface, is described in the second subsection.

6.1 Consideration About Available Information Models


The application layer protocol standards for in-vehicle communication SAE J1939 and ISO 11783, which are nowadays the most utilized standards for commercial vehicles, contain detailed information models to address vehicle components and their parameters. For example, one part of SAE J1939 comprehensively specifies parameters concerning typical components (e.g. engine, steering, collision sensors) and functions (e.g. speed control, air suspension control, aftertreatment) of a vehicle. The parameter description includes the unambiguous parameter identifier, information about name and acronym, data type, data range, affiliation to records for transmission and update intervals. This document provides a valuable contribution to the interoperability of the typical, widespread components and functions. On the other hand, the approach of SAE J1939 is difficult to manage in case of extending the model with new information object types or even adding new instances of already existing object types.
Currently such an extensibility is rarely required, but upcoming application concepts like introducing modular electrical drives for auxiliaries will tighten the problem.

6.2 Potential Future Information Models


An object oriented modeling of application specific information structures can be used to improve the rigid information modeling provided by the current technologies for in-vehicle communication. OPC UA, a technology widely used mainly in the domain of industrial automation, provides such an object oriented modeling. Currently, the OPC UA specification is being enhanced by PubSub, a new communication pattern according to the publisher/subscriber model enabling so-called server based subscriptions [33]. In IEC 62541-3 the Address Space Model of OPC UA is defined. It can be considered as a meta model providing objects as the basis for any information model. The object elements are represented by nodes. These nodes comprise variables, methods or references to other objects. Additionally, IEC 62541-5 specifies nodes to be used for diagnostics and as entry points to server-specific nodes. As a result, an information model of an "empty" server is defined, and the vendor of the component which is represented by the OPC server can customize it. As optional specification elements, predefined models for data access, alarms, history and others are available. Moreover, the information model can be changed during runtime of the server by adding or removing nodes. By this means, the OPC UA information model is independent from transport protocols and enables domain specific extendibility. For deployment in in-vehicle networks, OPC UA needs the ability to be implemented on physical nodes with low resources. For this reason, OPC UA components need to be scaled down. In order to support this, the OPC UA specifications provide profiles, for example the OPC UA Nano Embedded Device profile. Based on this profile, it is possible to scale down an OPC UA server to 15 KB RAM and 10 KB ROM [34], thus allowing implementation at the chip level of a resource limited device such as a sensor or an actuator.
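To make the address space idea more tangible, the following Python sketch models a tiny OPC UA-like node set with plain data classes: an object node for a hypothetical auxiliary drive with a variable and a method, plus the possibility to add nodes at runtime. It only illustrates the modeling concept; it is not based on the IEC 62541 APIs or any particular OPC UA stack, and all node names are invented examples.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Node:
    """Minimal stand-in for an OPC UA node: objects, variables and methods
    are all nodes, linked by references (modeled here as a children dict)."""
    node_id: str
    node_class: str                      # "Object", "Variable" or "Method"
    value: object = None                 # only used by Variable nodes
    method: Callable = None              # only used by Method nodes
    children: Dict[str, "Node"] = field(default_factory=dict)

    def add(self, child: "Node") -> "Node":
        self.children[child.node_id] = child   # nodes can be added at runtime
        return child

# Hypothetical information model fragment for a modular auxiliary drive:
drive = Node("AuxDrive1", "Object")
speed = drive.add(Node("AuxDrive1.Speed", "Variable", value=0.0))
drive.add(Node("AuxDrive1.Start", "Method", method=lambda: print("drive started")))

speed.value = 1450.0                           # update a variable at runtime
drive.children["AuxDrive1.Start"].method()     # invoke a method node
```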
To provide interoperability beyond this general information model and to ease the use of OPC UA in several domains, Companion Standards have been developed. For example, the specifications for building automation (in co-operation with BACnet), energy systems and management (participating in IEC TC 57 "Power systems") or railway transportation show that OPC UA has already been approached by applications outside industrial automation. The utilization of the OPC UA Address Space Model as a wrapper for the data models provided by the in-vehicle communication standards could be a step towards the required extendibility of the models. The existing information structures can be preserved and transformed into an object oriented approach, as shown in [35] for building automation. At the same time, the co-existence with information models of upcoming components and functions which are not covered by the available standards in the vehicle domain becomes possible. For example, SAE J1939 data could be modeled and transferred on the same network in parallel to data according to the standard CAN in Automation DS402 for electric drives.

Fig. 2. Future architecture with Ethernet backbone acc. to [37]

7 Conclusion
This paper shows the current status of in-vehicle communication networks in the field of passenger cars as well as commercial and heavy duty vehicles, and points at upcoming challenges. It depicts the future of Ethernet as an in-vehicle communication system with respect to several parts of the OSI reference model. In summary, Ethernet will take its place in the automotive market, see also [36]. However, ongoing developments and implementations show that new network systems will not immediately replace the existing ones, but rather supplement them. This strategy is beneficial especially for critical systems, where proven-in-use concepts contribute to functional safety. The evolution of automotive Ethernet, according to [8], proposes the implementation of Ethernet in three generations. The first generation already exists in high class vehicles. It uses 100BASE-TX Ethernet with Diagnostics over IP (DoIP) for on-board diagnostics and ECU updates. Figure 2, given by the author of [37], illustrates the next generations. The second generation uses Ethernet as an additional in-vehicle network to transfer the amount of data coming from camera systems for driver assistance and infotainment. Finally, the third generation, with the possibility to transfer 1 Gbit/s, will implement Ethernet as a backbone system and change the automotive wiring harness from a heterogeneous to a hierarchically homogeneous network by introducing a new network topology level. In the future, Ethernet in connection with TSN will be a possible approach for time relevant communication beside ADAS and infotainment. At the layer of information modeling, concepts incorporating dynamic and instantiable information object representation like OPC UA can support the integration of new application functions.

References
1. Navet, N., Song, Y., Simonot-Lion, F., Wilwert, C.: Trends in automotive
commu- nication systems. Proc. IEEE 93(6), 1204–1223 (2005)
2. Bishop, R., Bevly, D., Switkes, J., Park, L.: Results of initial test and evaluation
of a driver-assistive truck platooning prototype. In: 2014 IEEE Intelligent
Vehicles Symposium Proceedings, pp. 208–213, June 2014
3. Zimmermann, W., Schmidgall, R.: Bussysteme in der Fahrzeugtechnik -
Protokolle, Standards und Softwarearchitektur, 5th edn. Springer-Verlag,
Heidelberg (2014)
4. Navet, N., Simonot-Lion, F.: Automotive Embedded Systems Handbook.
Industrial Information Technology Series. CRC Press, Boca Raton (2008).
https://books.google.de/books?id=vB700Gb4RtkC
5. Zeng, W., Khalid, M., Chowdhury, S.: In-vehicle networks outlook:
achievements and challenges. IEEE Commun. Surv. Tutorials 18(3), 1–1 (2016)
6. Talbot, S.C., Ren, S.: Comparision of fieldbus systems can, TTCAN, FlexRay
and LIN in passenger vehicles. In: 29th IEEE International Conference on
Distributed Computing Systems Workshops, 2009, ICDCS Workshops 2009, pp.
26–31, June 2009
7. Cena, G., Bertolotti, I.C., Hu, T., Valenzano, A.: Improving compatibility
between CAN FD and legacy CAN devices. In: 2015 IEEE 1st International
Forum on Research and Technologies for Society and Industry Leveraging a
better tomorrow (RTSI), pp. 419–426, September 2015
8. Hank, P., Müller, S., Vermesan, O., Keybus, J.V.D.: Automotive ethernet: in-vehicle networking and smart mobility. In: Design, Automation Test in Europe Conference Exhibition (DATE), 2013, pp. 1735–1739, March 2013
9. J1939 Surface Vehicle Recommended Practice; Part 71 Vehicle Application
Layer, SAE International Std., June 2015
10. Nolte, T., Hansson, H., Bello, L.L.: Automotive communications-past, current
and future. In: 2005 IEEE Conference on Emerging Technologies and Factory
Automa- tion, vol. 1, pp. 8–992, September 2005
11. Broadcom: BroadR-Reach physical layer transceiver specification for automotive applications v3.0. Broadcom, Technical report (2014)
12. IEEE 802.3, working group for ethernet standards. http://www.ieee802.org/3/
13. Steinbach, T., Müller, K., Korf, F., Röllig, R.: Demo: real-time ethernet in-car backbones: first insights into an automotive prototype. In: 2014 IEEE Vehicular Networking Conference (VNC), pp. 133–134, December 2014
14. Banick, N.: Untersuchung des quelloffenen Ethernet Powerlink Stacks mit einer Zweidraht-Übertragungstechnologie für den Einsatz im Automobilbereich, Lemgo, January 2015
15. “Reduced Twisted Pair Gigabit Ethernet PHY - Call for Interest,” IEEE 802.3
Ethernet Working Group, Technical report, March 2012. http://www.ieee802.org/
3/RTPGE/public/mar12/CFI 01 0312.pdf
16. IEEE p802.3bp. 1000BASE-T1 PHY Task Force. http://www.ieee802.org/3/bp/
17. Marvell 1000BASE-T1 PHY. http://www.marvell.com/company/news/
pressDetail.do?releaseID=7256
18. IEEE p802.3bu. 1-Pair Power over Data Lines (PoDL) Task Force. http://www.
ieee802.org/3/bu/
19. Davis, R.I., Kollmann, S., Pollex, V., Slomka, F.: Controller area network
(CAN) schedulability analysis with fifo queues. In: 2011 23rd Euromicro
Conference on Real-Time Systems, pp. 45–56, July 2011
20. Wisniewski, L., Schumacher, M., Jasperneite, J., Schriegel, S.: Fast and simple
scheduling algorithm for PROFINET IRT networks. In: 9th IEEE International
Workshop on Factory Communication Systems (WFCS) 2012, pp. 141–144, May
2012
21. Alderisi, G., Caltabiano, A., Vasta, G., Iannizzotto, G., Steinbach, T., Bello, L.L.:
Simulative assessments of IEEE 802.1 Ethernet AVB and time-triggered
ethernet for advanced driver assistance systems and in-car infotainment. In:
2012 Vehicular Networking Conference (VNC), IEEE, pp. 187–194, November
2012
22. Imtiaz, J., Jasperneite, J., Schriegel, S.: A proposal to integrate process data com-
munication to IEEE 802.1 Audio Video Bridging (AVB). In: 2011 IEEE 16th
Con- ference on Emerging Technologies Factory Automation (ETFA), pp. 1–8,
Septem- ber 2011
23. Garner, G.M., Ryu, H.: Synchronization of audio/video bridging networks using
IEEE 802.1AS. IEEE Commun. Mag. 49(2), 140–147 (2011)
24. Thangamuthu, S., Concer, N., Cuijpers, P.J.L., Lukkien, J.J.: Analysis of
ethernet- switch traffic shapers for in-vehicle networking applications. In: 2015
Design, Automation Test in Europe Conference Exhibition (DATE), pp. 55–60,
March 2015
25. Thiele, D., Ernst, R.: Formal worst-case timing analysis of ethernet TSN’s burst-
limiting shaper. In: 2016 Design, Automation Test in Europe Conference
Exhibition (DATE), pp. 187–192, March 2016
26. Thiele, D., Ernst, R.: Formal worst-case performance analysis of time-sensitive
ethernet with frame preemption. In: 2016 IEEE 21st International Conference on
Emerging Technologies and Factory Automation (ETFA), pp. 1–9, September 2016
27. Ko, J., Lee, J.H., Park, C., Park, S.K.: Research on optimal bandwidth alloca-
tion for the scheduled traffic in IEEE 802.1 AVB. In: 2015 IEEE International
Conference on Vehicular Electronics and Safety (ICVES), pp. 31–35, November
2015
28. Bello, L.L.: Novel trends in automotive networks: a perspective on ethernet and the
IEEE audio video bridging. In: Proceedings of the 2014 IEEE Emerging
Technology and Factory Automation (ETFA), pp. 1–8, September 2014
29. Kehrer, S., Kleineberg, O., Heffernan, D.: A comparison of fault-tolerance concepts
for IEEE 802.1 Time Sensitive Networks (TSN). In: Proceedings of the 2014
IEEE Emerging Technology and Factory Automation (ETFA), pp. 1–8,
September 2014
30. Farzaneh, M.H., Knoll, A.: An ontology-based plug-and-play approach for in-
vehicle Time-Sensitive Networking (TSN). In: 2016 IEEE 7th Annual
Information Technology, Electronics and Mobile Communication Conference
(IEMCON), pp. 1–8, October 2016
31. Traub, M.: Durchgängige Timing-Bewertung von Vernetzungsarchitekturen und Gateway-Systemen im Kraftfahrzeug. KIT Scientific Publishing, Karlsruhe (2010)
32. Ruggeri, M., Malaguti, G., Dian, M.: SAE J 1939 over real time ethernet: the
future of heavy duty vehicle networks. Society of Automotive Engineers (SAE),
Technical report, September 2012
33. OPC Foundation. OPC UA is Enhanced for Publish-Subscribe (Pub/Sub).
https://opcfoundation.org/opc-connect/2016/03/opc-ua-is-enhanced-for-publish-
subscribe-pubsub/
34. Imtiaz, J., Jasperneite, J.: Scalability of OPC-UA down to the chip level enables
“Internet of Things”. In: 2013 11th IEEE International Conference on Industrial
Informatics (INDIN), pp. 500–505, July 2013
35. Fernbach, A., Granzer, W., Kastner, W.: Interoperability at the management
level of building automation systems: a case study for BACnet and OPC UA. In:
2011 IEEE 16th Conference on Emerging Technologies Factory Automation
(ETFA),
pp. 1–8, September 2011
36. Bello, L.L.: The case for ethernet in automotive communications. SIGBED Rev.
8(4), 7–15 (2011). http://doi.acm.org/10.1145/2095256.2095257
37. Hinrichsen, J.: The road to autonomous driving. In: Deterministic Ethernet
Forum, Vienna, April 2015
An Approach for Evaluating Performance
of Magnetic-Field Based Indoor Positioning
Systems: Neural Network

Serpil Ustebay¹, Zuleyha Yiner¹, M. Ali Aydin¹, Ahmet Sertbas¹, and Tulin Atmaca²( )

¹ Department of Computer Engineering, Istanbul University, Istanbul, Turkey
{serpil.ustebay,zuleyha.yiner,aydinali,asertbas}@istanbul.edu.tr
² Laboratoire Samovar, Telecom SudParis, CNRS, Université Paris-Saclay, Evry, France
[email protected]

Abstract. Indoor positioning systems are an increasingly attractive and popular research area. They provide direct access to instant location information of people in large, complex locations such as airports, museums, hospitals, etc. Especially for the elderly and children, location information can be lifesaving in such complex places. Thanks to wearable smart technology, daily accessories such as wristbands and smartwatches are suitable for this job. In this study, the earth's magnetic field data is used to find the location of devices. Having less noise than other types of data, magnetic field data provides high success. With this data, a positioning model is constructed by using an Artificial Neural Network (ANN). Support Vector Machines (SVM) were used to compare the results of the model with the ANN. The accuracy of this model is also calculated, and it is analyzed how the number of neurons in the hidden layer of the neural network affects the accuracy. Results show that the accuracy of a magnetic field indoor positioning system can reach 95% with an ANN.

Keywords: Magnetic-field indoor positioning systems · Neural network · Pattern recognition network · Cross entropy function · Performance · Accuracy · Support Vector Machines (SVM)

1 Introduction
Positioning systems, known as outdoor and indoor, become more prevalent with technological developments. These systems provide information for applications that use location information, like navigation, monitoring, tracking etc. Outdoor positioning systems operate with the GPS (Global Positioning System) signals coming from at least three satellites. GPS works with a triangulation method which is based on the signal's time of arrival, angle, etc. In addition, GPS may be used for indoor positioning if the necessary equipment is integrated inside buildings. However, line-of-sight transmission between receivers and satellites may
not be available in an indoor environment, so GPS is not efficient there. An indoor positioning system (IPS) is a system which uses radio waves, magnetic fields, or other sensory information collected by mobile devices in order to locate objects or people in a building. IPS are generally grouped as infrastructure-based and infrastructure-free. Since they require some pre-installed hardware, infrastructure-based solutions have a high cost compared with infrastructure-free solutions. Positioning systems can be divided into sub-categories as shown in Fig. 1. During location determination, participants fall into the active group when information is directly transferred from the device they carry. The opposite of this situation is the passive group [1]. For example, the usage of the Bluetooth signal of a person's mobile phone for positioning is active classification. Learning where someone is located through the acquisition of images by cameras in the interior space and face recognition operations is passive positioning. Furthermore, positioning is realized by using mathematical methods with respect to the used technology [2].

Fig. 1. Positioning technique taxonomy

Although the used technologies are different, the methods can be grouped under 4 basic headings. In geometry based methods, in order to calculate the distance between sender and receiver, the angle of the signals (AOA), the arrival time (TOA) or the time difference of arrival (TDOA) are used. Fingerprinting is a method based on the power of signals and consists of two processes. In this method, Received Signal Strength (RSS) measurements are made at specific points within the building. A position estimation model is constructed based on the obtained fingerprint signal map. The Bayesian approach uses Bayesian inference techniques to calculate the position of a person or object according to a probability distribution at time t.
Galván-Tejada et al. [3] used 4 categories based on the technology that connects the user and the environment. The first one comprises location systems in which explicit technologies, like Bluetooth and RFID, are used. The second one comprises systems that reuse the sensory devices in smart phones. Wi-Fi access points are important for the series of sensors at specific locations that comprise a specialized infrastructure; this is the third kind of indoor positioning category. Lastly, there are the systems that use the magnetic field or environmental audio for positioning.
Hightower and Borriello [4] grouped location systems with respect to ToA/TDoA (time of arrival and time difference of arrival, which use the signal runtime between sender and receivers), AoA (the angle of incidence at the receivers), the Received Signal Strength Indicator (RSSI) and fingerprinting. Fingerprint methods consist of offline and online phases. In the offline phase, Wi-Fi signals are measured and stored in a database together with the location of their appearance. In this way the fingerprint database is established. These signals are measured again in the online phase. After finding the best matching entry, positioning is performed. The earth magnetic field value is described by a vector with X, Y, Z features, where X is for north, Y is for east and Z is for height [5] (Fig. 2).

Fig. 2. Magnetic field vector [5]

Sensors are used to get magnetic field information. This information may be used in automotive, military, aviation and industrial areas. Advances in Micro Electro Mechanical Systems (MEMS) technology have allowed these sensors to be added to electronic devices such as smartphones, smartwatches and computer tablets. Applications called e-compass used in mobile phones can measure this sensor data with a resolution as small as 1 nT. The data produced by each device may be different because mobile phone manufacturers produce these sensors with 4 different approaches, such as Hall Effect, Giant Magneto Resistance (GMR), Magnetic Tunneling Junction (MTJ) sensing and Anisotropic Magneto Resistance (AMR) [6]. In addition, by entering longitude, latitude and altitude information into the online world magnetic field calculator tool, which is provided through the joint participation of the United States National Geospatial-Intelligence Agency (NGA) and the United Kingdom's Defense Geographic Center (DGC), the world's magnetic field data can be obtained without the need for any sensors [7].
Since magnetic field data contains less noise than WLAN signal data, it has been used in indoor positioning studies. In particular, a strong magnetic field map prepared with the fingerprint method allows the location of a person to be detected with an error at the centimeter level [8]. Also, it is possible to increase the accuracy by adding different technologies. In [9], a hybrid system was designed in which magnetic field data was used to increase the accuracy of the obtained position results. Thus, false or missing measurements of near-distance technologies such as RFID, and false positioning of Wi-Fi signals affected by noise, are prevented.
The location of a person based on magnetic field data has been determined by using a Gaussian process with radial basis function (gaussprRadial), Single C5.0 Tree, Soft Independent Modeling of Class Analogy (SIMCA), Multi-layer perceptron with Resilient Backpropagation (Rprop), and Bagged Classification and Regression Tree (CART) algorithms [10]. The results of that study show that magnetic field data provides good robustness and accuracy for buildings with low magnetic field variability.
This paper is organized as follows: Sect. 2 gives detailed information about the magnetic field data that is used. Section 3 describes the artificial neural network. In Sect. 4, the implementation is introduced. Finally, the paper is concluded in Sect. 5.

2 Database for Implementation


In this paper, RFKONDB was used as an indoor signal strength map. RFKON was created by Sinem Bozkurt et al. [11]; its measurements were taken on the first floor of the Eskisehir Osmangazi University Teknopark. This database is magnetic field based and was constructed using 4 different mobile devices. Descriptions of the devices are given in Table 1.

Table 1. Devices used for measurements

Device ID   Device type                OS version
1           Samsung S4 Mini            Android .2.2
2           LG G3                      Android 5.0
3           Sony Xperia Z2             Android .4.4
4           Samsung Galaxy Note 10.1   Android .4.2

Table 2. Magnetic field data set

Ref. point   Date               Device ID   x     y     Floor   Battery   X        Y        Z
1            01.07.2015 09:10   1           1.2   1.2   1       100       –13.02   5.87     –19.79
1            02.07.2015 09:10   2           1.2   1.2   1       100       –48.11   19.78    14.87
1            03.07.2015 09:10   3           1.2   1.2   1       100       –17.10   8.5      –22.79
1            04.07.2015 09:10   4           1.2   1.2   1       100       0.232    –17.10   –11.69

During the measurements, 54 reference points were used and, for each mobile device, 20 measurements were recorded at every reference point. In total, 4320 sample measurements were obtained for the magnetic field database.
The magnetic field based dataset includes information about each sample such as the device ID, the real world x, y coordinates, the floor, the battery level, and the magnetic field X, Y, Z coordinate values. A database sample is given in Table 2. While evaluating the accuracy of the magnetic field based indoor location system, we use only the reference points and the magnetic field coordinates of each measurement.

3 Models
In this work Support Vector Machines and Neural Networks are used to
create an indoor positioning model.

3.1 Support Vector Machines (SVM)


Support Vector Machines (SVM) are a popular margin classifier widely used for linear classification problems. In linear classification problems, it is assumed that there exists an optimal separating line (1) discriminating the samples of the positive and negative groups in the data space S.

    f(x) = w_1 x_1 + w_2 x_2 + k = w^T x + k    (1)

f(x) is defined by the weight vector w and the shift amount k [12]. x is a sample in the data space S, and is considered to be from the positive class if f(x) ≥ 1, or from the negative class if f(x) ≤ −1. This can be expressed by the general condition (2), where c is the class label of x (c = +1 for the positive class, c = −1 for the negative class):

    c (w^T x + k) ≥ 1    (2)

The aim of SVM is to find the optimal w vector of the new data space S′ that satisfies (2) for all samples in S. The method calculates the principal components of the new data space S′ by solving a quadratic optimization problem. In this way, samples that cannot be linearly discriminated in the original data space S can be linearly discriminated in the new data space S′ to which they are transferred.
Classical SVM works only on bi-class datasets, which is a major disadvantage. Therefore, in this study, the LIBSVM library [13], which is capable of multi-class classification, was utilized.

3.2 Artificial Neural Network (ANN)


An ANN is a computational model based on biological neural networks. A neuron is a biological cell that processes information in the brain. Axons and dendrites are the connection branches between any two cells. A neuron receives signals from other neurons through its dendrites and transmits signals to other neurons along its axon. This is the basic function of the nerve cell.
In this study, we use pattern recognition networks, which are feed forward networks, to evaluate the accuracy of the location system. This type of network can be trained to classify inputs according to target classes. The target data for pattern recognition networks consist of vectors of all zero values except for a 1 in element i, where i is the class they are to represent. In this kind of feed forward network, the training function is Scaled Conjugate Gradient, which updates the weight and bias values according to the scaled conjugate gradient method. This method has the advantage of avoiding the time-consuming line search. The performance function of the network is cross entropy by default. A basic pattern recognition network is shown in Fig. 3.
A pattern recognition network consists of an input layer, one or more hidden layers and an output layer. The number of neurons in the hidden layer affects the learning of the relationship between input and output. By defining an appropriate number of neurons in the hidden layer, the network will achieve better accuracy.

Fig. 3. Pattern recognition network

4 Implementation
In this study, we implemented a neural network localization model which contains 3 layers, i.e. an input layer, one hidden layer and an output layer. The magnetic field database was split into 80% for training and 20% for testing. Every input pattern has 3 features. An input pattern is applied to the input layer and its effect propagates through the network, layer by layer, until an output is obtained. The actual output of the network is then compared to the desired output and an error signal is calculated for each of the output nodes. This error is propagated backwards through the network, which yields the contribution of each node to the overall error. Then the weights, initially set to default values, are adjusted with respect to the calculated error. The process of finding proper weights such that for a given input pattern the network produces the desired output is defined as training.

Fig. 4. Error histogram

Fig. 5. Gradient changes with respect to epoch number (left). The error distribution of the network (right)

The test data set is used for evaluating the generalization error, which indicates the performance. The performance criterion is how well the artificial neural network can distinguish the classes from each other given the training set. For this, the test data are given to the generated neural network model, which is expected to find the data classes. The classes estimated by the model are compared with the real classes of the test data set, and the accuracy is calculated as a percentage. We use cross entropy as the error measure to calculate the performance of the network.
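A comparable sketch of the ANN pipeline is given below, with scikit-learn's MLPClassifier standing in for the pattern recognition network of Sect. 3.2 (scikit-learn provides no scaled conjugate gradient solver, so an LBFGS solver is used instead); the random placeholder data again replaces the RFKONDB measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: 3 magnetic-field features per sample, 54 reference point classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(4320, 3))
y = rng.integers(0, 54, size=4320)

# 80% training / 20% test split; one hidden layer with 35 neurons,
# the value after which the accuracy was reported to stabilize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(35,), solver="lbfgs", max_iter=1000)
ann.fit(X_train, y_train)               # minimizes a cross-entropy (log) loss
print("test accuracy:", ann.score(X_test, y_test))
```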
The error value versus epoch plot can be seen in Fig. 4. The best accuracy is obtained at epoch 306 using the cross entropy performance function. Figure 5 shows the gradient changes with respect to the epoch number (left) and the error distribution of the network (right).

Fig. 6. Accuracy versus number of hidden layer neurons
In a neural network, the number of neurons in the hidden layer plays an important role in the performance. The main problem becomes what the number of neurons must be. Defining a large number of neurons increases the storage capacity of the network, while a low number of neurons gives the network low performance. The plot of accuracy versus the number of hidden layer neurons is given in Fig. 6.
As can be seen, beyond a certain number of neurons the performance curve changes only slightly. Care must be taken that the network does not merely memorize instead of learning. As seen in Fig. 6, the accuracy stabilizes after a certain number of neurons, i.e. 35.

Fig. 7. SVM based localization model test results according to training and test data
portioning rate

Fig. 8. SVM and ANN based localization model test results according to training and
test data portioning rate

The SVM based indoor localization method used the linear kernel and the penalty value (C) was chosen as 1. Portioning defines the division rate between training data and test data. The model was evaluated over 100 runs, the mean accuracy was calculated, and the accuracy results are shown in Fig. 7.
Figure 8 shows the comparative results of the ANN and SVM based localization methods. Although the accuracy values of the algorithms are close, the highest accuracy value is obtained with the NN-based positioning model.

5 Conclusion
The main purpose of all positioning systems is to provide high accuracy, reliable results, and low construction cost. With many technologies, more accurate results often require more costly devices inside the buildings. Nowadays, mobile devices can be used to estimate the locations of individuals thanks to the many sensors they contain. Using the correct sensor data and the correct positioning pattern will reduce the system cost. Having less noise than other data types, magnetic field based data is more favorable.
The aim of this study was to find the location of any object or device by using its magnetic field information. Two different localization models were created: the first model used an ANN and the second used SVM. The results obtained by using the neural network are more accurate than those of the SVM based localization model. Magnetic field sensors which are integrated into mobile devices do not produce any hardware cost. To raise the accuracy, different kinds of sensor data may be included in the localization system and tested afterwards. In future studies, we envisage using a hybrid localization model with magnetic field data and RSSI data.

Acknowledgments. This work is also a part of the Ph.D. thesis titled “Design of an
Efficient User Localization System for Next Generation Wireless Networks” at
Istanbul University, Institute of Physical Sciences.

References
1. Pirzada, N., et al.: Comparative analysis of active and passive indoor
localization systems. AASRI Procedia 5, 92–97 (2013)
2. Seco, F., et al.: A survey of mathematical methods for indoor localization. In:
IEEE International Symposium on Intelligent Signal Processing, WISP 2009.
IEEE (2009)
3. Galván-Tejada, C.E., et al.: Evaluation of four classifiers as cost function for indoor location systems. Procedia Comput. Sci. 32, 453–460 (2014)
4. Hightower, J., Borriello, G.: Location sensing techniques. IEEE Comput. 34(8),
57–66 (2001)
5. Wikimedia. https://commons.wikimedia.org/w/index.php?curid=19810392
6. National Centers for Environmental Information. https://www.ngdc.noaa.gov/
geomag/models.shtml
7. Online Calculators for the World Magnetic Model. https://www.ngdc.noaa.gov/
geomag/WMM/calculators.shtml
8. Angermann, M., et al.: Characterization of the indoor magnetic field for appli-
cations in localization and mapping. In: 2012 International Conference on Indoor
Positioning and Indoor Navigation (IPIN). IEEE (2012)
9. Ettlinger, A., Retscher, G.: Positioning using ambient magnetic fields in
combina- tion with Wi-Fi and RFID. In: 2016 International Conference on Indoor
Positioning and Indoor Navigation (IPIN). IEEE (2016)
10. http://www.mdpi.com/1424-8220/15/7/17168/htm
11. Bozkurt, S., et al.: A novel multi-sensor and multi-topological database for
indoor positioning on fingerprint techniques. In: 2015 International Symposium
on Inno- vations in Intelligent SysTems and Applications (INISTA). IEEE
(2015)
12. Alpaydin, E.: Yapay Öğrenme. Boğaziçi Üniversitesi Yayınevi (2013). ISBN-13: 978-6-054-23849-1
13. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines (2001). http://www.csie.ntu.edu.tw/~cjlin/libsvm
Improvements of the Reactive Auto Scaling
Method for Cloud Platform


Dariusz Rafal Augustyn( )

Institute of Informatics, Silesian University of Technology, 16 Akademicka St., 44-100 Gliwice, Poland
[email protected]

Abstract. Elements of cloud infrastructure like load balancers, instances of virtual servers (service nodes) and storage services are used in the architecture of modern cloud-enabled systems. Auto scaling is a mechanism which allows the efficiency of a system to be adapted on-line to the current load. It is done by increasing or decreasing the number of running instances. The auto scaling model uses statistics based on standard metrics like CPU utilization or custom metrics like the execution time of a selected business service. By horizontal scaling, the model should satisfy Quality of Service (QoS) requirements. QoS requirements are determined by criteria based on statistics defined on the metrics. The auto scaling model should minimize the cost (mainly measured by the number of used instances) subject to the assumed QoS requirements. There are many reactive (based on the current load) and predictive (based on the future load) approaches to the auto scaling model. In this paper we propose some extensions to a concrete reactive auto scaling model to improve its sensitivity to load changes. We introduce an extension which varies the threshold of CPU utilization in the scaling-out policy. We also extend the model by introducing a randomized method in the scaling-in policy.

Keywords: Cloud computing · Auto scaling · Custom metrics · Load balancing · Overload and underload detection

1 Introduction
Most modern system architectures allow the use of the scaling capability provided by a cloud platform. The cooperating components of an information system may be run in a cloud environment on separate virtual machines called instances or service nodes. Inside the cloud, a load balancer can distribute a stream of requests among many operating service nodes. The cloud platform provides mechanisms (like software tools, APIs etc.) for managing such service nodes. In particular, these mechanisms allow horizontal scaling-out and scaling-in by programmatically creating/destroying a virtual server. This gives the possibility to apply some model of auto scaling [1], where the number of service nodes is adapted to the system load. Such approaches may be reactive [2, 3] (they use information about the current load and system state) or predictive [4–6] (they additionally use an extrapolation of the load
and system state in the near future). The reactive auto scaling models are rather simple, but they may be applied to a poorly predictable load.
Obviously, scaling-out increases the cost of the system. To measure the cost we may define a simple objective function:

    MeanCost = (1 / Time) ∫_0^Time NumberOfServiceNodes(t) dt        (1)

which evaluates a system with respect to the usage of service nodes during Time.
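For a piecewise-constant number of instances, the integral in (1) reduces to a time-weighted average of the node counts. The sketch below evaluates it for a hypothetical scaling timeline; the timeline values are made up for illustration.

```python
def mean_cost(timeline, total_time):
    """Evaluate MeanCost from Eq. (1) for a piecewise-constant node count.
    timeline: list of (start_time, node_count) pairs, sorted by start_time."""
    cost = 0.0
    for (start, nodes), (next_start, _) in zip(timeline, timeline[1:] + [(total_time, None)]):
        cost += nodes * (next_start - start)   # nodes active during [start, next_start)
    return cost / total_time

# Hypothetical run: 1 node for 600 s, scaled out to 3 nodes for 300 s, then 2 nodes.
print(mean_cost([(0, 1), (600, 3), (900, 2)], total_time=1200))  # -> 1.75
```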
A decision to scale in or scale out may be taken according to assumed Quality of Service (QoS) requirements or to system resource-based ones. A user may assume a high-level quality criterion based on statistics (e.g. the mean or a high-order quantile) of an application-level metric such as the execution time of a selected business service. The approach to auto scaling which uses application-level metrics will be denoted as CMAS (Custom Metrics Auto Scaling). A user may also define a less intuitive low-level criterion based on statistics (e.g. the mean) of a resource-level metric such as the CPU Utilization of a service node. Such an approach will be denoted as SMAS (Standard Metrics Auto Scaling). The approach proposed in [2] combines these two approaches.
The optimization problem to solve in the auto scaling domain can be formulated as choosing such methods and values of their parameters that the objective function is minimized subject to the QoS requirements.
This paper focuses on extending the model and method of the reactive auto scaling module presented in [2]. We propose the following improvements of that method:
– an additional error-based criterion for determining the overloaded state of the system (Sect. 3),
– a method of obtaining limits for the group CPU Utilization, adapted to the number of currently launched virtual machine instances, which may better choose the moment of launching scaling-out (Sect. 4),
– a more aggressive scaling-in strategy based on a probability function of turning off a redundant service node (Sect. 5).

2 The Auto-scaled Distributed System Designed for AWS Cloud Infrastructure
In the considered model [2] the quality of service requirement is a constraint based on statistics of execution times of a selected business-critical service. A user may explicitly set Tq_acc – a threshold (maximum value) for Tq, a high-order quantile of execution times (the qth quantile). If Tq > Tq_acc then we assume that the system is overloaded. A user may also set MVacc – a threshold (minimum value) for MV, the mean value of execution times. If MV < MVacc then we assume that the system is not loaded enough.
In our work we consider a simple cloud-aware information system (Fig. 1) that consists of:

– a load balancer which exposes the service outside the cloud and internally distributes requests among the service nodes,
– multiple (n) service nodes, which internally expose SOAP/WebServices to the load balancer; the so-called auto scaling group consists of these service nodes and the load balancer,
– a DaaS node (PostgreSQL Relational Database Service) which persists data.

Fig. 1. The cloud-based architecture of the system: an Elastic Load Balancer, (n = 3) Elastic Compute Instances (service nodes), and an Amazon RDS service (DaaS node).

The software module proposed in [2], which controls virtual machines (i.e. creates/destroys an instance of a service node), is responsible for scaling out/in.
The scaling-out procedure uses an application-level custom metric – Tq – and an AWS built-in resource-level metric – CPU Utilization. The module tries to use an estimator of Tq. If the estimator of Tq exceeds Tq_acc, scaling-out should be performed. When the estimator of Tq is not available (too few observations, so we cannot positively verify at the assumed level that the estimator of Tq belongs to the assumed confidence interval), the module uses GroupCPUUtil – the group CPU utilization (the mean of the CPU utilizations of the service nodes). If GroupCPUUtil exceeds MaxCPUUtil, the system is overloaded (but not by the selected business-critical service) and scaling-out should be performed, too.
The scaling-in procedure uses an application-level custom metric – MV – and again the built-in resource-level metric – CPU Utilization. When the estimator of MV is not available (too few observations, so it is not statistically confident), the module uses GroupCPUUtil. If GroupCPUUtil falls below MinCPUUtil, the system is not loaded enough and scaling-in should be performed.
The algorithm based on the custom metrics (Tq or MV) was called CMAS (Custom Metrics Auto Scaling). The supplementary algorithm based on the built-in metric (GroupCPUUtil) was called SMAS (Standard Metrics Auto Scaling). Both CMAS and SMAS check the conditions for Tq, MV and GroupCPUUtil at regular moments in time (determined by the interval Ti). They launch scaling only if the condition is satisfied at least m times during the last M tries (commonly m > M/2).
3 Analysis of System Efficiency
To describe the efficiency characteristics of the system, we load it with a sequence of requests of the selected business service. We assume an exponential distribution of the intervals between subsequent requests, with a mean interval equal to 1/λ. The results of loading a system with n = 1, 2, 3 service nodes may look like those shown in Fig. 2.
Figure 2 presents how the mean value of execution time – MV^(n), the qth quantile of execution times – Tq^(n), and the percentage of error requests per unit of time – Err^(n), for n = 1, 2, 3, depend on the increasing system load λ.
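For reference, such a load profile (exponentially distributed inter-request intervals with mean 1/λ) can be generated as sketched below; the intensity and duration are arbitrary example values.

import random

def request_arrival_times(lam, duration_s):
    """Generate request timestamps with exponential inter-arrival times (mean 1/lam)."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(lam)  # next inter-arrival gap
        if t > duration_s:
            return arrivals
        arrivals.append(t)


print(len(request_arrival_times(lam=0.2, duration_s=1200)))  # roughly 240 requests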
In most cases, error requests appear because nodes become overloaded. This may happen either to a service node or to the DaaS node. We may directly scale the system out by multiplying service nodes, but we have no direct influence on scaling the DaaS node. Thus we may expect that, for an overloaded system with many service nodes, most error requests result from overloading of the single DaaS node.
Quality of Service requirements define a not-overloaded system as one where both criteria Tq^(n) ≤ Tq_acc and Err^(n) ≤ Err_acc^(n) are satisfied. By increasing the system load we may obtain the highest values λ_max^(n) (blue in Fig. 2) for which Tq^(n) ≈ Tq_acc and Err^(n) ≤ Err_acc^(n), for n = 1 (Fig. 2a, b), n = 2 (Fig. 2c) and n = 3 (Fig. 2d).
In Figs. 2 and 3, green marks acceptable operating points, blue marks boundary ones, and red or brown marks unacceptable ones.
We want to point out that the single criterion Tq^(n) ≤ Tq_acc is not enough to determine a not-overloaded system. When the DaaS node becomes overloaded, a time-out barrier may be crossed in the communication between a service node and the DaaS node. The architecture of the system should be adapted to such situations. Modern systems (see e.g. the Repository of Electronic Medical Documentation – RepoEDM [2]) are based on a microservices architecture and supported by functionality which minimizes the propagation of failure cascades and accelerates the backward information about time-outs (Hystrix¹). A so-called self-healing² mechanism (based on Hystrix/Eureka) reports the service as unavailable so that subsequent requests do not run into the same timeouts. This results in very fast responses for error requests targeted at the overloaded DaaS node. This is illustrated in Fig. 3b, where the empirical probability density function is bimodal. The execution times near the first local maximum (values close to error MV(1)) correspond to error requests, while the execution times near the second local maximum (values close to corr. MV(1)) correspond to correctly processed requests. Although the system is overloaded (Fig. 3b) and most requests are processed incorrectly with time-outs (the mass near error MV(1) is greater than the mass near corr. MV(1)), the mean value and the qth quantile are lower than those of the not-overloaded system (Fig. 3a).
1 GitHub – Netflix/Hystrix (2016). https://github.com/Netflix/Hystrix
2 Hystrix and Eureka: the essentials of self-healing microservices (2016). https://www.dynatrace.com/blog/top-2-features-self-healing-microservices

Fig. 2. Dependency between the load intensity λ and:
– the mean value of execution times – MV^(n) (dashed line),
– the qth quantile of execution times – Tq^(n) (solid line),
– the % of error requests per unit of time – Err^(n) (fat dashed line);
(a) some operating points for a one-service-node system (n = 1),
(b) outlines of hypothetical courses of MV^(1), Tq^(1), Err^(1) (n = 1),
(c) some operating points for a two-service-node system (n = 2),
(d) some operating points for a three-service-node system (n = 3). (Color figure online)

Fig. 3. Probability density function (PDF) of execution times T for a one-service-node system: (a) for a not-overloaded system (green line, intensity λ1) and a boundary overloaded one (blue line, λ2), (b) for an overloaded system (red line, λ3). (Color figure online)

The satisfied condition Tq3^(1) < Tq2^(1) = Tq_acc at the third operating point (red in Fig. 2a) may lead to the incorrect conclusion that the overloaded system from Fig. 3b can be accepted as not overloaded. However, it does not satisfy the Err-based criterion, so it is finally rejected according to the QoS requirements.

4 Improvement of Scaling-Out in SMAS
The SMAS model presented in [2] was based on the assumption that the MaxCPUUtil value obtained for a one-service-node system is accurate enough for a multi-service-node system. This assumption is only approximately valid, because we can observe that CMAS and SMAS create instances at different times even for the same load profile. Adapting the MaxCPUUtil values to n – the number of running service nodes – allows SMAS to behave almost the same as CMAS, i.e. we may observe situations where SMAS and CMAS increase the number of service nodes at almost the same moments of time.
We have already noticed that the load of the single DaaS node is not distributed in the way the load directed to many service nodes is. As the load increases, the CPU utilization of the DaaS node also increases, the DaaS node becomes slower, and the portion of time spent processing a single request in the DaaS node grows. Because we want to hold the same Tq_acc under increasing load, the portion of time spent processing in a service node should decrease; thus a service node should be faster and its CPU Utilization has to be lower.
We may experimentally find values of MaxCPUUtil dependent on n. The values of MaxCPUUtil(n) determine when the auto scaling module should switch the system from n service nodes to n + 1. These values may be obtained as the means of the CPU Utilizations of n service nodes at the boundary operating points, i.e. for loads specified by λ_max^(1) (when n = 1), …, λ_max^(3) (when n = 3), … (Fig. 4). Such a hypothetically decreasing dependency suggests that switching from n to n + 1 (n > 1) will happen earlier (i.e. for smaller values of GroupCPUUtil) than in the method from [2], where there was only one, high value – MaxCPUUtil(1) – for all n.

Fig. 4. MaxCPUUtil threshold for SMAS adapted to n – the number of operating service nodes.

5 Improvement of Scaling-in in CMAS
Following the goal of minimizing the objective function MeanCost (Eq. 1), an improvement of the scaling-in procedure is proposed. Let us recall the rather conservative behaviour in [2] – scaling-in is launched when the criterion MV < MVacc is satisfied m times during M tries. In more detail, we obtain M̂V (the estimator of MV) and verify that M̂V is less than MVacc at an assumed confidence level p. Such an M̂V we call confident. In [2] we only detect the fact that the criterion is satisfied, but we do not use the difference MVacc − M̂V.
To make the above-mentioned scaling-in strategy more effective, we propose to scale in when the criterion is satisfied J times, where 1 < J ≤ m, again taking into account only confident values of M̂V. Although we introduce a nondeterministic factor, we want to keep compatibility with the current strategy: satisfying the criterion m times always launches scaling-in, i.e. with probability equal to 1. We do not want to fire scaling-in upon only one try.
Let us denote:
– M̂V_j – the confident estimators for j ≤ J,
– s = Σ_{j=1..J} M̂V_j,
– s0 = J · MVacc.
We introduce a function of the probability (p-function) of launching scaling-in as follows:

p(s, J) =
  0   for s > s0,
  0   for J = 1,
  1   for 0 ≤ s ≤ s0 and J = m,
  ((1 − (J−1)/(m−1)) / (0 − s0·(J−1)/(m−1))) · (s − 0) + 1   for 0 ≤ s ≤ s0·(J−1)/(m−1) and J ∈ {2, …, m−1},
  (((J−1)/(m−1) − 0) / (s0·(J−1)/(m−1) − s0)) · (s − s0) + 0   for s0·(J−1)/(m−1) < s ≤ s0 and J ∈ {2, …, m−1},    (2)

i.e. for J ∈ {2, …, m−1} the probability decreases piecewise linearly from 1 at s = 0, through (J−1)/(m−1) at s = s0·(J−1)/(m−1), down to 0 at s = s0. The function is easier to understand with the help of Fig. 5.

Fig. 5. Function of probability of launching scaling-in (J = 1, . . . , m).

In the new scaling-in method, denoted CMAS∗, we scale in with the probability obtained from the p-function (Eq. 2).
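A minimal sketch of this probabilistic trigger, following the piecewise-linear form of Eq. (2) as reconstructed above (the exact coefficients should be checked against the original formulation):

import random

def p_scale_in(s, J, s0, m):
    """Probability of launching scaling-in for J confident criterion hits and sum s (Eq. 2)."""
    if s > s0 or J <= 1:
        return 0.0
    if J >= m:
        return 1.0
    level = (J - 1) / (m - 1)     # plateau value (J-1)/(m-1)
    knee = s0 * level             # breakpoint s0*(J-1)/(m-1)
    if s <= knee:
        return 1.0 + (level - 1.0) * s / knee   # line from (0, 1) to (knee, level)
    return level * (s0 - s) / (s0 - knee)       # line from (knee, level) to (s0, 0)


def maybe_scale_in(s, J, s0, m):
    if random.random() < p_scale_in(s, J, s0, m):
        print("launch scaling-in: remove one service node")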

6 Some Experimental Results
In the experiments we used the real RepoEDM system described in [2] and ran it in the Amazon Web Services cloud. The results illustrate the SMAS/CMAS behaviour after the improvements introduced in Sects. 4 and 5.
Following the idea from Sect. 4, for an assumed Tq_acc ≈ 4000 ms we obtained MaxCPUUtil(1) ≈ 82%, MaxCPUUtil(2) ≈ 71%, MaxCPUUtil(3) ≈ 63%. The differences among these values proved to be statistically significant at the assumed confidence level (p = 0.9).
To evaluate the improvement of SMAS scaling-out we tested both SMAS and CMAS with λ increased linearly in time for 20 min, followed by 20 min with no load. The test was repeated 10 times. The improved model is denoted by SMAS∗.
Let us introduce the following coefficient:

Mean( (T̂q_CMAS − T̂q_SMAS) / T̂q_CMAS ) · 100%.    (3)
Experimentally, we obtained a value of about 11% for the original SMAS (with a single MaxCPUUtil) and about 6% after the improvement, for SMAS∗ with the series of MaxCPUUtil(n) values. This result shows that under SMAS∗ the system behaves more similarly to CMAS, thanks to the scaling-out improvement.
To evaluate the improvement of CMAS, i.e. to compare pure CMAS with CMAS using the p-function of the probability of launching scaling-in (denoted by CMAS∗), we used a test profile defined by the sequence: constant λ for 20 min, λ decreasing linearly to zero for 20 min, and 10 min of no load. m = 4 and M = 7 were used in the experiment. The test was repeated 10 times.
For the following coefficient:

Mean( (MeanCost_CMAS − MeanCost_CMAS∗) / MeanCost_CMAS ) · 100%    (4)

we obtained a value of about 6%, which shows a slight improvement in cost.

7 Conclusions
We expect rather poor effectiveness of load prediction during a process of mass migration of systems to the cloud. Such a process is complicated and depends on many technical, financial and organizational factors. For such transitional situations we recommend a reactive model of auto scaling.
Although the idea of equivalence between a custom-metrics-based QoS requirement (in CMAS) and a resource-metrics-based QoS requirement (in SMAS) is not complicated, we have not encountered such an approach in known reactive models. We think that CMAS is well aligned with user expectations. However, CMAS may sometimes not work (because of a lack of metric data), so it must be supported by an adjusted SMAS.
In our work we provided a control module which implements the proposed cooperative models of auto scaling (CMAS/SMAS). We also give the user a method and a software tool for finding the parameters of SMAS that are equivalent to given parameters of CMAS. The method and tool allow tuning the parameter values of SMAS (adjusted to CMAS); these values may later be used in the proposed auto scaling control module.
The advantage of the CMAS/SMAS approach lies in its intuitiveness and simplicity compared to other, more complex reactive models such as [3].
The paper presents some improvements of the reactive auto scaling model proposed in [2].
In the paper we justify the need for an error metric (the percentage of incorrectly processed requests per unit of time). This allows the impact of error requests on the main QoS statistic (the qth quantile of the execution times of all requests) to be minimized.
The first contribution is an extension of the scaling-out model that allows an early reaction to an increased load by using thresholds for the group CPU Utilization that depend on the number of currently operating service nodes. Turning on an additional node early may result in better QoS (an early attempt to avoid overloading).
The second contribution is an extension of the scaling-in model in which a function of the probability of launching scaling-in (decreasing the number of service nodes) was introduced, giving a nondeterministic solution. Turning off a service node early may lower the cost (nodes turned off early do not burden the budget).
Future work will concentrate on detailed experimental verification of the proposed extensions under different load profiles.
We also plan to verify the usefulness of introducing non-linear elements such as hysteresis and dead zones into the scaling-in/-out algorithms that operate on the metrics.

References
1. Qu, C., Calheiros, R.N., Buyya, R.: Auto-scaling web applications in clouds: a taxonomy and survey. CoRR abs/1609.09224 (2016)
2. Augustyn, D.R., Warchal, L.: Metrics-based auto scaling module for Amazon Web Services cloud platform. In: Kozielski, S., Mrozek, D., Kasprowski, P., Malysiak-Mrozek, B., Kostrzewa, D. (eds.) BDAS 2017. CCIS, vol. 716, pp. 42–52. Springer, Cham (2017). doi:10.1007/978-3-319-58274-0_4
3. De Assuncao, D., Cardonha, M., Netto, M., Cunha, R.: Impact of user patience on auto-scaling resource capacity for cloud services. Future Gener. Comput. Syst. 55, 1–10 (2015)
4. Jiang, J., Lu, J., Zhang, G., Long, G.: Optimal cloud resource auto-scaling for web applications. In: 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2013, Delft, Netherlands, 13–16 May 2013, pp. 58–65 (2013)
5. Roy, N., Dubey, A., Gokhale, A.: Efficient autoscaling in the cloud using predictive models for workload forecasting. In: Proceedings of the 2011 IEEE 4th International Conference on Cloud Computing, CLOUD 2011, pp. 500–507. IEEE Computer Society, Washington, DC (2011)
6. Calheiros, R.N., Masoumi, E., Ranjan, R., Buyya, R.: Workload prediction using ARIMA model and its impact on cloud applications' QoS. IEEE Trans. Cloud Comput. 3(4), 449–458 (2015)
Method of the Management of Garbage Collection in the “Smart Clean City” Project

Alexander Brovko(✉), Olga Dolinina, and Vitaly Pechenkin

Department of Information Systems and Technology, Yuri Gagarin State Technical University of Saratov, Saratov 410054, Russia
[email protected]

Abstract. This paper presents a solution to the problem of route calculation for garbage removal trucks. The entire system architecture is presented, including the route calculation algorithms, the server and client software, the electronic devices on the garbage containers, the mobile solution and the collaboration with other routing services. The dynamic network model and the optimization criterion based on container status and road traffic, as well as the knowledge base, which together form the hybrid control system responsible for the timing of garbage collection, are introduced.

Keywords: Dynamic network · Hybrid control system · Smart City · Traffic flow · Knowledge base · Expert rules · Intellectual solution


1 Introduction
The management of garbage removal is a particular problem for all major cities, especially those overloaded by transport, with increasing traffic density and growing amounts of garbage. In every city there are designated organizations and companies engaged in the collection and removal of garbage to refuse centers. All companies manage waste disposal according to a schedule or according to customer demands. However, there are situations when a garbage collection truck (GCT) arrives but the garbage containers are only half full, while at the same time the GCT does not arrive at a genuinely full garbage container. This is due to managers not accounting for the actual container content. This problem forms part of a general theme of creating a healthy environment in modern cities, usually associated with the term “Smart Environment”, described by attractive natural conditions, pollution, resource management and also by efforts towards environmental protection [1]. “Smart Environment” can be considered a part of the “Smart City” technology. There are many definitions of the “Smart City” term [2]. An important component of the smart city concept is the use of new mobile information technologies. This approach is emphasized by the following definition of a city “combining ICT and Web 2.0 technology with other organizational, design and planning efforts to dematerialize and speed up bureaucratic processes and help to identify new, innovative solutions to city management complexity, in order to improve sustainability and livability [3]”.
Specialised software and hardware already exist that allow the problem of calculating a waste disposal schedule to be solved. There are various approaches which use different types of detectors and allow online control of the filling level of garbage containers [4–8], but the problem has not yet been solved for the case of dynamic changes in container fullness combined with current changes of the real traffic situation in the city. Currently, there are no solutions which take multiple optimization criteria into account simultaneously. The authors propose a solution using three optimization criteria for the garbage collection task: minimum route length, processing of filled containers only, and taking the dynamic traffic situation into account.

2 Description of the Proposed Method
A special system termed “Smart Clean City” was developed to optimize garbage collection and manage this process. The system carries out the following tasks:

1. generating a message from the garbage containers to indicate when they are full;
2. sending a GCT for garbage collection only if the containers are full;
3. developing the optimal route for the garbage collection;
4. distributing the containers rationally over the areas.
“Smart Clean City” allows the following urban, social and economic problems to be addressed:

– increasing the economic efficiency of the company responsible for garbage collection in terms of fuel, funding for equipment maintenance, optimization of staff and resources, and the amount of time dedicated to garbage collection;
– maintaining proper urban sanitation.
The system consists of two parts: software and signaling equipment. The technical part is represented by:
– equipment installed on each garbage site;
– equipment installed on every GCT.

Each area for garbage containers (AGC) is equipped with two types of electronic devices: one bidirectional receiver/transmitter unit and sensors which determine the fullness of each container via a transmitter.
Each container is equipped with a vandal-resistant detecting device located on the side wall of the container. It includes level sensors (both infrared and ultrasound) to monitor filling levels. A sensor is connected to a radio transmitter that transmits a signal indicating the level of each container to the host receiver-transmitter unit, all elements being powered by an internal power supply. To save energy, the
sensors are not active all of the time but are polled with a frequency controlled by the microcontroller. This reduces the probability of a false signal transmission.
Data from the container transceiver unit is transmitted to the control room using a built-in GSM module, which makes it possible to transmit the collected data via cellular communication. The information goes to the processing server and is processed by the server software. The software system consists of a client and a server.
The tasks of the client part of the application are:
– displaying routes graphically;
– client registration;
– warning about the need to empty the containers;
– notification of inability to continue work due to unavoidable accidents;
– automatic authorization to get information depending on the truck driver's area of responsibility;
– automatic authorization to obtain the client zone of responsibility;
– periodically sending information about the truck's position to the server.
The tasks of the server part of the application are:
– storing information about the AGCs: address, location, number and capacity of containers, date of the last maintenance;
– storing information about garbage trucks: type, number of mobile devices, device ID, capacity;
– receiving and processing messages from the garbage sites;
– aggregating information about the fullness of an AGC in general and deciding on the need for removal of garbage from it;
– assigning the GCT to an AGC that needs to be cleaned, taking its current location into account;
– generating the route in accordance with the road traffic data;
– transmitting the calculated route (to an AGC) to the GCT;
– receiving information from the client application;
– dynamically updating the status of AGCs in accordance with the actual situation;
– reporting and statistical analysis.
The software element provides the interface between the AGCs, the dispatch center and the GCTs. The system operates as follows:
1. Information about the filling levels of the garbage containers in the area is transmitted to the central server, where the software calculates the routes to be implemented. If the whole AGC is filled to more than 70%, the system decides to remove the garbage.
2. The AGC is added to the list of sites to be visited.
3. The garbage sites are represented as nodes of the city network (see details below). The system changes the weights of the edges on the basis of traffic data and road conditions. Traffic data are taken from an online road map service.
4. The server application calculates optimal routes on the basis of the time needed by each GCT. Thus, the driver sees only the route to an area of garbage containers which needs to be cleaned.

An overview of the system structure is shown in Fig. 1.

Fig. 1. Clean city system overview

The suggested formalization and method for solving the optimization problem are original for the following reasons. Firstly, the network model of the transport system has a dynamic nature [9]. Secondly, information about container filling is handled automatically by the system during the development of the garbage collection plan.

2.1 Formal Network Problem Statement for a Dynamically Optimal Route
To solve the problem, let us define the weighted mixed network (containing both directed and undirected edges)

G = (V, E, f, g, w)    (1)

where:
V – the set of network vertices;
E – the set of directed arcs (edges) corresponding to the city road network; the arcs connect the locations of garbage containers, the places of their discharge, the home bases of the GCTs (garages) and the road network linking them;
f : V × T → R – a vertex weight function which, at time t, determines the time for the truck to pass through the vertex;
g : E × T → R – an arc weight function which, at time t, determines the time for the truck to pass along the arc;
w : V × T → R – a vertex weight function which, at time t, determines the number of filled containers at that location.

The vertices are superimposed on the map of the city road network. Arcs and edges correspond to roads (with one-way and two-way traffic, respectively).
V is defined as

V = V1 ∪ V2 ∪ V3 ∪ V4,    (2)

where:
V1 – network vertices corresponding to the garbage sites with containers;
V2 – network vertices corresponding to the solid domestic garbage dumps (SDGD);
V3 – network vertices corresponding to the garage locations;
V4 – vertices corresponding to the connection points of road segments (crossroads).
Let us define the mapping f (temporal characteristics of road network vertices). A vertex weight defines the time the GCT needs to pass through this vertex; the weight depends on membership in the subsets V1, V2, V3, V4 and on the current time, as follows:
– f(v, t) – the time required for loading the contents of the containers at the specific garbage site, for v ∈ V1;
– f(v, t) – the time required to unload the GCT, for v ∈ V2;
– f(v, t) = 0, for v ∈ V3;
– f(v, t) – a value that characterizes the delay of the GCT at a crossroads (traffic light, unregulated crossroad), determined on the basis of experimental data, for v ∈ V4.
Let us define the mapping g (time characteristics of the road network arcs). For all network arcs (edges) e ∈ E, the value g(e, t) represents the time needed to pass the route segment at time moment t (it depends on the speed limit, the quality of the roadway, the segment length and the traffic on this segment), as determined by the current traffic situation.
Let us define the mapping w (number of filled containers). For all vertices from V2, V3, V4 and for any time moment t the value w(v, t) = 0. For vertices that correspond to locations of garbage containers, the function returns the number of full containers at time t:

w(v, t) = number of filled containers at the site v at time moment t, if v ∈ V1;
w(v, t) = 0, if v ∉ V1.
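To make the model concrete, below is a minimal sketch of how such a time-dependent network could be represented in code using the networkx library; the vertex categories, attribute names and example weight values are illustrative assumptions, not part of the original system.

import networkx as nx

# Mixed network G = (V, E, f, g, w): one-way streets as single arcs,
# two-way streets as a pair of opposite arcs in a directed graph.
G = nx.DiGraph()

# Vertices with their category (V1: AGC, V2: dump, V3: garage, V4: crossroads)
G.add_node("agc_7", kind="V1", filled_containers=3)          # w(v, t)
G.add_node("dump_1", kind="V2")
G.add_node("garage", kind="V3")
G.add_node("x_12", kind="V4", crossing_delay_s=25)           # f(v, t) for V4

# Arc weight g(e, t): travel time in seconds, refreshed from a traffic service
G.add_edge("garage", "x_12", travel_time_s=180)
G.add_edge("x_12", "agc_7", travel_time_s=240)

def f(G, v, t):
    """Vertex delay f(v, t) according to the vertex category."""
    kind = G.nodes[v]["kind"]
    if kind == "V1":
        return 300 + 120 * G.nodes[v]["filled_containers"]   # loading time
    if kind == "V2":
        return 600                                           # unloading time
    if kind == "V3":
        return 0
    return G.nodes[v]["crossing_delay_s"]                    # V4 crossroads delay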
The values assigned to the vertices and arcs of the network at any time t are called the markup. The markup at the starting point (t = 0) is called the initial markup. The dynamics of the network changes depend on the actual road traffic situation and on the fullness levels of
the containers at the garbage sites (which vary in time); these result in changes of the function values, i.e. changes of the markup.
Let the GCT have a capacity of L containers. At the initial moment (t = 0), AGC vj holds Kj filled containers that need to be taken to a domestic garbage dump; it follows that Kj = w(vj, 0). Consequently, the GCT must visit a point of discharge (dump) at least Sj times for this AGC, where

Sj = ⌈Kj / L⌉    (3)

The total number of loading–discharge cycles in this case is equal to

S = Σ_{1 ≤ j ≤ |V1|} Sj.

The total number of filled containers is equal to

K = Σ_{1 ≤ j ≤ |V1|} Kj.
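A small sketch of these quantities, assuming the reconstructed ceiling in Eq. (3) and illustrative per-AGC counts of filled containers:

import math

def required_cycles(filled_per_agc, truck_capacity):
    """Compute S_j = ceil(K_j / L) per AGC, plus the totals S and K."""
    s_j = {agc: math.ceil(k / truck_capacity) for agc, k in filled_per_agc.items()}
    return s_j, sum(s_j.values()), sum(filled_per_agc.values())


s_j, S, K = required_cycles({"agc_1": 7, "agc_2": 3, "agc_3": 12}, truck_capacity=5)
print(s_j, S, K)  # {'agc_1': 2, 'agc_2': 1, 'agc_3': 3} 6 22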

Suppose that there is a single truck that collects the garbage from all AGCs and transfers it to a dump. In this case there are several possible optimization criteria. In this paper we consider only time optimization, but the task is in reality a multi-objective one; other directions of analysis can be considered, such as maximizing the volume of garbage handled by the trucks.
Designations: let P be a route in the network G and U ⊆ V. Denote by |P| the length of the route P, and by |P|_U the number of occurrences of vertices from the set U in P. Clearly, for any route in the network, |P| = |P|_V.

2.2 Problem Statement for One Truck
For the given network, find a route

P = v0, v1, v2, . . . , vm

such that:
1. v0 = vm, v0 ∈ V3 (the GCT departs from the garage at the beginning of the work and returns to the same garage after work completion);
2. ∀ vi ∈ V1: |P|_{vi} = Si (every AGC is visited as many times as necessary to empty its filled containers);
3. |P|_{V2} = S (the dumps are visited S times – the number of times required for unloading the filled containers);
4. Σ_{i=1..m} (f(vi, ti) + g((vi, vi+1), ti)) → min, where the minimum is taken over all routes that satisfy conditions 1, 2, 3, and ti corresponds to the time of the event related to network vertex vi. Clearly,

t1 < t2 < . . . < tm.


2.3 The Problem Generalization for n Trucks
Let the number of garbage trucks used be equal to n. For the given network, n routes must be found,

Pi = v0^i, v1^i, v2^i, . . . , v_{mi}^i,   (i = 1, . . . , n)

satisfying the following conditions:
1. ∀ i = 1, . . . , n: v0^i = v_{mi}^i, v0^i ∈ V3 (the garbage trucks depart from the garage at the beginning of the work and return to the same garage after work completion);
2. Σ_{i=1..n} |Pi|_{V1} = S (every AGC is visited as many times as necessary to empty its filled containers);
3. Σ_{i=1..n} |Pi|_{V2} = S (the dumps are visited S times – the number of times required for emptying all filled containers);
4. Σ_{i=1..n} Σ_{j=0..mi} (f(vj^i, tj^i) + g((vj^i, v_{j+1}^i), tj^i)) → min, where the minimum is taken over all routes that satisfy conditions 1, 2, 3, and tj^i corresponds to the time of the event related to a network vertex for the i-th truck. Clearly, for any i = 1, . . . , n,

t1^i < t2^i < . . . < t_{mi}^i.

If it is necessary to provide a uniform load distribution for the garbage trucks, then one more condition should be added:
5. Let Wi = Σ_{j=0..mi} (f(vj^i, tj^i) + g((vj^i, v_{j+1}^i), tj^i)); then ∀ i ≠ k, i, k = 1, . . . , n: |Wi − Wk| → min.


The path-building algorithm described below should be implemented taking into account the current traffic situation in the city. Information about the amount of time required for different segments of the route under current traffic conditions can be extracted from various online map services, for example using the Google Maps Directions API [10]. This online service is available through an HTTP interface, with requests constructed as a URL string, using text strings or latitude/longitude coordinates to identify locations, along with an API key. An HTTP request to the Google Maps Directions API can contain useful parameters such as “waypoints” (intermediate points of the route which should be visited; up to 23 intermediate points for business applications), “avoid” (objects which should be avoided in the route), “mode” (type of transport in use), and others. The response from the service is obtained as a JSON array termed “routes”, consisting of one or more segments (“legs”), depending on the presence of intermediate points in the request. Each segment of the route is described by the parameters “distance” (length of the segment in meters), “duration” (driving time for this segment in seconds) and “duration_in_traffic” (driving time calculated using statistical information and the current traffic situation). These response parameters are taken into account in the optimal path calculation when the values of the function g(e, t) are updated by the algorithm described
below. The information obtained is then used to determine whether the calculated shortest route would lead the truck into a traffic jam; in that case the route has to be rebuilt, considering the road situation.
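A minimal sketch of how the g(e, t) values could be refreshed from this service is given below; error handling and the actual update of the network arcs are omitted, the API key is a placeholder, and the request fields should be checked against the current Directions API documentation.

import requests

def segment_times(origin, destination, api_key, waypoints=None):
    """Query the Google Maps Directions API and return (distance_m, seconds)
    per leg, preferring duration_in_traffic when it is present."""
    params = {
        "origin": origin,                 # e.g. "51.533,46.034" or an address
        "destination": destination,
        "departure_time": "now",          # needed for duration_in_traffic
        "key": api_key,
    }
    if waypoints:
        params["waypoints"] = "|".join(waypoints)
    body = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json", params=params
    ).json()
    legs = body["routes"][0]["legs"]
    return [
        (leg["distance"]["value"],
         leg.get("duration_in_traffic", leg["duration"])["value"])
        for leg in legs
    ]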
Let us outline the algorithm of dynamic route calculation for a truck which has the status “Ready” and makes a request for the next AGC – “Get optimal route to AGC”. The algorithm assumes the following GCT statuses and related information requests:
– (Status) Registration
– (Status) Ready
– (Status) Faulty truck
– (Status) On the route
– (Request) Get all day schedule
– (Request) Get optimal route to AGC
– (Request) Get optimal route to SDGD.

Algorithm of the dynamic optimal route calculation

ONE TRUCK SCHEDULING ALGORITHM
Input: <Truck Position>, <Service Type Request>
Output: Optimal route to AGC

If <Service Type Request> = <Get next container area> then
    /* Get statuses of AGCs; statuses are <Filled>, <Maintenance>, <Cleaned> */
    For all v ∈ V1 get Status(v) EndFor
    /* Get traffic situation */
    UPDATE info on g(e, <Current time>)
    /* Get actual fullness levels */
    AGC_SET := ∅
    For all AGC v ∈ V1 do
        If Status(v) = <Filled> then
            Update info on w(v, t)
            AGC_SET := AGC_SET.Add(v)
        EndIf
    EndFor
    /* Select the optimal route according to the expert rules */
    /* and the current fullness levels */
    AGC_Next := GETOptimalAGC(AGC_SET)
    AGC_Next_Path := GETOptimalRouteToAGC(AGC_Next)
    Transfer data to the client application
    Status(AGC_Next) := <Maintenance>
EndIf
The method GETOptimalRouteToAGC uses the k-shortest simple paths search, an implementation of Yen's algorithm (loopless, single source) [11]. This algorithm has a computational complexity of O(kn³), where the O(n²) factor is due to the shortest-path calculation and n denotes the number of nodes in the road network model. The value of k is empirically set to 5. All built routes are ranked using the expert rules which are stored in the knowledge base and describe the expert knowledge about the traffic situation in the considered period of time. The final list of routes, ordered by their length and evaluation, is given to the truck driver. After the garbage is taken from a container, the sensor installed at the AGC updates its fullness status in the system; in the case of a broken sensor the driver updates the status in the system manually and the AGC is marked for sensor replacement (Fig. 2). Figure 3 shows the decision-making procedure.
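For illustration, candidate routes in the spirit of GETOptimalRouteToAGC could be enumerated with networkx, whose shortest_simple_paths yields loopless paths in order of increasing weight (a Yen-style enumeration); the ranking by expert rules is left out, and the edge weight name matches the earlier network sketch rather than the authors' implementation.

from itertools import islice
import networkx as nx

def k_candidate_routes(G, source, target, k=5, weight="travel_time_s"):
    """Return up to k loopless candidate routes ordered by total travel time."""
    paths = nx.shortest_simple_paths(G, source, target, weight=weight)
    return list(islice(paths, k))


# Example on the small network sketched earlier:
# for route in k_candidate_routes(G, "garage", "agc_7"):
#     print(route)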

Fig. 2. The filling level of containers. Client application screen

The Smart Clean City approach combines the described algorithm of building the optimal route with an intellectual approach represented by a knowledge base consisting of rules of the form:

pri : ri : vi : If aj then bk with the confidence ck,    (4)

where:
ri ∈ {R} – the set of rules,
pri ∈ {PR} – the set of priorities,
vi ∈ {V} – the set of vertices V, see (2);
aj ∈ {A} – the set of facts which represent the current situation,
bk – the conclusion (action) of the rule,
ck ∈ {C} – the set of linguistic variables,
where C = {'possible', 'probable', 'most likely'} and ck represents a fuzzy variable described with a trapezoid membership function. The rules are formed by experts (from

Fig. 3. The structure of the decision-making process

the traffic police or professional drivers) who are well acquainted with the traffic situation in the city. For example, in the case of a traffic accident and the corresponding traffic jam, the experts can decide what step should be taken – to change to another route or to wait. If the described route-building algorithm tries to select the next node vi but receives a message from the mobile maps service about a high transport load on the way to vi, and the knowledge base contains a rule ri with priority pri ≥ 80, then the decision is made on the basis of the selected ri (to follow the algorithm, to select another node, or to change the route to a new one).
The knowledge base consists of rules, examples of which are presented below:
80 : r32 : v4i ∈ V4 : if status(GCT) = “on the route” AND f(vi, t) > 20 then recalculate AGC_Next
100 : r5 : v4j ∈ V4 : if status(v4j) = “busy” then continue with the calculated route with confidence “most likely”
100 : r14 : v4j ∈ V4 : if status(V4) = “busy” then use the calculated AGC_Next
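A minimal sketch of how such a confidence term could be represented with trapezoid membership functions; the breakpoints are illustrative assumptions, not the values used by the authors.

def trapezoid(x, a, b, c, d):
    """Trapezoid membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)


# Example linguistic variables for the confidence ck, defined over [0, 1]
CONFIDENCE = {
    "possible":    lambda x: trapezoid(x, 0.0, 0.1, 0.4, 0.6),
    "probable":    lambda x: trapezoid(x, 0.4, 0.55, 0.75, 0.85),
    "most likely": lambda x: trapezoid(x, 0.7, 0.85, 1.0, 1.01),
}

print(CONFIDENCE["most likely"](0.9))  # membership degree of 1.0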

3 Discussion
The described system “Smart Clean City” has been implemented in the October Region of the city of Saratov (Russia), with a population of about 1 million. A pilot
operation of the system from September 2015 till September 2016 demonstrated fuel savings of 21%, achieved by decreasing the time the trucks spend on the route in comparison with standard manual route planning. The company responsible for the garbage collection has 24 trucks.
We evaluated the scheduling with synthetic and real-time data by means of stochastic simulation in order to assess its performance. It is assumed that the region where the system was implemented has about 250 containers at 56 AGCs. There are 2 dumps attached to the region. Each container has a capacity of 100 kg, while each GCT capacity is set to 5000 kg (the actual capacity depends on the degree of compressibility of the garbage). The results are shown in Fig. 4.

Fig. 4. Performance of the optimization algorithm

At the same time, a problem was identified which is not addressed in the described system – the lack of reliable information about the compactness of the garbage to be taken away from a container. This prevents the system from taking into account the actual amount of garbage to be loaded onto the truck: the truck can take more garbage if it is compressed. Information about the fullness of the containers alone, without knowing the compactness of the garbage, cannot accurately predict the amount of garbage that can be collected by the truck at an AGC.

4 Conclusion
The “Smart Clean City” system allows garbage collection to be managed by a hybrid control system based on building the route for taking away
the garbage from the areas of garbage containers, using online information from the mobile application, which collects information about traffic jams, and rules which can correct the calculated route. In this paper we present one optimization criterion, the time to empty all full garbage containers. Obviously, the dynamic nature of the chosen mathematical model suggests using other criteria as well, for example the “uniformity” of the garbage truck loading, which imposes additional restrictions on the route calculation algorithm. The advantage of the proposed system is the integration of information on the status of the garbage containers at special areas with the real-time traffic situation.

References
1. Global Innovators: International Case Studies on Smart Cities. Research paper number 135, October 2013. http://www.gov.uk/government/publications/smart-cities-international-case-studies-global-innovators
2. Anagnostopoulos, T., Zaslavsky, A., Medvedev, A., Khoruzhnikov, S.: Top-k query based dynamic scheduling for IoT-enabled smart city waste collection. In: Proceedings of the 16th IEEE International Conference on Mobile Data Management, Pittsburgh, US (2015)
3. Chourabi, H., Nam, T., Walker, S., Gil-Garcia, J.R., Mellouli, S., Nahon, K., Pardo, T.A., Scholl, H.J.: Understanding smart cities: an integrative framework. In: Proceedings of the 45th Hawaii International Conference on System Sciences, pp. 2289–2295 (2012)
4. Toppeta, D.: The Smart City Vision: How Innovation and ICT Can Build Smart, “Livable”, Sustainable Cities. The Innovation Knowledge Foundation (2010). http://www.inta-aivn.org/images/cc/Urbanism/background%20documents/Toppeta Report 005 2010.pdf
5. Optimising Waste Collection. http://www.enevo.com/
6. Kumar, N., Swamy, C., Nagadarshini, K.: Efficient garbage disposal management in metropolitan cities using VANETs. J. Clean Energy Technol. 2(3), 258–262 (2014)
7. Kargin, R., Domnicky, A.: Routing the movement of road vehicles for the collection and disposal of waste. Roads Bridges “ROSDORNII” 28(2), 92–102 (2012). Moscow (in Russian)
8. Doronkina, I.: Optimization of solid waste utilization. Serv. Russia Abroad 1, 20 (2011). (in Russian)
9. Dolinina, O., Pechenkin, V., Tarasova, V.: Dynamic graph visualization approaches for social networks in educational organization. Vestnik SSTU 4(62), 239–242 (2011). Saratov (in Russian)
10. Google Maps Directions API. http://developers.google.com/maps/documentation/directions/
11. Yen, J.: Finding the K shortest loopless paths in a network. Manage. Sci. 17, 712–716 (1971)
Zone-Based VANET Transmission Model
for Traffic Signal Control

Marcin Bernas(✉) and Bartłomiej Płaczek

Institute of Computer Science, University of Silesia,


Bedzinska 39, 41-200 Sosnowiec, Poland
[email protected], [email protected]

Abstract. The rising number of vehicles and the slowly growing transport infrastructure result in congestion, which has become an important research topic for transportation and control sciences. Recent advances in vehicular ad-hoc networks (VANETs) allow traffic control to be tackled as a real-time problem. Recent research has shown that VANET technology can improve traffic control at intersections by dynamically changing the sequences of traffic signals. Transmitting all vehicle position data in real time to a traffic lights controller can place a significant burden on the communication network, thus this paper focuses on reducing the amount of data transmitted by vehicles to a control unit. The time interval between data transfers from vehicles is defined by zones that are tuned for a given traffic control strategy using the proposed algorithm. The introduced zone-based approach reduces the number of transmitted messages while maintaining the quality of traffic signal control. The results of experiments show that the proposed method can be successfully used with various state-of-the-art traffic control algorithms.

Keywords: Vehicular networks · Traffic signal control · Data reduction · Congestion

1 Introduction
The last century saw very fast progress in the motorization industry. A vehicle, which was a luxury good one hundred years ago, has now become a necessity for functioning in modern society. Rural areas, with constantly growing populations, are not prepared for this number of vehicles and, as a consequence, traffic is disturbed by congestion. Traffic congestion is a very costly phenomenon: it causes substantial time losses for people and increases gasoline consumption [1]. To tackle this issue, a reasonable solution is to increase the throughput of intersections, which are the traffic bottlenecks. The throughput of an intersection can be increased by using traffic signal control [2]. Methods of traffic signal control can be divided into two types: fixed-time control and traffic-responsive control [3]. Traffic-responsive control has proved to be more efficient than the fixed-time approach; however, it requires reliable transfers of real-time traffic data [4]. This
study is focused on decentralized traffic-responsive control strategies designed for urban networks with multiple intersections. One such approach was based on the Backpressure routing algorithm known from computer networks [5]. The control strategy proposed by Helbing et al. [6] was inspired by self-organizing pedestrian traffic. Another approach analyses the queue size of arriving vehicles (SOTL) [7]. Houli et al. [8] assumed that each traffic light controller is an agent that is able to learn to control the traffic lights by interacting with the environment and its neighbors. Recent works in this area utilize predictions of the future traffic state in order to find optimal control actions (LH) [9]. All these methods assume that the current traffic state can be monitored by means of sensors [10] (e.g. inductive loop detectors, cameras or radars).
In recent years, VANET has emerged as a reliable data source for traffic control strategies, and there is a number of works related to VANET applications in traffic signal control. A case study of an adaptive traffic light control algorithm using VANET data was presented in [11]. The majority of solutions focus on a particular traffic control method, such as fixed-time traffic light control strategies [1] or adaptive ones [5]. Up to now, many VANET-based models have been proposed, but they are not yet popular in practical applications [12], and new data collection methods and control strategies are being researched. There is a lack of universal VANET models that can be used with any traffic signal control strategy. Therefore, this paper proposes a universal VANET communication scheme that can be implemented for most state-of-the-art traffic control strategies. The method assumes that each traffic control strategy requires data of a defined precision [13], which can be obtained from fixed traffic areas (zones) with a defined frequency. The aim of the proposed zone descriptions is to find the optimal communication patterns, which reduce the number of transmitted messages while maintaining high traffic control quality. Finally, an algorithm for finding the optimal zone description for a selected control strategy is proposed.
In the following section the zone-based communication method for V2I and V2V VANET communication is presented in detail. Section 3 describes simulation results obtained for three state-of-the-art traffic control strategies. Finally, conclusions are given in Sect. 4.

2 Proposed Model
The proposed VANET-based communication model assumes that the traffic signals controller also serves as a road side unit (RSU). Each RSU is able to communicate with vehicles via VANET. The RSU periodically broadcasts its own communication scheme and the communication schemes of the RSUs at neighboring junctions. The communication scheme defines zones with different data transmission frequencies. A vehicle moving from the previous junction to the next closest junction sends messages with the frequency assigned to the zone in which it is currently located. The message includes the position and velocity of the sending vehicle. Based on previous research concerning target tracking in WSNs [14], we assume that precise vehicle location data is especially important for making correct traffic control decisions when the vehicle is close to the junction. In this paper

Fig. 1. Overview of VANET implementation.

we assume that the precision of the data obtained by the traffic light controller is related to the time interval between the transmissions of messages containing position updates. An overview of the proposed VANET-based communication model is illustrated in Fig. 1. The VANET communication model is based on the WAVE implementation [15] and the suppression strategy proposed in [16, 17]. The communication frame is sent within one of the service channels. A vehicle which passes a junction obtains information about the next junction on its way. The boundary inlets and outlets are treated as junctions and broadcast data as well. The RSU broadcasts the information directly to vehicles and this information is not forwarded further. In this model it is assumed that junctions, as in most real applications, are connected via wired infrastructure or a cellular network, thus this communication burden is not considered. To simplify the model, each ith junction (ji) is described by a set of links Li that connect its inlet and outlet traffic with the nearby junctions. Each link la ∈ Li of junction i is described as la = (ji, jk) or la = (jj, ji), where jj, jk are nearby junctions or boundary RSUs. In this research it is assumed that information about the internals of the traffic signal control strategy is not necessary: the control strategy is treated as a black box and is described by a control function C. The input of the function C is the traffic state dataset obtained from the VANET (set D). D is the set of vehicles bx registered on the traffic lanes, together with their positions and velocities. Elements of this set are tuples containing vehicle velocity, position and road id:

Di(t, Zi) = {bx}, bx = (px, vx, rx),    (1)


where: bx – a vehicle registered on a given road, px – the distance to the junction, vx – the velocity of the vehicle, rx – the current road, x – a unique vehicle identifier (license plate or MAC address of the communication device), Zi – the zone definition for a given junction, described below.
The quality of a selected control strategy is described by the traffic delay (td), average speed (tv) and travel time (tt) of vehicles after t time steps. The traffic delay measure (td) is defined as the sum of the delays of individual vehicles; the delay of a vehicle is calculated as the number of time steps (1 s) at which the velocity of the vehicle was equal to 0 (the vehicle was waiting in a queue). The average speed (tv) is measured as the average velocity over all vehicles. Finally, the travel time (tt) is the sum of the times needed to pass through the monitored area. Thus, the performance of a traffic control strategy C for junction i, based on the data provided by the VANET monitoring function D, is defined as follows:

(td, tv, tt) = Ci(Di(t, Zi))    (2)

where: td, tv, tt – parameters of the traffic control quality, Ci – the control strategy for the ith junction, Di – the VANET-based traffic state monitoring, t – the considered time period, Zi – the zone definition. The function C for the ith junction can be any traffic signal control strategy that dynamically adapts to the monitored traffic state. In this research three representative control strategies were used: a modification of LH [9], which selects the optimal strategy based on predictions; a simple strategy that takes into account only the current traffic data (SOTL) [7]; and the Backpressure strategy [5], based on the routing algorithm for computer networks. The aim of this research is to find optimal communication patterns (zone definitions Zi) that reduce the number of transmitted messages while retaining high traffic light control quality.
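As an illustration, the three quality measures could be computed from per-vehicle velocity traces sampled every second, as sketched below; the data layout is an assumption made for this example.

def traffic_quality(velocity_traces):
    """Compute (td, tv, tt) from {vehicle_id: [v_0, v_1, ...]} sampled at 1 s.

    td -- total number of seconds vehicles spent standing still (v == 0)
    tv -- average velocity over all samples of all vehicles
    tt -- total travel time (seconds spent in the monitored area)
    """
    all_samples = [v for trace in velocity_traces.values() for v in trace]
    td = sum(1 for v in all_samples if v == 0)
    tv = sum(all_samples) / len(all_samples)
    tt = sum(len(trace) for trace in velocity_traces.values())
    return td, tv, tt


print(traffic_quality({"car_1": [0, 0, 5, 10], "car_2": [12, 13, 14]}))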
The optimal communication scheme, described by sequential zones (defined by Algorithms 1 and 2), is illustrated in Fig. 2. The zone definition is illustrated for a single road section. The vehicle has a GPS device and, based on its localization, determines the id of its current road (rx) as well as the ids of the junctions at the beginning and end of this road (3).

Fig. 2. Overview of the zones definition.

rx = (js, je),    (3)

where: js, je – the ids of the junctions at the ends of the current road. The vehicle sends data to both junctions using the zone settings. The zones are defined by an ordered set of distances from a junction (4):

Zi = {z1 = (e1, t1), ..., zj = (ej, tj), ..., zn = (en, tn)},  ∀ j ∈ 1..n−1: ej < ej+1,    (4)
Fig. 3. The RSU operations for junction i (Algorithm 1).

where: ej – the minimal Euclidean distance (ED) to the junction, tj – the maximal time interval between position updates. Each zone has its time limit within which a vehicle has to transmit data. The distances between junctions and vehicles are calculated using the Euclidean distance (ED). The operations performed by the RSU at the ith junction are summarized in Algorithm 1 (Fig. 3).
Algorithm 1 processes the data obtained from the VANET. First, it exchanges the zone definitions between nearby junctions (lines 1–4). Based on the data obtained in lines 5–7, the algorithm describes vehicle positions as intervals gx (granules) in order to cope with uncertainty [13] (line 8a). Most control algorithms cannot handle position uncertainty, thus a degranulation procedure is performed (line 8b); the degranulation procedure returns one of the probable vehicle positions, which is sent to the traffic controller (line 9). Finally, vehicles whose position intervals have moved to negative values or have left the zones are removed (line 10).
The vehicle communication follows the simple procedure defined by Algorithm 2 (Fig. 4). Algorithm 2 tracks the vehicle position using GPS data and then, based on the obtained zone definition, calculates the interval at which the communication should be performed for both junctions connected by the given road section (lines 6–7). Communication is only possible within the defined zones, so if a vehicle has not obtained the zone definition, it does not broadcast its position.

Fig. 4. The zone-based vehicle communication (Algorithm 2)

Both Algorithm 1 and Algorithm 2 are based on the zone definition. The zones are defined for a specific junction, traffic conditions and traffic control strategy. The zones are selected using Algorithm 3, which takes into consideration the relative traffic control effectiveness for the considered time period (t), measured by the EF function (5).
EF(td, td′, tv, tv′, tt, tt′) = (1/3) · ( (td − td′)/max(td, td′) + (tv′ − tv)/max(tv, tv′) + (tt − tt′)/max(tt, tt′) ),    (5)

where (td, tv, tt) and (td′, tv′, tt′) are quality measures returned by the control function C (Eq. 2).
Algorithm 3 (Fig. 5) is divided into two phases. Firstly, the range
at which data are vital for the selected traffic control strategy is found
(lines 1–6). The initial simulation is performed for minimal distance (lines
1–3). Then the distance is extended by the value of minDist parameter as
long as the traffic control effectiveness (5) is increasing (lines 4–5). Line 6
was added to avoid local minimum, which can be registered in the first steps
of algorithm. Then, in second phase, the obtained area is divided, using top-
down strategy, into two zones with different message transmission intervals.
The effectiveness of control strategy cannot decrease below a given
threshold (α). If the division is not possible under given parameter
assumption, the algorithm ends. The minimal length used to track vehicle is
defined as minDist and it is related with the used localization system and
size of vehicles. The division algorithm (phase 2) was illustrated in Fig. 6.
In first step (a) the performance of traffic control for single zone, with the
maximum size and the most frequent transmissions (1 s), was calculated
(Ci). Then, the zone is divided into two equal-length zones with various
update time, i.e., 1 and 2 s (b). The traffic control performance for newly
created zones is calculated. Then the performances for two zone settings are
compared by the EF function. If the traffic control performance is not
decreased below given threshold (α) the first zone (closer to junction) can be
Zone-Based VANET Transmission Model for Traffic Signal Control 441
narrowed. In opposite situation (c) the first zone is enlarged. If the divided
1
area is smaller than the defined threshold minDist (d) the division ends.
The end of zone is determined
450 M. Bernas and B. P-
laczek

Fig. 5. The zones search Algorithm (3)

Fig. 6. An example of the zone finding algorithm


After the first zone is found, the rest of the interval is further divided (e) to find the zones with higher transmission intervals (2, 4, 8 s). The algorithm ends when the remaining interval is smaller than the minDist parameter. A simplified sketch of this division is given below.
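The division can be viewed as a bisection over the end of each zone. The sketch below is a simplified reading of Fig. 6: evaluate_zones() stands for a simulation returning the (td, tv, tt) measures for a given zone layout, ef() is the helper defined earlier, and both the bisection form and the handling of the tail interval are assumptions made for illustration.

```python
def split_zones(evaluate_zones, total_range, min_dist, alpha, intervals=(1, 2, 4, 8)):
    """Phase 2 of Algorithm 3 (simplified sketch): for each pair of successive
    transmission intervals, find the closest zone boundary at which the control
    quality does not drop by more than alpha w.r.t. the single dense zone."""
    baseline = evaluate_zones([(total_range, intervals[0])])   # step (a): Ci
    zones, start = [], 0.0
    for dense, sparse in zip(intervals, intervals[1:]):
        if total_range - start < min_dist:                     # remaining interval too small
            break
        lo, hi = start, total_range                            # candidate end of the dense zone
        while hi - lo > min_dist:                              # stop at minDist resolution (d)
            mid = (lo + hi) / 2.0                              # equal-length split (b)
            trial = zones + [(mid, dense), (total_range, sparse)]
            q = evaluate_zones(trial)
            drop = -ef(baseline[0], q[0], baseline[1], q[1], baseline[2], q[2])
            if drop <= alpha:
                hi = mid                                       # quality kept -> narrow the dense zone
            else:
                lo = mid                                       # quality lost -> enlarge it (c)
        zones.append((hi, dense))
        start = hi
    if start < total_range:                                    # step (e): tail gets the next interval
        tail = intervals[len(zones)] if len(zones) < len(intervals) else intervals[-1]
        zones.append((total_range, tail))
    return zones
```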

3 Experiments
To illustrate the robustness of the proposed solution, three state-of-the-art traffic control strategies were used (LH [9], SOTL [7] and Backpressure [5]) with various datasets delivered from the VANET. As input, the control strategies receive the data describing vehicle positions, velocities and roads, in accordance with Algorithm 1. The control strategies were implemented in Matlab and integrated with a SUMO simulation of a road network containing four intersections (Fig. 7). The intensity of the traffic flow in the network model is determined by the parameter q in vehicles per second. This parameter refers to all traffic streams entering the road network for t = 1000 s. At each time step vehicles are randomly generated with a probability equal to the intensity q in all traffic lanes of the network model. In this research the traffic intensity changes within a day to model the rural traffic characteristic, q = (0.05, 0.2).
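The per-step generation rule can be read as an independent Bernoulli trial for every entry lane. The snippet below is a minimal sketch under that reading; the lane names and the simple daily profile interpolating q between 0.05 and 0.2 are assumptions, not the exact SUMO demand definition used in the experiments.

```python
import random

def spawn_vehicles(step, entry_lanes, q_min=0.05, q_max=0.2, period=1000):
    """At each simulation step, generate a vehicle on an entry lane with a
    probability equal to the current intensity q [veh/s]; q follows an assumed
    triangular daily profile between q_min and q_max over the simulated period."""
    phase = (step % period) / period
    q = q_min + (q_max - q_min) * (1.0 - abs(2.0 * phase - 1.0))
    return [lane for lane in entry_lanes if random.random() < q]

# Example with made-up lane identifiers for the four-intersection network
for t in range(3):
    print(spawn_vehicles(t, ["n_in", "s_in", "e_in", "w_in"]))
```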
The initial experiment was conducted to find a borderline for the analyzed method. Thus, a constant transmission time was used for the whole area, without zones. The average results of 20 simulations are presented in Fig. 8. A transmission range below 30 m gives unpredictable results in the case of SOTL and LH. For values between 70 and 110 m both strategies are stable. In the case of the Backpressure strategy, the best results are obtained for a relatively small communication distance. Intuitively, the number of messages grows with the communication distance and decreases when the time interval between successive transmissions increases. Algorithm 3 was used to find the zones.
The zones were searched for minStep = 10 m and α = 0, so no loss of control quality was allowed. The obtained results of 20 simulations are presented as box plots in Fig. 9 and compared with a fixed interval of message transmission. The box plots show the mean (line inside a box), the 1st/3rd quartiles (box) and the minimal/maximal values (error bars).
Fig. 7. The simulation test-bed in the SUMO environment: (a) road network, (b) selected junction.

Fig. 8. Communication distance vs. travel time and number of messages: (a) SOTL,
(b) LH, (c) Backpressure.

The proposed zone model enables finding a robust solution despite the randomness of vehicle positions within each zone. It also allows maintaining the performance while decreasing the number of sent messages. The Backpressure strategy is least affected by random factors, while the more precise algorithms are more vulnerable; however, their results tend to be closer to optimal. The top-down strategy can give a suboptimal solution, therefore the results were compared against an exhaustive search. The results of the exhaustive zone search are presented in Table 1.
The last row of each group in Table 1 shows the result obtained for the best zone selection when only one zone is used, i.e., the transmission interval in the selected zone is constant and equal to 1 s. The remaining rows in Table 1 show all results for which the optimal performance of the control strategies was obtained.

Fig. 9. The zone model applied to the state-of-the-art algorithms.

Table 1. The zones generated using the exhaustive strategy.

Strategy Zone (Z) Delay [s] Velocity [m/s] Travel time [s] Frames [1]


SOTL (10,1),(30,2),(50,4),(110,8) 4631 4,57 11404 5698
(10,1),(30,2),(70,4),(110,8) 4631 4,57 11404 5825
(10,1),(30,2),(110,8) 4631 4,57 11404 5949
(10,1),(50,2),(110,8) 4631 4,57 11404 6115
(10,1),(50,2),(70,4),(110,8) 4631 4,57 11404 6319
(10,1),(50,2),(110,4) 4631 4,57 11404 6376
(10,1),(70,2),(90,4),(110,8) 4631 4,57 11404 6586
(10,1),(90,2),(110,4) 4631 4,57 11404 6815
(10,1),(110,2) 4631 4,57 11404 6916
(110,1) 4708 4,56 11648 11602
LH (30,1),(70,2) 2935 5,34 9891 7979
(70,1) 3119 5,23 10123 9319
Backpress (30,1) 7994 2,35 22206 10253
(30,2) 7994 2,35 22206 5053
(30,8) 7994 2,35 22206 1268
(30,1) 7994 2,35 22206 10253
The solution with the fewest messages sent is the one selected by the proposed zone-finding Algorithm 3. In all three cases, the same overall distance gives the best result with several zones as with one zone. In the case of the LH strategy only one additional interval was found. The LH strategy forecasts vehicle movement, thus precise vehicle positions are vital to its performance. Nevertheless, beyond 30 m the transmission interval can be increased to 2 s without influencing the control strategy. In the case of the SOTL algorithm, where vehicle delays and queue lengths are the most important features, the zone definition changes.
The least expensive communication assumes that the data collected at distances up to 10 m are vital, and that with growing distance the precise position of vehicles becomes less important. The last control strategy (Backpressure) balances the number of vehicles in separate traffic lanes, thus the vehicle count is important while the exact positions of vehicles can be ignored; consequently, the tracking length is the shortest. The distance must only be sufficient to register each vehicle, and in this case a distance of 30 m is sufficient. The zones reflect the character of the traffic control strategy: the more complex the control strategy, the more data it requires. The proposed method of zone definition reduces the number of messages exchanged between vehicles and the RSU by 50%, 14% and 10% for the Backpressure, SOTL and LH strategies, respectively. Additional research was conducted to analyze the influence of the α parameter on the effectiveness of Algorithm 3 (Fig. 10). The values on the X axis describe how many loops of the first or second phase of the algorithm were executed.

Fig. 10. Simulation results: (a) Backpressure, (b) SOTL, (c) LH.

The results in Fig. 10 show that both the number of messages and the control quality decrease as the α parameter increases. For small values of α ∈ (0, 0.1) the decrease of the message number is especially visible for the more complex algorithms (SOTL and LH). For α > 0.1 the traffic control strategy is no longer optimal, thus the travel times of vehicles are longer and the number of sent messages does not decrease so rapidly. In the case of the Backpressure method the data are already reduced significantly and further reduction has a great impact on the control performance; thus, for small α values the result does not change.
In the case of the more advanced control strategies, the zone analysis allows finding a balance between the transmission burden and the traffic control quality.

Fig. 11. The model of Francuska street: (a) overview, (b)–(d) junctions.

Table 2. The simulation results for Francuska street

Junction Control strategy Distance [m] Trans. interval [s] Delay [s] Velocity [m/s] Travel time [s] Messages [1]
Junction 1 SOTL 110 1 10680 3,02 29506 26151
LH 30 1 11707 2,38 37198 18820
Backpressure 120 1 22439 1,42 63220 60008
SOTL (zone) 10, 30, 110 1,2,8 10019 2,89 30269 11826
LH (zone) 20, 30 1,2 11742 2,38 37732 17837
Backpressure (zone) 30, 120 1,8 21107 1,54 58368 29576
Junction 2 SOTL 50 1 8665 3,33 24874 16464
LH 60 1 13726 2,38 35060 25591
Backpressure 30 1 14964 1,52 53978 21081
SOTL (zone) 30, 50 1,8 8626 3,29 24727 12998
LH (zone) 50, 60 1,4 13762 2,38 34720 23539
Backpressure (zone) 20, 30 1,8 14879 1,52 54462 19828
Junction 3 SOTL 90 1 3062 5,34 14080 12574
LH 120 1 665 7,79 9409 7880
Backpressure 80 1 1007 7,77 9500 8137
SOTL (zone) 10, 90 1,8 2837 5,69 12915 2767
LH (zone) 60,90,120 1,4,8 663 7,56 9736 7003
Backpressure (zone) 30,80 1,4 995 7,79 9518 5021
456 M. Bernas and B. P-
laczek
Further experiments were conducted to verify the proposed approach in a realistic scenario of a road network with various junctions. Three junctions on Francuska street in Katowice, Poland, were selected for these tests. The simulation model is presented in Fig. 11. The traffic volume was set based on the real traffic characteristics of this street during work days. As in the previous experiment, three traffic control strategies were considered with and without the proposed zone-based transmission method. The average results of 20 simulations for this scenario are presented in Table 2. The proposed zone-based transmission allowed decreasing the number of messages for all three traffic control strategies, while retaining the high quality of traffic control.

However, as in the first simulation scenario, the smallest message reduction was registered for the LH strategy (8%), while the biggest reductions were observed in the case of the SOTL and Backpressure strategies (51% and 31%, respectively). The experimental results are promising and clearly show that the data transmission can be decreased without incorporating sophisticated suppression algorithms. However, the zones could be dynamically changed according to the traffic intensity by using more sophisticated tracking and prediction mechanisms.

4 Conclusion
VANET is considered a useful source of input data for traffic signal control strategies. In this paper a zone-based transmission model is proposed for VANETs, which enables effective data collection for traffic control applications. Three state-of-the-art traffic control strategies were investigated: SOTL, LH and Backpressure. The results show that it is possible to reduce the number of messages sent by vehicles by using various time intervals between data transmissions. The time intervals were selected based on the distance to the junction. The proposed method reduces the transmission burden by sending only the data that are vital for a given control strategy. The traffic control quality was measured by total delay, travel time and average vehicle speed. The proposed algorithm uses the α parameter to balance the number of messages and the quality of traffic control. The proposed concept of zone-based transmission is promising. It reduces the number of messages sent by vehicles to a minimum and makes the vehicles aware of the traffic control strategy and its data requirements. Future work will address using multiple control strategies based on the obtained data. Another research area will focus on further reducing the data transmission by implementing zone-dependent data suppression methods. Finally, the zones can be defined not only for all-day traffic but also for specific time periods, which could further reduce data transmission.

References
1. Aslam, M.U., et al.: An experimental investigation of CNG as an alternative fuel for a retrofitted gasoline vehicle. J. Fuel Sci. Technol. Fuel Energy 85(5–6), 717–724 (2006)

2. Wang, Q., Wang, L., Wei, G.: Research on traffic light adjustment based on compatibility graph of traffic flow. Intell. Hum. Mach. Syst. Cybern. (IHMSC) 1, 88–91 (2011)
3. Płaczek, B.: A traffic model based on fuzzy cellular automata. J. Cell. Automata 8(3–4), 261–282 (2013)
4. Qin, Z., Chao, P., Jingmin, S., Pengfei, D., Yu, B.: Cooperative traffic light control based on semi-real-time processing. J. Autom. Control Eng. 4(1), 40–46 (2016)
5. Le, T., Kovács, P., Walton, N., Vu, H.L., Andrew, L.L., Hoogendoorn, S.S.: Decentralized signal control for urban road networks. Transp. Res. Part C Emerg. Technol. 58, 431–450 (2015)
6. Helbing, D., Lämmer, S., Lebacque, J.-P.: Self-organized control of irregular or perturbed network traffic. In: Deissenberg, C., Hartl, R.F. (eds.) Optimal Control and Dynamic Games. Advances in Computational Management Science, vol. 7, pp. 239–274. Springer, USA (2005)
7. Cools, S.-B., Gershenson, C., D'Hooghe, B.: Self-organizing traffic lights: a realistic simulation. In: Prokopenko, M. (ed.) Advances in Applied Self-Organizing Systems. Advanced Information and Knowledge Processing, pp. 45–55. Springer, London (2013)
8. Houli, D., Zhiheng, L., Yi, Z.: Multiobjective reinforcement learning for traffic signal control using vehicular adhoc network. J. Adv. Sig. Process. 2010, 7 (2010)
9. Płaczek, B.: A self-organizing system for urban traffic control based on predictive interval microscopic model. Eng. Appl. Artif. Intell. 34, 75–84 (2014)
10. Choudekar, P., Banerjee, S., Muju, M.K.: Implementation of image processing in real time traffic light control. In: Proceedings of the 3rd International Conference on Electronics Computer Technology (ICECT), Kanyakumari, vol. 2, pp. 94–98 (2011)
11. Toor, Y., Muhlethaler, P., Laouiti, A., Fortelle, A.: Vehicle ad hoc networks: applications and related technical issues. IEEE Commun. Surv. Tutorials 10(1–4), 74–88 (2008)
12. Kwatirayo, S., Almhana, J., Liu, Z.: Adaptive traffic light control using VANET: a case study. In: Proceedings of the 9th International Conference on Wireless Communications and Mobile Computing Conference (IWCMC), Sardinia, pp. 752–757 (2013)
13. Abbas, M.K., Karsiti, M.N., Napiah, M., Samir, B.B.: Traffic light control using VANET system architecture. In: Proceedings of the National Postgraduate Conference (NPC), Kuala Lumpur, Malaysia, pp. 1–6 (2011)
14. Song, M., Wang, Y.: Human centricity and information granularity in the agenda of theories and applications of soft computing. Appl. Soft Comput. 27, 610–613 (2014). doi:10.1016/j.asoc.2014.04.040
15. Sun, M.-T., Feng, W.-C., Lai, T.-H., Yamada, K., Okada, H., Fujimura, K.: GPS-based message broadcasting for inter-vehicle communication. In: Proceedings of the International Conference on Parallel Processing, pp. 279–286 (2000)
16. Płaczek, B., Bernas, M.: Uncertainty-based information extraction in wireless sensor networks for control applications. Ad Hoc Netw. 14, 106–117 (2014)
17. Bernas, M.: WSN power conservation using mobile sink for road traffic monitoring. In: Kwiecień, A., Gaj, P., Stera, P. (eds.) Computer Networks. Communications in Computer and Information Science, vol. 370, pp. 476–484. Springer, Heidelberg (2013)