LINE-python
Disclaimer: LINE for Python is in alpha version; some features are still under development and are therefore marked in the manual as TODO.
1 Introduction
1.1 What is LINE?
1.2 Obtaining the latest release
1.3 References
1.4 Contact and credits
1.5 Copyright and license
1.6 Acknowledgement
2 Getting started
2.1 Installation and support
2.1.1 Software requirements
2.1.2 Documentation
2.1.3 Getting help
2.2 Getting started examples
2.2.1 Controlling verbosity
2.2.2 Model gallery
2.2.3 Example 1: A M/M/1 queue
2.2.4 Example 2: A multiclass M/G/1 queue
2.2.5 Example 3: Machine interference problem
2.2.6 Example 4: Round-robin load-balancing
2.2.7 Example 5: Modelling a re-entrant line
2.2.8 Example 6: A queueing network with caching
2.2.9 Example 7: Response time distribution and percentiles
2.2.10 Example 8: Optimizing a performance metric
2.2.11 Example 9: Studying a departure process
2.2.12 Example 10: Evaluating a CTMC symbolically
3 Network models
3.1 Network object definition
4 Analysis methods
4.1 Performance metrics
4.2 Steady-state analysis
4.2.1 Station average performance
4.2.2 Station response time distribution
4.2.3 System average performance
4.3 Specifying states
4.3.1 Station states
4.3.2 Network states
4.3.3 Initialization of transient classes
4.3.4 State space generation
4.4 Transient analysis
4.4.1 Computing transient averages
4.4.2 First passage times into stations
4.5 Sample path analysis
4.6 Sensitivity analysis and numerical optimization
4.6.1 Fast parameter update
4.6.2 Refreshing a network topology with non-probabilistic routing
4.6.3 Saving a network object before a change
5 Network solvers
5.1 Overview
5.2 Solution methods
5.2.1 LINE
5.2.2 CTMC
5.2.3 FLUID
5.2.4 JMT
5.2.5 MAM
5.2.6 MVA
5.2.7 NC
5.2.8 SSA
5.3 Supported language features and options
5.3.1 Solver features
5.3.2 Class functions
5.3.3 Node types
5.3.4 Scheduling strategies
5.3.5 Statistical distributions
5.3.6 Solver options
5.4 Solver maintenance
7 Random environments
7.1 Environment object definition
7.1.1 Specifying the environment
7.1.2 Specifying a reset policy
7.1.3 Specifying system models for each stage
7.2 Solvers
7.2.1 ENV
A Examples
Chapter 1
Introduction
A key feature of LINE is that it decouples the model description from the solvers used for its solution. That is, LINE implements model-to-model transformations that automatically translate the model specification into the input format (or data structure) accepted by the target solver. External solvers supported by LINE include Java Modelling Tools (JMT; http://jmt.sf.net) and LQNS (http://www.sce.carleton.ca/rads/lqns/). Native model solvers are instead based on formalisms and techniques such as:
• continuous-time Markov chains (CTMC)
• fluid ordinary differential equations (FLUID)
• matrix-analytic methods (MAM)
• mean value analysis (MVA)
• normalizing constant methods (NC)
• stochastic simulation (SSA)
Each solver encodes a general solution paradigm and can implement both exact and approximate analysis methods. For example, the MVA solver implements both exact mean value analysis (MVA) and approximate mean value analysis (AMVA). The offered methods typically differ in accuracy, computational cost, and the subset of model features they support. A special solver (AUTO) is supplied that provides an automated recommendation on which solver to use for a given model.
The above techniques can be applied to models specified in the following formats:
• LINE modeling language. This is a domain-specific object-oriented language designed to resemble the abstractions available in JMT's queueing network simulator (JSIM).
• Layered queueing network models (LQNS XML format). LINE is able to solve a sub-class of layered queueing network models, either specified using the LINE modeling language or according to the XML metamodel of the LQNS solver.
• JMT simulation models (JSIMg, JSIMw formats). LINE is able to import and solve queueing network models specified using JSIMgraph and JSIMwiz. LINE models can be exported to, and visualized with, JSIMgraph and JSIMwiz.
• Performance Model Interchange Format (PMIF XML format). LINE is able to import and solve closed queueing network models specified using PMIF v1.0.
LINE 2.0.x has been tested using Python version 3.10, and IntelliJ DataSpell 2024.1.1 for the Jupyter notebooks.
1.3 References
To cite the LINE solver, we recommend referencing:
• G. Casale. “Integrated Performance Evaluation of Extended Queueing Network Models with LINE”,
in Proc. of WSC 2020, ACM Press, Dec 2020. This paper presents the technical approach used to
develop Line 2.0.x.
The following papers discuss LINE or use it in applications:
• R.-A. Dobre, Z. Niu, G. Casale. Approximating Fork-Join Systems via Mixed Model Transformations. Proc. of WOSP-C, 2024. This paper presents the fork-join approximation technique implemented in SolverMVA.
• G. Casale, Y. Gao, Z. Niu, L. Zhu. LN: a Meta-Solver for Layered Queueing Network Analysis,
Proceedings of QEST, 22 pages, Sep 2022. This paper gives a short introduction to the Layered
Queueing Network solver available in Line 2.0.x. Later extended into ACM Transactions on Modeling
and Computer Simulation, 2024.
• Y. Gao, G. Casale. JCSP: Joint Caching and Service Placement for Edge Computing Systems, in
Proc. of IEEE/ACM IWQoS, 10 pages, June 2022. This work introduces caching layers in Layered
Queueing Networks within Line’s LN solver.
• J. Ruuskanen, T. Berner, K.-E. Årzén et al., Improving the mean-field fluid model of processor sharing queueing networks for dynamic performance models in cloud computing, Performance Evaluation (2021). This work applies Line-JMT model-to-model transformations to validate mean-field approximations for product-form queueing networks.
• Z. Niu, G. Casale. A Mixture Density Network Approach to Predicting Response Times in Layered
Systems, in Proceedings of IEEE MASCOTS, 8 pages, Nov 2021. This work applies mixture density
networks for end-to-end latency percentile estimations in Layered Queueing Networks using Line.
• Y. Chen, G. Casale. Deep Learning Models for Automated Identification of Scheduling Policies, in Proceedings of IEEE MASCOTS, 8 pages, Nov 2021. This work uses LINE's SSA solver to generate scheduling traces for training neural networks to infer scheduling policies.
• G. Casale. “Automated Multi-paradigm Analysis of Extended and Layered Queueing Models with
LINE”, in Proc. of ACM/SPEC 2019, ACM Press, Apr 2019. This paper gives a short introduction to
Line 2.0.0.
• D. J. Dubois, G. Casale. “OptiSpot: minimizing application deployment cost using spot cloud resources”, in Cluster Computing, Volume 19, Issue 2, pages 893-909, 2016. This paper uses Line to determine bidding costs in spot VMs.
• R. Osman, J. F. Pérez, and G. Casale. “Quantifying the Impact of Replication on the Quality-of-Service in Cloud Databases”. Proceedings of the IEEE International Conference on Software Quality, Reliability and Security (QRS), 286-297, 2016. This paper uses Line to model the Amazon RDS database.
• C. Müller, P. Rygielski, S. Spinner, and S. Kounev. Enabling Fluid Analysis for Queueing Petri Nets
via Model Transformation, Electr. Notes Theor. Comput. Sci, 327, 71–91, 2016. This paper uses
Line to analyze Descartes models used in software engineering.
• J. F. Pérez and G. Casale. “Assessing SLA compliance from Palladio component models,” in Proceedings of the 2nd Workshop on Management of resources and services in Cloud and Sky computing (MICAS), IEEE Press, 2013. This paper uses Line to analyze Palladio component models used in model-driven software engineering.
https://sourceforge.net/p/line-solver/code/ci/master/tree/LICENSE
1.6 Acknowledgement
LINE has been partially funded by the European Commission grants FP7-318484 (MODAClouds), H2020-644869 (DICE), H2020-825040 (RADON), and by the EPSRC grant EP/M009211/1 (OptiMAM).
Chapter 2
Getting started
Ensure that the files are decompressed (or checked out) in the installation folder.
2. From now on, you will need to run all the commands from the python folder. Install the necessary Python libraries by running
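One way to do this, installing the packages listed under the software requirements below (the exact command provided with the release may differ), is:
pip install numpy scipy pandas matplotlib jpype1 enum-tools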
3. LINE is now ready to use. For example, you can run a basic M/M/1 model using
python3 mm1.py
4. Jupyter notebooks are also available under the examples and gettingstarted folders.
To run LINE within your Python program, import the line_solver module at the beginning of the file, e.g.,
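A minimal sketch, assuming the package is importable under the name line_solver:
from line_solver import *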
– enum_tools
– jpype1
– numpy
– pandas
– matplotlib
– scipy
• Jupyter notebooks have been developed and tested using IntelliJ DataSpell 2024.1.1.
Partial Java ports of these libraries have been implemented or are automatically downloaded or shipped with LINE:
• Java Modelling Tools (http://jmt.sf.net): version 1.2.4 or later. The latest version is automatically downloaded at the first call of the JMT solver.
• M3A (https://github.com/imperial-qore/M3A): version 1.0.0. This release is already included under the
lib subfolder.
Optional dependencies recommended to utilize all features available in LINE are as follows:
2.1.2 Documentation
This manual introduces the main concepts to define models in LINE and run its solvers. The document includes in particular several tables that summarize the features currently supported in the modeling language and by individual solvers. Additional resources are as follows:
• PDF versions of all manuals (Java, MATLAB, Python): https://sourceforge.net/p/line-solver/code/ci/master/
tree/doc
• LayeredNetwork models are layered queueing networks, i.e., models consisting of layers, each
corresponding to a Network object, which interact through synchronous and asynchronous calls.
Technical background on layered queueing networks can be found in [45].
The goal of the remainder of this chapter is to provide simple examples that explain the basics of how these models can be analyzed in LINE. More advanced forms of evaluation, such as probabilistic or transient analyses, are discussed in later chapters. Additional examples are supplied under the examples and gallery folders.
GlobalConstants.setVerbose(VerboseLevel.DEBUG)
SolverMVA(gallery_mm1_tandem(),'method','exact').getAvgTable()
The examples in the gallery may also be used as templates to accelerate the definition of basic models. Example 9 later shows a gallery instantiation of an M/E2/1 queue.
1. Definition of nodes
2. Definition of job classes
3. Definition of the topology
4. Solution
In the example, source and sink are arrival and departure points of jobs; queue is a queueing station with FCFS scheduling; jobclass defines an open class of jobs that arrive, get served, and leave the system; Exp(2.0) defines an exponential distribution with rate parameter λ = 2.0; finally, the getAvgTable command solves for average performance measures with JMT's simulator, using for reproducibility a specific seed for the random number generator.
The result is a table with mean performance measures including: the number of jobs in the station either queueing or receiving service (QLen); the utilization of the servers (Util); the mean response time for a visit to the station (RespT); the mean residence time, i.e. the mean response time cumulatively spent at the station over all visits (ResidT); and the mean throughput of departing jobs (Tput).
One can verify that this matches JMT results by first typing
model.jsimgView()
which will open the model inside JSIMgraph, as shown in Figure 2.1. From this screen, the simulation can
be started using the green “play” button in the JSIMgraph toolbar. A pre-defined gallery of classic models
is also available, for example
model = gallery_mm1()
Figure 2.1: M/M/1 example in JSIMgraph launched from the DataSpell IDE
gives output
If we specify only 'Queue' or 'Class1', tget will return all entries corresponding to that station or class. Moreover, the following syntax is also valid
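For instance, a sketch assuming avgTable holds the table returned earlier by getAvgTable (the exact tget signature may differ in the alpha release):
tget(avgTable, queue, jobclass)   # entry for station 'Queue' and class 'Class1', passing the objects
tget(avgTable, queue)             # all entries for station 'Queue'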
If we specify only queue or only jobclass, tget will return all entries corresponding to that station or class.
model = Network('M/G/1')
source = Source(model,'Source')
queue = Queue(model, 'Queue', SchedStrategy.FCFS)
sink = Sink(model,'Sink')
The next step consists in defining the classes. We fit automatically from mean and squared coefficient of variation (i.e., SCV = variance/mean²) an Erlang distribution and use the Replayer distribution to request that the specified trace is read cyclically to obtain the service times of class 2
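A sketch of this block (the class name strings and the Erlang moments are illustrative; jobclass1 and jobclass2 are the variables used below):
jobclass1 = OpenClass(model, 'Class1')
jobclass2 = OpenClass(model, 'Class2')
queue.setService(jobclass1, Erlang.fitMeanAndSCV(1.0, 0.5))   # Erlang fitted from mean and SCV
queue.setService(jobclass2, Replayer('example_trace.txt'))    # trace-driven service times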
source.setArrival(jobclass1, Exp(0.5))
source.setArrival(jobclass2, Exp(0.5))
Note that the example_trace.txt file consists of a single column of doubles, each representing a service time value, e.g.,
1.2377474e-02
4.4486055e-02
1.0027642e-02
2.0983173e-02
...
We now specify a linear route through source, queue, and sink for both classes
P = model.initRoutingMatrix()
P.set(jobclass1,Network.serialRouting(source,queue,sink))
P.set(jobclass2,Network.serialRouting(source,queue,sink))
model.link(P)
jmtAvgTable = SolverJMT(model,'seed',23000).getAvgTable()
which gives
We wish now to validate this value against an analytical solver. Since jobclass2 has trace-based service
times, we first need to revise its service time distribution to make it analytically tractable, e.g., we may ask
LINE to fit an acyclic phase-type distribution [4] based on the trace
queue.setService(jobclass2, Replayer('example_trace.txt').fitAPH())
We can now use a Continuous Time Markov Chain (CTMC) to solve the system, but since the state space
is infinite in open models, we need to truncate it to be able to use this solver. For example, we may restrict
to states with at most 2 jobs in each class, checking with the verbose option the size of the resulting state
space
ctmcAvgTable2 = SolverCTMC(model,'cutoff',2,'verbose',True).getAvgTable()
which gives
However, we see from the comparison with JMT that the errors of SolverCTMC are rather large. Since
the truncated state space consists of just 46 states, we can further increase the cutoff to 4, trading a slower
solution time for higher precision
ctmcAvgTable4 = SolverCTMC(model,'cutoff',4,'verbose',True).getAvgTable()
which gives
To gain more accuracy, we could either keep increasing the cutoff value or, if we wish to compute an exact
solution, we may call the matrix-analytic method (MAM) solver instead. SolverMAM uses the repetitive
structure of the CTMC to exactly analyze open systems with an infinite state space, calling
SolverMAM(model).getAvgTable()
we get
The current MAM implementation is primarily constructed on top of Java ports of the BuTools solver [29]
and the SMC solver [3].
model = Network('MRP')
delay = Delay(model, 'WorkingState')
queue = Queue(model, 'RepairQueue', SchedStrategy.FCFS)
queue.setNumberOfServers(2)
cclass = ClosedClass(model, 'Machines', 3, delay)
delay.setService(cclass, Exp(0.5))
queue.setService(cclass, Exp(4.0))
model.link(Network.serialRouting(delay, queue))
solver = SolverCTMC(model)
ctmcAvgTable = solver.getAvgTable()
Here, delay appears in the constructor of the closed class to specify that a job will be considered completed once it returns to the delay (i.e., the machine returns to the working state). We say that the delay is thus the reference station of cclass. The above code prints the following result
As before, we can inspect and analyze the model in JSIMgraph using the command
model.jsimgView()
Figure 2.2 illustrates the result, demonstrating the automated definition of the closed class.
We can now also inspect the CTMC in more detail as follows
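A sketch of this step, based on the getStateSpace and printInfGen calls used below (the exact getter signatures are assumptions):
stateSpace = solver.getStateSpace()
infGen = solver.getGenerator()
print(stateSpace)
print(infGen)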
which produces as output the state space of the model and the infinitesimal generator of the CTMC
[[0. 1. 2.]
[1. 0. 2.]
[2. 0. 1.]
[3. 0. 0.]]
[[-8. 8. 0. 0. ]
[ 0.5 -8.5 8. 0. ]
[ 0. 1. -5. 4. ]
[ 0. 0. 1.5 -1.5]]
For example, the first state (0 1 2) consists of two components: the initial 0 denotes the number of jobs in
service in the delay, while the remaining part is the state of the FCFS queue. In the latter, the 1 means
that a job of class 1 (the only class in this model) is in the waiting buffer, while the 2 means that there are
two jobs in service at the queue.
As another example, the second state (1 0 2) is similar, but one job has completed at the queue and has then moved to the delay, concurrently triggering an admission into service for the job that was in the queue buffer. As a result, the buffer is now empty. The corresponding transition rate in the infinitesimal generator matrix is in row 1 and column 2 of InfGen, which has value 8.0, i.e., the sum of the completion rates at the queue for each server in the first state; here indexes 1 and 2 are the rows in StateSpace associated with the source and destination states.
On this and larger infinitesimal generators, we may also list individual non-zero transitions as follows
SolverCTMC.printInfGen(infGen, stateSpace)
gives
The above printout helps in matching the state transitions to their rates.
To avoid having to inspect the StateSpace variable to determine which station a particular column refers to, we can alternatively use the more general invocation
gives
{0: array([[0.],
[1.],
[2.],
[3.]]),
1: array([[1., 2.],
[0., 2.],
[0., 1.],
[0., 0.]])}
which automatically splits the state space into its constituent parts for each stateful node.
A further observation is that model.getStateSpace() forces the regeneration of the state space
at each invocation, whereas the equivalent function in the CTMC solver, solver.getStateSpace(),
returns the state space cached during the solution of the CTMC.
model = Network('RRLB')
source = Source(model, 'Source')
lb = Router(model, 'LB')
queue1 = Queue(model, 'Queue1', SchedStrategy.PS)
queue2 = Queue(model, 'Queue2', SchedStrategy.PS)
sink = Sink(model, 'Sink')
Let us then define the class block by setting exponentially-distributed inter-arrival times and service times,
e.g.,
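A sketch of this block (the rate values are illustrative):
jobclass = OpenClass(model, 'Class1')
source.setArrival(jobclass, Exp(1.0))
queue1.setService(jobclass, Exp(2.0))
queue2.setService(jobclass, Exp(2.0))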
We now wish to express the fact that the router applies a round-robin strategy to dispatch jobs to the queues. Since this is a non-probabilistic routing strategy, we need to adopt a slightly different style to declare the routing topology, as we can no longer specify routing probabilities. First, we indicate the connections between the nodes, using the addLinks function:
model.addLinks([[source, lb],
[lb, queue1],
[lb, queue2],
[queue1, sink],
[queue2, sink]])
lb.setRouting(jobclass, RoutingStrategy.RAND)
At this point, all nodes are automatically configured to route jobs with equal probabilities on the outgoing
links (RoutingStrategy.RAND policy). If we solve the model at this point, we see that the response
time at the queues is around 0.66s. Running JMT
jmtAvgTable = SolverJMT(model,'seed',23000).getAvgTable()
we get
After resetting the internal data structures, which is required before modifying a model, we can ask LINE to solve the model again, this time using a round-robin policy at the router.
model.reset()
lb.setRouting(jobclass, RoutingStrategy.RROBIN)
SolverJMT(model,'seed',23000).getAvgTable()
gives
model = Network('RL')
queue = Queue(model, 'Queue', SchedStrategy.FCFS)
K = 3
N = (1, 0, 0)
jobclass = []
for k in range(K):
jobclass.append(ClosedClass(model, 'Class' + str(k), N[k], queue))
queue.setService(jobclass[k], Erlang.fitMeanAndOrder(1+k, 2))
P = model.initRoutingMatrix()
P.set(jobclass[0], jobclass[1], queue, queue, 1.0)
P.set(jobclass[1], jobclass[2], queue, queue, 1.0)
P.set(jobclass[2], jobclass[0], queue, queue, 1.0)
model.link(P)
The corresponding JMT model is shown in Figure 2.4, where it can be seen that the class-switching rule is
automatically enforced by introduction of a ClassSwitch node in the network.
We can now compute the performance indexes for the different classes, for example using LINE's normalizing constant solver (SolverNC)
ncAvgTable = SolverNC(model).getAvgTable()
gives
Suppose now that the job is considered completed, for the sake of computing system performance metrics, only when it departs the queue in class K (here Class3). By default, LINE will return system-wide performance metrics using the getAvgSysTable method, i.e.,
SolverNC(model).getAvgSysTable()
gives
This method identifies the model chains, i.e., groups of classes that can exchange jobs with each other, but
not with classes in other chains. Since the job can switch into any of the three classes, in this model there is
a single chain comprising the three classes.
We see that the throughput of the chain is 0.5, which means that LINE is counting every departure from the queue in any class as a completion for the whole chain. This is incorrect for our model since we want to
count completions only when jobs depart in Class3. To obtain this behavior, we can tell the solver that passages of classes 1 and 2 through the reference station should not be counted as completions
jobclass[0].completes = False
jobclass[1].completes = False
This modification then gives the correct chain throughput, matching the one of Class3 alone
ncAvgSysTable = SolverNC(model).getAvgSysTable()
gives
Node block As usual, we begin by defining the nodes. Here a delay node will be used to describe the time
spent by the requests in the system, while the cache node will determine hits and misses:
model = Network('model')
clientDelay = Delay(model, 'Client')
cacheNode = Cache(model, 'Cache', 1000, 50, ReplacementStrategy.LRU)
cacheDelay = Delay(model, 'CacheDelay')
Class block We define a set of classes to represent the incoming requests (clientClass), cache hits
(hitClass) and cache misses (missClass). These classes need to be closed to ensure that there is a
single outstanding request from the client at all times:
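A sketch of this block (the class name strings are illustrative; a single job starts in clientClass, while the hit and miss classes are initially empty):
clientClass = ClosedClass(model, 'ClientClass', 1, clientDelay)
hitClass = ClosedClass(model, 'HitClass', 0, clientDelay)
missClass = ClosedClass(model, 'MissClass', 0, clientDelay)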
We then assign the processing times, using the Immediate distribution to ensure that the client issues
immediately the request to the cache:
clientDelay.setService(clientClass, Immediate())
cacheDelay.setService(hitClass, Exp.fitMean(0.2))
cacheDelay.setService(missClass, Exp.fitMean(1.0))
The next step involves specifying that the request uses a Zipf-like distribution (with parameter α = 1.4) to
select the item to read from the cache, out of a pool of 1000 items
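Based on the setRead and Zipf constructs described in Chapter 3, this can be written as:
cacheNode.setRead(clientClass, Zipf(1.4, 1000))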
Finally, we ask that the job should become of class hitClass after a cache hit, and should become of class
missClass after a cache miss:
cacheNode.setHitClass(clientClass, hitClass)
cacheNode.setMissClass(clientClass, missClass)
Topology block Next, in the topology block we setup the routing so that the request, which starts in
clientClass at the clientDelay, then moves from there to the cache, remaining in clientClass
P = model.initRoutingMatrix()
P.set(clientClass, clientClass, clientDelay, cacheNode, 1.0)
Internally to the cache, the job will switch its class into either hitClass or missClass. Upon departure
in one of these classes, we ask it to join in the same class cacheDelay for further processing
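For instance:
P.set(hitClass, hitClass, cacheNode, cacheDelay, 1.0)
P.set(missClass, missClass, cacheNode, cacheDelay, 1.0)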
Lastly, the job returns to clientDelay for completion and start of a new request, which is done by
switching its class back to clientClass
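For instance:
P.set(hitClass, clientClass, cacheDelay, clientDelay, 1.0)
P.set(missClass, clientClass, cacheDelay, clientDelay, 1.0)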
model.link(P)
Solution block To solve the model, since JMT does not support cache modeling, we use the native simulation engine provided within LINE, the SSA solver:
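A sketch of this call (the number of samples and the seed are illustrative, and the option names follow the style used for the other solvers):
ssaAvgTable = SolverSSA(model, 'samples', 20000, 'seed', 23000).getAvgTable()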
The departing flows from the CacheDelay are the miss and hit rates. Thus, the hit rate is 2.4554 jobs per
unit time, while the miss rate is 0.50892 jobs per unit time.
Let us now suppose that we wish to verify the result with a longer simulation, for example with 10 times
more samples. To this aim, we can use the automatic parallelization of SSA
The execution time is longer than usual at the first invocation of the parallel solver due to the time needed to bootstrap the parallel pool, in this example around 22 seconds. Successive invocations of parallel SSA normally take much less, with this example around 7 seconds each.
model = Network("Model")
There is a single class consisting of 5 jobs that circulate between the two stations, taking exponential service
times at both.
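A sketch of the node and class blocks (names and rates are illustrative; the first station is assumed to be a delay, while the second is the PS queue analyzed below):
node = [Delay(model, 'Delay'), Queue(model, 'Queue1', SchedStrategy.PS)]
jobclass = ClosedClass(model, 'Class1', 5, node[0])
node[0].setService(jobclass, Exp(1.0))
node[1].setService(jobclass, Exp(2.0))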
model.link(Network.serialRouting(node[0], node[1]))
We now wish to compare the response time distribution at the PS queue computed analytically with a
fluid approximation against the simulated values returned by JMT. To do so, we call the getCdfRespT
method
# RDfluid = SolverFluid(model).getCdfRespT()
RDsim = SolverJMT(model, 'seed', 23000, 'samples', 10000).getCdfRespT()
TODO. The first column represents the cumulative distribution function (CDF) value F(t) = Pr(T ≤ t), where T is the random variable denoting the response time, while t is the percentile appearing in the corresponding entry of the second column.
For example, to plot the complementary CDF 1 − F (t) we can use the following code
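A sketch, assuming RDsim is indexed by station and class and that each entry is a two-column array [F(t), t] as described above:
import matplotlib.pyplot as plt
cdf = RDsim[1][0]                         # PS queue (station 2), class 1 -- indexing is an assumption
plt.semilogy(cdf[:, 1], 1.0 - cdf[:, 0])  # complementary CDF on a log scale
plt.xlabel('t')
plt.ylabel('Pr(T > t)')
plt.show()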
The graph shows that, although the simulation refers to a transient, while the fluid approximation refers to
steady-state, there is a tight matching between the two response time distributions.
We can also readily compute the percentiles from the RDfluid and RDsim data structures, e.g., for
the 95th and 99th percentiles of the simulated distribution
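For instance, interpolating the simulated CDF (same indexing assumption as above):
import numpy as np
cdf = np.array(RDsim[1][0])
p95 = np.interp(0.95, cdf[:, 0], cdf[:, 1])
p99 = np.interp(0.99, cdf[:, 0], cdf[:, 1])
print(p95, p99)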
That is, 95% of the response times at the PS queue (node 2, class 1) are less than or equal to 27.0222 time
units, while 99% are less than or equal to 41.8743 time units.
def objFun(p):
Within the function definition, we instantiate the two queues and the delay station
model = Network('LoadBalCQN')
delay = Delay(model, 'Think')
queue1 = Queue(model, 'Queue1', SchedStrategy.PS)
queue2 = Queue(model, 'Queue2', SchedStrategy.PS)
We assume that 16 jobs circulate among the nodes, and that the service rates are σ = 1 jobs per unit time at
the delay, and µ1 = 0.75 and µ2 = 0.50 at the two queues:
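A sketch of this block, using the population and rates just stated (the class name string is illustrative):
cclass = ClosedClass(model, 'Class1', 16, delay)
delay.setService(cclass, Exp(1.0))
queue1.setService(cclass, Exp(0.75))
queue2.setService(cclass, Exp(0.50))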
We initially set up a topology with arbitrary values for the routing probabilities between delay and queues, ensuring that jobs completing at the queues return to the delay:
P = model.initRoutingMatrix()
P.set(cclass, cclass, queue1, delay, 1.0)
P.set(cclass, cclass, queue2, delay, 1.0)
model.link(P)
We now return the system response time for the jobs as a function of the routing probability p to choose
queue 1 instead of queue 2:
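A sketch of how the function body could end, resetting the model as in Example 4 and re-linking the routing matrix with probability p (the choice of SolverMVA and the SysRespT column name are assumptions):
# (these statements complete the body of objFun)
model.reset()
P.set(cclass, cclass, delay, queue1, p)
P.set(cclass, cclass, delay, queue2, 1.0 - p)
model.link(P)
return SolverMVA(model).getAvgSysTable()['SysRespT'][0]
The optimizer is then invoked on objFun: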
from scipy import optimize
p_opt = optimize.fminbound(objFun, 0, 1)
print(p_opt)
We are now ready to run the example. The execution returns the optimal value 0.6104878504366782.
We now extract 50,000 samples from simulation based on the underpinning continuous-time Markov chain
The returned data structure supplies information about the stateful nodes (here source and queue) at each
of the 50,000 instants of sampling, together with the events that have been collected at these instants.
As an example, the first two events occur both at timestamp 0 and indicate a departure event from node
1 (the type EventType.DEP maps to event: DEP) followed by an arrival event at node 2 (the type
EventType.ARV maps to event: ARV) which accepts it always (prob: 1).
We are now ready to filter the timestamps of events related to departures from the queue node
We may now for example compute the squared coefficient of variation of this process
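For instance, assuming dep_times holds the filtered departure timestamps collected above:
import numpy as np
iat = np.diff(np.array(dep_times))        # inter-departure times
scv = np.var(iat) / np.mean(iat) ** 2     # squared coefficient of variation
print(scv)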
which evaluates to 0.8750. Using Marshall’s exact formula for the GI/G/1 queue [33], we get a theoretical
value of 0.8750.
Here, the first argument adds a single station to the next, while the second argument requires the presence of a delay station. The network has a single class with 4 circulating jobs.
The getSymbolicGenerator method of the CTMC solver can now be called to obtain the symbolic
generator
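A sketch of this call (the returned values are assumptions, based on the syncInfo structure discussed below):
solver = SolverCTMC(model)
symGen, syncInfo = solver.getSymbolicGenerator()
print(symGen)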
There are therefore 5 states, corresponding to all possible ways of distributing the 4 jobs across the two stations
An event is represented in L INE as a synchronization between an active agent and a passive agent. Typically,
the station that completes a job is an active agent, whereas the one that receives it is a passive agent. In this
sense, x1 and x2 may be seen as the rates at which the two agents synchronize to perform the two actions.
To learn the meaning of the symbolic variables x1 and x2 we can now use the syncInfo data structure
In the above, we see that x1 is a class-1 departure from station 1 (the delay) into station 2 (the processor sharing queue), and vice versa x2 is a departure from station 2 that joins station 1.
Chapter 3
Network models
Throughout this chapter, we discuss the specification of Network models, which are extended queueing networks. LINE currently supports open, closed, and mixed networks with non-exponential service and arrivals, and state-dependent routing. All solvers support the computation of basic performance metrics, while some more advanced features are available only in specific solvers. Each Network model requires as input a description of the nodes, the network topology, and the characteristics of the jobs that circulate within the network. As output, LINE returns performance and reliability metrics.
The default metrics supported by all solvers are as follows:
• Mean queue-length (QLen). This is the mean number of jobs residing at a node when this is observed
at a random instant of time.
• Mean utilization (Util). For nodes that serve jobs, this is the mean fraction of time the node is busy
processing jobs. In both single-server and multi-server nodes, this is a number normalized between
0 and 1, corresponding to 0% and 100%. In infinite-server nodes, the utilization is set by convention
equal to the mean queue-length, therefore taking the interpretation of the mean number of jobs in
execution at the station.
• Mean response time (RespT). This is the mean time a job spends traversing a node within a network.
If the node is visited multiple times, the response time is the time spent for a single visit to the node.
• Mean residence time (ResidT). This is the total time a job accumulates, on average, to traverse a node within a network. If the node is visited multiple times, the residence time is the time accumulated over all visits to the node prior to returning to the reference station or arriving at a sink.
• Mean throughput (Tput). This is the mean departure rate of jobs completed at a resource per time unit. Typically, this matches the mean arrival rate, unless the node switches the class of the jobs, in which case the arrival rate of a class may not match its departure rate.
The above metrics refer to the performance characteristics of individual nodes. Response times and throughputs can also be system-wide, meaning that they can describe end-to-end performance during the visit to the network. In this case, these metrics are called system metrics.
model = Network('myModel')
The returned object of the Network class offers functions to instantiate and manage resource nodes (stations, delays, caches, ...) visited by jobs of several types (classes).
A node is a resource in the network that can be visited by a job. A node must have a unique name and can
either be stateful or stateless, the latter meaning that the node does not require state variables to determine
its state or actions. If jobs visiting a stateful node can be required to spend time in it, the node is also said to
be a station. A list of nodes available in Network models is given in Table 3.1.1.
We now provide more details on each of the nodes available in Network models.
Queue node. A Queue specifies a queueing station from its name and scheduling strategy; for instance, the call below specifies a first-come first-served queue. It is alternatively possible to instantiate a queue using the QueueingStation constructor, which is merely an alias for Queue.
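For instance, as in the getting-started examples:
queue = Queue(model, 'Queue1', SchedStrategy.FCFS)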
Queueing stations have by default a single server. The setNumberOfServers method can be used
to instantiate multi-server stations.
Valid scheduling strategies are specified within the SchedStrategy static class and include:
• First-come first-served (SchedStrategy.FCFS)
• Infinite-server (SchedStrategy.INF)
• Processor-sharing (SchedStrategy.PS)
• Service in random order (SchedStrategy.SIRO)
• Discriminatory processor-sharing (SchedStrategy.DPS)
• Generalized processor-sharing (SchedStrategy.GPS)
• Shortest expected processing time (SchedStrategy.SEPT)
• Shortest job first (SchedStrategy.SJF)
• Head-of-line priority (non-preemptive) (SchedStrategy.HOL)
• Polling (SchedStrategy.POLLING)
If a strategy requires class weights, these can be specified directly as an argument to the setService
function or using the setStrategyParam function, see later the description of DPS scheduling for an
example.
Delay node. Delay stations, also called infinite server stations, may be instantiated either as objects of
Queue class, with the SchedStrategy.INF scheduling strategy, or using the following specialized
constructor
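For instance, as in the earlier examples:
delay = Delay(model, 'Delay')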
As for queues, for readability it is possible to instantiate delay nodes using the DelayStation class
which is an alias for the Delay class.
Source and Sink nodes. As seen in the M/M/1 getting started example, these nodes are mandatory elements for the specification of open classes. Their constructor only requires a specification of the unique name associated with the nodes:
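For instance, as in the earlier examples:
source = Source(model, 'Source')
sink = Sink(model, 'Sink')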
Fork and Join nodes. The fork and join nodes are currently available only for the JMT solver. The Fork
splits an incoming job into a set of sibling tasks, sending out one task for each outgoing link. These tasks
inherit the class of the original job and are served as normal jobs until they are reassembled at a Join
station.
The specification of Fork and Join nodes only requires the name of the node
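A sketch (the constructor signatures are assumptions; as noted below, the join may additionally need to be bound to its fork node):
fork = Fork(model, 'Fork')
join = Join(model, 'Join')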
The number of tasks sent by a Fork on each output link can be set using the setTasksPerLink
method of the fork object. To enable effective analytical approximations, presently LINE requires that every join node is bound to a specific fork node, although specific solvers will ignore this information (e.g., JMT).
Also note that the routing probabilities out of the Fork node need to be set to 1.0 towards every other
node connected to the Fork. For example, a Fork sending jobs in class 1 to nodes A, B and C, cannot
send jobs in class 2 only to A and B: it must send them to all three connected nodes A, B and C. A new
fork node visited only by class-2 jobs needs to be created in order to send that class of jobs only to A and B.
After splitting a job into tasks, LINE takes the convention that visit counts refer to the average number of passages at the target resources for the original job, scaled by the number of tasks. For example, if a job is split into two tasks at a fork node, each visiting respectively nodes A and B, the average visit count at A and B will be 0.5.
ClassSwitch node. This is a stateless node to change the class of a transiting job based on a static proba-
bilistic policy. For example, it is possible to specify that all jobs belonging to class 1 should become of class
2 with probability 1.0, or that a transiting job of class 2 should become of class 1 with probability 0.3. This
example is instantiated as follows
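A sketch of this example, mirroring the class-switching matrix calls shown later in this chapter (the node name is illustrative):
csnode = ClassSwitch(model, 'CS')
C = csnode.initClassSwitchMatrix()
C[0][1] = 1.0   # class 1 becomes class 2 with probability 1.0
C[1][0] = 0.3   # class 2 becomes class 1 with probability 0.3
C[1][1] = 0.7   # class 2 otherwise remains in class 2
csnode.setClassSwitchingMatrix(C)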
Cache node. This is a stateful node to store one or more items in a cache of finite size, for which it is
possible to specify a replacement policy. The cache constructor requires the total cache capacity and the
number of items that can be referenced by the jobs in transit, e.g.,
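For instance, reusing the constructor call from Example 6:
cacheNode = Cache(model, 'Cache1', 1000, 50, ReplacementStrategy.LRU)   # 1000 referenceable items, capacity of 50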
If the capacity is a scalar integer (e.g., 15), then it represents the total number of items that can be
cached and the value cannot be greater than the number of items. Conversely, if it is a vector of integers
(e.g., [10,5]) then the node is a list-based cache, where the vector entries specify the capacity of each list.
We point to [25] for more details on list-based caches and their replacement policies.
Available replacement policies are specified within the ReplacementStrategy static class and include:
Router node. This node is able to route jobs according to a specified RoutingStrategy, which can
either be probabilistic or not (e.g., round-robin). Upon entering a Router, a job neither waits nor receives
service; it is instead directly forwarded to the next node according to the specified routing strategy. A
Router can be instantiated as follows:
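For instance, as in Example 4:
router = Router(model, 'Router')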
An example of use of this node is given in Section 2.2.6. Routing strategies need to be specified for each
class using the setRouting method and valid choices are as follows
• Random routing (RoutingStrategy.RAND)
• Round robin (RoutingStrategy.RROBIN)
• Probabilistic routing (RoutingStrategy.PROB)
• Join-the-shortest-queue (RoutingStrategy.JSQ)
For example, assume that oclass is a class of jobs. In order to route jobs in this class with equal probabil-
ities to every outgoing link we set
router.setRouting(oclass, RoutingStrategy.RAND)
It should be noted that setRouting is also available for all other nodes such as queueing stations, delays,
etc. Therefore, the added value of the Router node is the ability to represent certain system elements that
centralize the routing logic, such as load balancers.
Logger node. A logger node is a node that closely resembles the logger node available in the JSIMgraph
simulator within JMT. At present, models that include this element can only be solved using the JMT solver.
A Logger node records information about passing jobs in a csv file, such as the timestamp of passage
and general information about the jobs. The node can be instantiated as follows
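A sketch (the constructor signature and the log file argument are assumptions):
logger = Logger(model, 'Logger1', 'logfile.csv')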
The routing behavior of jobs can be set up as explained for regular nodes such as queues or delay stations.
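Class weights
For weight-based policies such as DPS and GPS, the class weight can be passed when assigning the service process; a sketch, in which the weight appears as an extra argument to setService (the exact signature is an assumption):
queue.setService(class2, Exp(1.0), 5.0)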
assigns a weight 5.0 to jobs in class 2. The default weight of a class is 1.0.
Finite buffers
The functions setCapacity and setChainCapacity of the Station class are used to place constraints on the number of jobs, total or for each chain, that can reside within a station. Note that LINE does not allow one to specify buffer constraints at the level of individual classes unless chains contain a single class, in which case setChainCapacity is sufficient for the purpose.
For example,
creates an example model with two chains and three classes (specified in the example_closedModel_3 example) and requires the second station to accept a maximum of one job in each chain. Note that if we were to ask for a higher capacity, such as setChainCapacity([1,7]), which exceeds the total job population in chain 2, LINE would automatically reduce the value 7 to the chain 2 job population (2). This automatic correction ensures that functions that analyze the state space of the model do not generate unreachable states.
The refreshCapacity function updates the buffer parameterizations, performing appropriate sanity checks. Since example_closedModel_3 has already invoked a solver prior to our changes, the requested modifications are materially applied by LINE to the network only after calling an appropriate refreshStruct function, see the sensitivity analysis section. If the buffer capacity changes were made before the first solver invocation on the model, then there would be no need for a refreshCapacity call, since the internal representation of the Network object used by the solvers is still to be created.
Open classes
The constructor for an open class only requires the class name and the creation of special nodes called
Source and Sink
Sources are special stations holding an infinite pool of jobs and representing the external world. Sinks are
nodes that route a departing job back into this infinite pool, i.e., into the source. Note that a network can
include at most a single Source and a single Sink.
Once source and sink are instantiated in the model, it is possible to instantiate open classes using
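For instance (the class name is illustrative):
oclass = OpenClass(model, 'Class1')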
LINE does not require explicitly associating source and sink with the open classes in their constructors, as this is done automatically. However, the LINE language requires explicitly creating these nodes, since the routing topology needs to indicate the arrival and departure points of jobs in open classes. Conversely, if the network does not include open classes, the user will not need to instantiate a Source and a Sink.
Closed classes
To create a closed class, we need instead to indicate the number of jobs that start in that class (e.g., 5 jobs)
and the reference station for that class (e.g., queue), i.e.:
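For instance (the class name is illustrative):
cclass = ClosedClass(model, 'Class1', 5, queue)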
The reference station indicates a point in the network used to calculate certain performance indexes, called
system performance indexes. The end-to-end response time for a job in an open class to traverse the system
is an example of a system performance index (system response time). The reference station of an open
class is always automatically set by LINE to be the Source. Conversely, the reference station needs to
be indicated explicitly in the constructor for closed classes since the point at which a class job completes
execution depends on the semantics of the model.
LINE also supports a special class of jobs, called self-looping jobs, which perpetually loop at the reference station, remaining in their class. The following example shows the syntax to specify a self-looping job, which is identical to closed classes but there is no need later to specify routing information.
model = Network('model')
delay = Delay(model, 'Delay')
queue = Queue(model, 'Queue1', SchedStrategy.FCFS)
cclass = ClosedClass(model, 'Class1', 10, delay, 0)
slclass = SelfLoopingClass(model, 'SLC', 1, queue, 0)
delay.setService(cclass, Exp(1.0))
queue.setService(cclass, Exp(1.5))
3.1. NETWORK OBJECT DEFINITION 37
queue.setService(slclass, Exp(1.5))
P = model.initRoutingMatrix()
P[0] = [[0.7,0.3],[1.0,0.0]]
model.link(P)
Note that any routing information specified for the self-looping class will be ignored.
Mixed models
LINE also accepts models where a user has instantiated both open and closed classes. The only requirement is that, if two classes communicate by means of a class-switching mechanism, then the two classes must either be both closed or both open. In other words, classes in the same chain must either be all closed or all open. Furthermore, for all closed classes in the same chain, the reference station is required to be the same.
Class priorities
If a class has a priority, with 0 representing the highest priority, this can be specified as an additional
argument to both OpenClass and ClosedClass, e.g.,
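For instance (the priority values are illustrative; 0 is the highest priority):
oclass = OpenClass(model, 'Class1', 0)
cclass = ClosedClass(model, 'Class2', 5, queue, 1)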
In Network models, priorities are intended as hard priorities and the only supported priority scheduling
strategy (SchedStrategy.HOL) is non-preemptive. Weight-based policies such as DPS and GPS may
be used, as an alternative, to prevent starvation of jobs in low-priority classes.
Class switching
In LINE, jobs can switch their class while they travel between nodes (including self-loops on the same node). For example, this feature can be used to model queueing properties such as re-entrant lines, in which a job visiting a station a second time may require a different average service demand than at its first visit.
A chain defines the set of classes reachable by a job that starts in a given class r and over time changes class. Since class switching in LINE does not allow a closed class to become open, and vice versa, chains can themselves be classified into open chains and closed chains, depending on the classes that compose them.
Jobs in open classes can only switch to another open class. Similarly, jobs in closed classes can only
switch to a closed class. Thus, class switching from open to closed classes (or vice-versa) is forbidden.
More details about class-switching are given in Section 3.1.5.
Reference station
We have shown earlier that the specification of classes requires choosing a reference station. In LINE, reference stations are properties of chains, thus if two closed classes belong to the same chain they must
have the same reference station. This avoids ambiguities in the definition of the completion point for jobs
within a chain.
For example, the system throughput for a chain is defined as the sum of the arrival rates at the reference
station for all classes in that chain. That is, the solver counts a return to the reference station as a completion
of the visit to the system. In the case of open chains, the reference station is always the Source and the
system throughput corresponds to the rate at which jobs arrive at the Sink, which may be seen as the
arrival rate seen by the infinite pool of jobs in the external world. If there is no class switching, each chain
contains a single class, thus per-chain and per-class performance indexes will be identical.
Reference class
Occasionally, it is possible to encounter situations where a job needs to change class while remaining inside
the same station. In this case, LINE modifies the network automatically to introduce a class-switching node
for the job to route out of the station and immediately return to it in the new class.
One complication of the approach is that, by departing the node and returning to it, the job visits
the station one additional time, affecting the visit count to the station and therefore performance metrics
such as the residence time. To cope with this issue, LINE offers a method for the class objects, called
setReferenceClass, that allows users to specify whether the visit of that class to the reference station
should be considered upon computing the residence times across the network for the chain to which the
class belongs. By default, all classes traversing the reference station are used in the visit count calculation.
Jobs travel between nodes according to the network topology and a routing strategy. Typically a queueing
network will use a probabilistic routing strategy (RoutingStrategy.PROB), which requires specifying
routing probabilities among the nodes. The simplest way to specify a large routing topology is to define
the routing probability matrix for each class, followed by a call to the link function. This function will
automatically add certain nodes to the network to ensure the correct class switching for jobs moving between
stations (ClassSwitch elements).
In the running case, we may instantiate a routing topology as follows:
P = model.initRoutingMatrix()
P.set(class1, class1, source, queue, 1.0)
P.set(class1, class1, queue, queue, 0.3) # self-loop with probability 0.3
P.set(class1, class1, queue, delay, 0.7)
P.set(class1, class1, delay, sink, 1.0)
P.set(class2, class2, delay, queue, 1.0) # note: closed class jobs start at delay
P.set(class2, class2, queue, delay, 1.0)
model.link(P)
When used as arguments within a matrix or list, class and node objects are replaced by a corresponding numerical index. Normally, the indexing of classes and nodes matches the order in which they are instantiated in the model, and one can therefore specify the routing matrices using this property. In this case, we would have
P = model.initRoutingMatrix()
pmatrix = np.empty(K, dtype=object)
pmatrix[0] = [[0,1,0,0], [0,0.3,0.7,0], [0,0,0,1], [0,0,0,0]]
pmatrix[1] = [[0,0,0,0], [0,0,1,0], [0,1,0,0], [0,0,0,0]]
P.setRoutingMatrix(jobclass, node, pmatrix)
Where needed, the getClassIndex and getNodeIndex functions return the numerical index associ-
ated with a node name, for example model.getNodeIndex('Delay'). Class and node names in a
network must be unique. The list of names already assigned to nodes in the network can be obtained with
the getClassNames, getStationNames, and getNodeNames functions of the Network class.
It is also important to note that the routing matrix in the last example is specified between nodes, instead
of between just stations or stateful nodes, which means that for example elements such as the Sink need
to be explicitly considered in the routing matrix. The only exception is that ClassSwitch elements do
not need to be explicitly instantiated in the routing matrix, provided that one uses the link function to
instantiate the topology. Note that the routing matrix assigned to a model can be printed on the screen in a
human-readable format using the printRoutingMatrix function, e.g.,
model.printRoutingMatrix()
prints
The above routing specification style is only for models with probabilistic routing strategies between every pair of nodes. A different style should be used for routing strategies that do not require explicit routing probabilities, as in the case of state-dependent routing. Currently supported strategies include:
For the above policies, the function addLink should first be used to specify pairs of connected nodes.
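A sketch, assuming the queue and delay nodes defined above and that addLink accepts a pair of nodes (the plural addLinks form shown in Example 4 is equivalent):
model.addLink(queue, delay)
model.addLink(delay, queue)
The routing strategy for each class is then chosen with setRouting; for example,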
queue.setRouting(class1,RoutingStrategy.RROBIN)
assigns round robin among all outgoing links from the queue node.
A model could also include both classes with probabilistic routing strategies and classes that use round-
robin or other non-probabilistic strategies. To instantiate routing probabilities in such situations one should
then use, e.g.,
queue.setRouting(class1,RoutingStrategy.PROB)
queue.setProbRouting(class1, queue, 0.7)
queue.setProbRouting(class1, delay, 0.3)
model.link(Network.serialRouting(source,queue,sink))
In a similar fashion, we can also rapidly instantiate a tandem network consisting of stations with PS and INF
scheduling as follows
lam = [10,20]
D = [[11,12], [21,22]] # D(i,r) - class-r demand at station i (PS)
Z = [[91,92], [93,94]] # Z(i,r) - class-r demand at station i (INF)
modelPsInf = Network.tandemPsInf(lam,D,Z)
The above snippet instantiates an open network with two queueing stations (PS), two delay stations (INF),
and exponential distributions with the given inter-arrival rates and mean service times. The Network.tandemPs,
Network.tandemFcfs, and Network.tandemFcfsInf functions provide static constructors for
networks with other combinations of scheduling policies, namely only PS, only FCFS, or FCFS and INF.
A tandem network with closed classes is instead called a cyclic network. Similar to tandem networks,
LINE offers a set of static constructors:
• Network.cyclicPs: cyclic network of PS queues
• Network.cyclicPsInf: cyclic network of PS queues and delay stations
• Network.cyclicFcfs: cyclic network of FCFS queues
• Network.cyclicFcfsInf: cyclic network of FCFS queues and delay stations
These functions only require replacing the arrival rate vector (lam above) with a vector N specifying the job populations for each of the closed classes, e.g.,
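A sketch, reusing the D and Z matrices defined above (the populations are illustrative and the argument order mirrors tandemPsInf):
N = [16, 8]   # job populations of the two closed classes
modelCyclic = Network.cyclicPsInf(N, D, Z)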
Importantly, LINE assumes that a job switches class an instant after leaving a station, thus the performance metrics of a class at the node refer to the class that jobs had upon arrival to that node.
# Block 1: nodes
...
csnode = ClassSwitch(model, 'ClassSwitch 1')
# Block 2: classes
jobclass = np.empty(2, dtype=object)
jobclass[0] = OpenClass(model, 'Class1', 0)
jobclass[1] = OpenClass(model, 'Class2', 0)
...
# Block 3: topology
C = csnode.initClassSwitchMatrix()
C[0][1] = 1.0
C[1][1] = 1.0
csnode.setClassSwitchingMatrix(C)
Note that for a network with M stations, up to M² ClassSwitch elements may be required to implement class-switching across all possible links, including self-loops.
Cache-based class-switching
An advanced feature of LINE, available for example within the Cache node, is that the class-switching decision can dynamically depend on the state of the node (e.g., cache hit/cache miss). However, in order to statically determine chains, LINE requires that every class-switching node declares the pairs of classes that can potentially communicate with each other via a switch. This is called the class-switching mask and it is automatically computed. The boolean matrix returned by the model.getClassSwitchingMask function provides this mask, which has an entry in row r and column s set to true only if jobs in class r can switch into class s at some node in the network.
Upon cache hit or cache miss, a job in transit is switched to a user-specified class, as specified by the
setHitClass and setMissClass, so that it can be routed to a different destination based on whether it
found the item in the cache or not. The setRead function allows the user to specify a discrete distribution
(e.g., Zipf, DiscreteSampler) for the frequency at which an item is requested. For example,
refModel = Zipf(0.5,nitems)
cacheNode.setRead(initClass, refModel)
cacheNode.setHitClass(initClass, hitClass)
cacheNode.setMissClass(initClass, missClass)
Here initClass, hitClass, and missClass can be either open or closed classes, instantiated as usual with the OpenClass or ClosedClass constructors.
• n-phase Erlang distribution: Erlang(α, n), where α is the rate of each of the n exponential phases
• 2-phase Coxian distribution: Coxian(µ1 , µ2 , ϕ1 ), which assigns phases µ1 and µ2 to the two rates,
and completion probability from phase 1 equal to ϕ1 (the probability from phase 2 is ϕ2 = 1.0).
• n-phase Coxian distribution: Coxian(µ, ϕ), which builds an arbitrary Coxian distribution from a
vector µ = [µ1 , . . . , µn ] of n rates and a completion probability vector ϕ = [ϕ1 , . . . , ϕn ] with ϕn =
1.0.
• n-phase acyclic phase-type distribution: APH(α, T ), which defines an acyclic phase-type distribution
with initial probability vector α = [α1 , . . . , αn ] and transient generator T .
For example, given mean µ = 0.2 and squared coefficient of variation SCV=10, where SCV=variance/µ^2,
we can assign to a node a 2-phase Coxian service time distribution with these moments as
queue.setService(class2, Cox2.fitMeanAndSCV(0.2,10.0))
where Cox2 is a static class to fit 2-phase Coxian distributions. Inter-arrival time distributions can be
instantiated in a similar way, using setArrival instead of setService on the Source node. For
example, if the Source is node 3 we may assign the inter-arrival times of class 2 to be exponential with
mean 0.1 as follows
source.setArrival(class2, Exp.fitMean(0.1))
It is also possible to plot the structure of a phase-type distribution using the PhaseType.plot static method.
Non-Markovian distributions are also available, but they typically restrict the available solvers to the JMT simulator. They include the following distributions:
• Uniform distribution: Uniform(a, b) assigns uniform probability 1/(b − a) to the interval [a, b].
• Gamma distribution: Gamma(α, k) assigns a gamma density with shape α and scale k.
• Pareto distribution: Pareto(α, k) assigns a Pareto density with shape α and scale k.
Lastly, we discuss two special distributions. The Disabled distribution can be used to explicitly forbid a class from receiving service at a station. This may be useful in models with sparse routing matrices to debug the model specification. Performance metrics for disabled classes will be set to NaN.
Conversely, the Immediate class can be used to specify instantaneous service (zero service time). Normally, L INE solvers will replace zero service times with small positive values (ε = GlobalConstants.FineTol).
Fitting a distribution
The fitMeanAndSCV function is available for all distributions that inherit from the PhaseType class.
This function provides exact or approximate matching of the first two moments, depending on the theoretical
constraints imposed by the distribution. For example, an Erlang distribution with SCV=0.75 does not exist,
because in an n-phase Erlang it must be SCV=1/n. In a case like this, Erlang.fitMeanAndSCV(1,0.75) will return the closest approximation, e.g., a 2-phase Erlang (SCV=0.5) with unit mean. The Erlang distribution also offers a function fitMeanAndOrder(µ, n), which instantiates an n-phase Erlang with given mean µ.
In distributions that are uniquely determined by more than two moments, fitMeanAndSCV chooses a
particular assignment of the residual degrees of freedom other than mean and SCV. For example, HyperExp
depends on three parameters, therefore it is insufficient to specify mean and SCV to identify the distribution.
Thus, HyperExp.fitMeanAndSCV automatically chooses to return a probability of selecting phase 1
equal to 0.99. Compared to other choices, this particular assignment corresponds to a higher probability
mass in the tail of the distribution. HyperExp.fitMeanAndSCVBalanced instead assigns p in a two-
phase hyper-exponential distribution so that p/µ1 = (1 − p)/µ2 .
Moreover, the sample function can be used to generate values from the obtained distribution, e.g. we can
generate 3 samples as
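A minimal sketch, assuming the distribution fitted earlier and that sample takes the number of values to draw (an assumption):
dist = Cox2.fitMeanAndSCV(0.2, 10.0)
values = dist.sample(3)   # draw 3 pseudo-random samples from the fitted distribution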
The evalCDF and evalCDFInterval functions return the cumulative distribution function at the spec-
ified point or within a range, e.g.,
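A sketch of both calls, with assumed signatures:
Fx = dist.evalCDF(1.0)                  # P(X <= 1.0)
Fab = dist.evalCDFInterval(0.5, 1.5)    # probability that X falls in (0.5, 1.5]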
For more advanced uses, the distributions of the PhaseType class also offer the possibility to obtain
the standard (D0 , D1 ) representation used in the theory of Markovian arrival processes by means of the
getRepresentation function [5].
Load-dependent service
A queueing station i is called load-dependent whenever its service rate is a function of the number ni
of resident jobs at the station, summed across the ones in service and the ones in the waiting buffer. For
example, a multi-server station with c identical servers, each with processing rate µ, may be shown to behave
similarly to a single-server load-dependent station where the service rate is µ(ni ) = µα(ni ) = µ min(ni , c).
L INE presently supports limited load-dependence [11], meaning that it is possible to specify the form
of the load-dependent service up to a finite range of ni . As such, the support is currently limited to closed
models, which are guaranteed to have a finite population at all times.
To specify a load-dependent service for a queueing station over the range ni ∈ [1, N ] it is sufficient to call the setLoadDependence method, passing as input a vector of size N with the scaling factor
values for each ni . For example, to instantiate a c-server node we write
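The following is a minimal sketch, assuming a closed model with N = 4 jobs and c = 2 servers (illustrative values):
c = 2
N = 4
queue.setLoadDependence([min(n, c) for n in range(1, N + 1)])   # alpha(n) = min(n, c) for n = 1..N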
where the i-th element of the vector argument of setLoadDependence is the scaling factor α(ni ). It is
assumed by default that α(0) = 1.
Class-dependent service
A generalization of load-dependent service is class-dependent service, where the service rate is now a func-
tion of the vector ni = [ni,1 , . . . , ni,R ], where ni,r is the current number of class-r jobs at station i.
L INE supports class-dependence in the MVA solver, provided that this is specified as a function handle.
The solver implicitly assumes that the function is smooth and defined also for fractional values of ni,r . For
example, in a two-class model we may write
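A hedged sketch; the setClassDependence method name is an assumption, and the handle receives the per-class queue-length vector ni = [ni,1, ni,2]:
# multi-server-like scaling as a function of [n1, n2]; note min is defined for
# fractional arguments even though it is not smooth
queue.setClassDependence(lambda n: min(n[0] + n[1], 2))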
It is sometimes useful to specify the statistical properties of a time series of service or inter-arrival times, as
in the case of systems with short- and long-range dependent workloads. When the model is stochastic, we
refer to these as situations where one specifies a process, as opposed to only specifying the distribution of
the service or inter-arrival times. In L INE processes include the 2-state Markov-modulated Poisson process
(MMPP2) and empirical traces read from files (Replayer).
For the latter, L INE assumes that empirical traces are supplied as text files (ASCII), formatted as a
column of numbers. Once specified, the Replayer object can be used as any other distribution. This
means that it is possible to run a simulation of the model with the specified trace. However, analytical
solvers will require tractable distributions from the PhaseType class.
3.2 Internals
In this section, we discuss the internal representation of the Network objects used within the L INE solvers.
By applying changes directly to this internal representation it is possible to considerably speed up the se-
quential evaluation of models.
sn = model.getStruct()
The next tables present the properties of the NetworkStruct class. The first table gives the invariant properties of the class, which specify the initial parameterization of the nodes and which we refer to as static parameters. The second table further details parameters that require algorithmic calculations to be derived from the static parameters.
TODO: under development
TODO: under development
For advanced nodes, such as Cache and Transition, additional parameters are specified under the nodeparam
cell array for the corresponding node. Table 3.2.1 illustrates the properties specified within the nodeparam{i}
cell array for Transition node i. Table 3.2.1 similarly illustrates the properties in nodeparam{i} array for
a Cache node i.
As shown in the tables, internally to L INE there is an explicit differentiation between properties of nodes, stations, and stateful nodes. This distinction has an impact in particular on routing and class-switching mechanisms, and also allows solvers to better differentiate between different kinds of nodes.
In some cases, one may want to access properties of nodes that are contained in NetworkStruct fields which are, however, referenced by station or stateful node index. To help in this and similar situations, the NetworkStruct class also provides static methods to quickly convert between the indexing of nodes, stations, and stateful nodes used in referencing its data structures:
• nodeToStateful
• nodeToStation
• stationToNode
• stationToStateful
• statefulToNode
As an example, we can determine the portion of the nodevisits field that refers to stateful nodes in chain
c = 1 as follows
An example is shown in Figure 3.1 below. Using a related function, jsimwView, it is also possible
to export the model to the JSIMwiz environment, which offers a wizard-based interface.
Another way to debug a L INE model is to transform it into a MATLAB graph object, e.g.
plots a graph of the network topology in terms of stations only. In a similar manner, the following variant
of the same command shows the model in terms of nodes, which corresponds to the internal representation
within L INE.
which also adds automatic node coloring to highlight the class-switch nodes.
Figure 3.2 shows the difference between the two commands for an open queueing network with two
classes and class-switching. Weights on the edges correspond to routing probabilities. In the station topology
on the left, note that since the Sink node is not a station, departures to the Sink are drawn as returns to
the Source. The node topology on the right illustrates all nodes, including ClassSwitch nodes that
are automatically added by L INE to apply the class-switching routing strategy. Double arcs between nodes
indicate that both classes are routed to the destination.
Furthermore, the graph properties concisely summarize the key features of the network
Here, Edge.Weight is the routing probability between the nodes, whereas Edge.Rate is the service
rate of the node appearing in the first column under EndNodes.
Figure 3.2: getGraph function: station topology (left) and node topology (right) for a 2-class tandem
queueing network with class-switching.
transforms and saves the given JSIMgraph model into a corresponding L INE model.
L INE also provides two static functions to inspect jsimg and jsimw files before conversion, called SolverJMT.jsimgOpen and SolverJMT.jsimwOpen. Both require as input only the JMT file name, e.g., myModel.jsimg.
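For example (a sketch; each function takes only the file name):
SolverJMT.jsimgOpen('myModel.jsimg')   # open the .jsimg model in JMT for inspection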
It is also possible to automate the editing and import of JMT models from MATLAB using the jsimgEdit
command. This opens an empty JMT model and, upon saving, the model is automatically reimported into MATLAB.
• Fork and Join are supported with their default policies. Advanced policies, such as partial joins or
setting a distribution for the forked tasks on each output link, are not supported yet.
• a single Sink and a single Source can be instantiated in a L INE model, whereas there is no such
constraint in JMT.
Table 3.4: Supported JSIM features for automated model import and analysis
JMT Feature Support Notes
Distributions Full Phase-Type, Burst (MAP), Burst (MMPP2), Deterministic, Disabled, Exponential, Erlang, Gamma, Hyperexponential, Coxian, Logistic, Pareto, Uniform, Zero Service Time, Replayer, Weibull
Classes Full Open class, Closed class, Class priorities
Metrics Full Number of customers, Residence Time, Throughput, Response Time, Throughput per sink, Utilization, Arrival Rate
Nodes Full Finite Capacity Region, ClassSwitch, Place, Delay, Logger, Queue, Router, Transition
Routing Full Random, Probabilities, Round Robin, Join the Shortest Queue
Mechanisms Full N/A
Scheduling Full FCFS, HOL, LCFS, LCFS-PR, SIRO (Random), SJF, SEPT, LJF, LEPT, PS, DPS, GPS, PS Priority, DPS Priority, GPS Priority
Nodes Partial Fork, Join, Source, Sink
Distributions No Burst (General), Normal
Nodes No Scaler, Semaphore
Routing No Shortest Response Time, Least Utilization, Fastest Service, Load Dependent, Class Switch Routing
Metrics No Drop rate, Response time per sink, Power
Scheduling No LPS, EDD, EDF, TBS, SRPT, QBPS
Mechanisms No Load Dependence, Retrial, Impatience, Soft deadlines, Parallelism, Heterogeneous servers, Server Compatibilities, Setup times, Polling, Switchover times
Chapter 4
Analysis methods
Step 2: Instantiation of the solver(s). A solver is an instance of the Solver class. L INE offers multiple
solvers, which can be configured through a set of common and individual solver options. For example,
solver = SolverJMT(model)
returns a handle to a simulation-based solver based on JMT, configured with default options.
Step 3: Solution. Finally, this step solves the network and retrieves the concrete values for the performance
indexes of interest. This may be done as follows, e.g.,
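A sketch of the per-metric calls, assuming station-level analogues of the chain-level methods described below (method names assumed):
QN = solver.getAvgQLen()    # mean queue-lengths by station and class
UN = solver.getAvgUtil()    # utilizations
RN = solver.getAvgRespT()   # mean response times (per visit)
TN = solver.getAvgTput()    # throughputs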
Alternatively, all the above metrics may be obtained in a single method call as
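(The order of the returned matrices in this sketch is an assumption.)
QN, UN, RN, TN = solver.getAvg()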
In the methods above, L INE assigns station and class indexes (e.g., i, r) in order of creation of the corresponding station and class objects. However, large models may be easier to debug by
checking results using class and station names, as opposed to indexes. This can be done either by requesting
L INE to build a table with the result
AvgTable = solver.getAvgTable()
which however tends to be a rather slow data structure to use in case of repeated invocations of the solver,
or by indexing the matrices returned by getAvg using the model objects. That is, if the first instantiated
node is queue with name MyQueue and the second instantiated class is cclass with name MyClass,
then the following commands are equivalent
Similar methods are defined to obtain aggregate performance metrics at chain level at each station, namely
getAvgQLenChain for queue-lengths, getAvgUtilChain for utilizations, getAvgRespTChain
for response times, getAvgTputChain for throughputs, and the getAvgChain method to obtain all
the previous metrics.
Note that the completes property of a class always refers to the reference station for the chain.
• cj : class of the job waiting in position j ≤ b of the buffer, out of the b currently occupied positions. If
b = 0, then the state vector is indicated with a single empty element c1 = 0.
• kj : service phase of the job waiting in position j ≤ b of the buffer, out of the b currently occupied
positions.
Here, the phases are the states of a distribution of class PhaseType. If the distribution is not Markovian, then there is a single phase. With these definitions, the table below illustrates how to specify
in L INE a valid state for a station depending on its scheduling strategy. There, S is the number of servers of
the queueing station. All state variables are non-negative integers. The SchedStrategy.EXT policy is
used for the Source node, which may be seen as a special station with an infinite pool of jobs sitting in the
buffer and a dedicated server for each class r = 1, ..., R.
States can be manually specified or enumerated automatically. L INE library functions for handling and
generating states are as follows:
• State.fromMarginal: enumerates all states that have the same marginal state [n1 , n2 , ..., nR ].
• State.toMarginal: extracts marginal statistics from a state, such as the total number of jobs in
a given class that are running at the station in a certain phase.
Note that if a function call returns an empty state ([]), this should be interpreted as an indication that no valid state exists that meets the required criteria. Often, this is because the state supplied as input is invalid.
Example
We consider the example network in TODO. We look at the state of station 3, which is a multi-server FCFS
station. There are 4 classes all having exponential service times except class 2 that has Erlang-2 service
times. We are interested in states with 2 running jobs in class 1 and 1 in class 2, and with 2 jobs, of classes 3 and 4 respectively, waiting in the buffer. We can automatically generate this state space, which we store in
the space variable, as:
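A sketch of the call, assuming fromMarginalAndRunning takes the network struct, the station index, the per-class job counts, and the per-class counts of running jobs (signature and indexing base are assumptions):
sn = model.getStruct()
space = State.fromMarginalAndRunning(sn, 3, [2, 1, 1, 1], [2, 1, 0, 0])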
Here, each row of space corresponds to a valid state. The argument TODO gives the number of jobs in the
node for the 4 classes, while TODO gives the number of running jobs in each class. This station has four
valid states, differing in whether the class-2 job runs in the first or in the second phase of the Erlang-2 and
on the relative position of the jobs of class 3 and 4 in the waiting buffer.
To obtain states where the jobs have just started running, we can instead use
We see that the above state space restricted the one obtained with State.fromMarginalAndRunning
to states where the job in class 1 is always in the first phase.
If we instead remove the specification of the running jobs, we can use State.fromMarginal to
generate all possible combinations of states depending on the class and phase of the running jobs. In the
example, this returns a space of 20 possible states.
It is possible to assign the initial state to a station using the setState function on that station’s object.
L INE offers the possibility to specify a prior probability on the initial states, so that if multiple states have a
non-zero prior, then the solver will need to solve an independent model using each one of those initial states,
and then carry out a weighting of the results according to the prior probabilities. The default is to assign a
probability 1.0 to the first specified state. The functions setStatePrior and getStatePrior can
be used to check and change the prior probabilities for the initial states specified for a station or stateful
node.
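A sketch of assigning initial states and a prior, reusing the struct and station handle from the previous example (the exact argument formats are assumptions):
space = State.fromMarginal(sn, 3, [2, 1, 1, 1])        # candidate initial states (illustrative arguments)
queue.setState(space)                                   # register the candidate initial states
queue.setStatePrior([0.5, 0.5] + [0.0] * (len(space) - 2))   # non-zero prior on the first two states
prior = queue.getStatePrior()                           # inspect the current prior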
• the servers at each reference station are occupied by jobs in class order, i.e., jobs in the first created class are assigned to the servers, then spare servers are allocated to jobs in the second class, and so forth;
• if the scheduling strategy requires it, jobs are ordered in the buffer by class, with the first created class at the head and the last created class at the tail of the buffer.
where n(t) is an arbitrary performance index, e.g., the queue-length of a given class at time t.
We now consider instead the computation of the quantity E[n(t)|s0 ], which is the transient average
of the performance index, conditional on a given initial system state s0 . Compared to n(t), this quantity
averages the system state at time t across all possible evolutions of the system from state s0 during the t
time units, weighted by their probability. In other words, we observe all possible stochastic evolutions of
the system from state s0 for t time units, recording the final values of n(t) in each trajectory, and finally
average the recorded values at time t to obtain E[n(t)|s0 ].
After solving the model, we will be able to retrieve both steady-state and transient averages as follows
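A minimal sketch, assuming a transient-capable solver such as CTMC and that getTranAvg returns the three metric arrays in this order (both assumptions):
solver = SolverCTMC(model)
QN = solver.getAvgQLen()              # steady-state average queue-lengths
QNt, UNt, TNt = solver.getTranAvg()   # transient averages from the given initial state (return layout assumed)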
The transient average queue-length at node i for class r is stored within QNt in row i and column r.
Note that the above code specifies a maximum time t for the output time series. This can be done using
the timespan solver option. This applies also to average metrics. In the following example, the first model
is solved at steady-state, while the second model reports averages at time t = 1 after initialization
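A sketch of the two configurations, assuming options can be passed as keyword arguments (see the solver options section) and that model1 and model2 are two copies of the network (illustrative names):
solverSS = SolverCTMC(model1, timespan=[float('inf'), float('inf')])   # steady-state solution only
solverTr = SolverCTMC(model2, timespan=[0, 1])                         # averages at time t = 1 after initialization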
• sample: returns a data structure including the time-varying state of a given stateful node, labelled
with information about the events that changed the node state.
• sampleAggr: returns a data structure similar to the one provided by sample, but where the state is aggregated to count the number of jobs in each class at the node.
• sampleSys: similar to the sample function, but returns the state of every stateful node in the
model.
• sampleSysAggr: similar to the sampleAggr function, but returns the aggregated state of every stateful node in the model.
It is worth noting that the JMT solver only supports sampleAggr, since the simulator does not offer a simple way to extract detailed data such as phase-change information in the service process. This information is instead available with the SSA solver.
For example, the following command extracts a sample path consisting of 10 samples for an APH(2)/M/1 queue:
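A sketch of such a call, where queue denotes the APH(2)/M/1 station and sampleAggr is assumed to take the target node and the number of samples:
solver = SolverJMT(model)
sampled = solver.sampleAggr(queue, 10)   # 10 samples of the aggregated state of the queueing station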
In the example, TODO refers to the time since initialization at which node 2 (here the APH(2)/M/1 queueing station) enters the state shown in the second column.
If we repeat the same experiment with the SSA solver and using the sampleSys function, we now
have the full state space of the model, including both the source and the queueing station:
• refreshArrival: this function should be called after updating the inter-arrival distribution at a
Source.
• refreshCapacity: this function should be called after changing buffer capacities, as it updates
the capacity and classcapacity fields.
• refreshChains: this function should be used after changing the routing topology, as it refreshes
the rt, chains, nchains, nchainjobs, and visits fields.
• refreshProcesses: updates the mu, phi, phases, rates and scv fields.
For example, suppose we wish to update the service time distribution for class-1 at node 1 to be exponential
with unit rate. This can be done efficiently as follows:
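A sketch (Exp is assumed to accept the rate as its argument, and queue/class1 are the handles of the first node and class created earlier):
queue.setService(class1, Exp(1.0))   # exponential service with unit rate for class 1
model.refreshProcesses()             # refresh only the mu, phi, phases, rates and scv fields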
The resetNetwork method updates the station indexes and returns the revised list of nodes that compose the topology. To avoid stations changing index, one may simply create ClassSwitch nodes last, before solving the model. The returned node list can be employed as usual to reinstantiate new stations or ClassSwitch nodes. The addLink, setRouting, and possibly the setProbRouting functions will also need to be re-applied as described in the previous sections.
Using the getName function it is then possible to verify that the model now has name myModel1, since the first assignment was by reference. Conversely, modelByCopy.setName did not affect the original model, since modelByCopy is a clone of the original network.
Chapter 5
Network solvers
5.1 Overview
Solvers analyze objects of class Network to return average, transient, distribution, or state probability metrics. A solver can implement one or more methods which, although sharing a similar overall solution strategy, can differ significantly from each other in how this strategy is actually implemented and in whether the final solution is exact or approximate.
A ‘method’ flag can be passed upon invoking a solver to specify the solution method that should be
used. For example, the following invocations are identical:
In what follows, we describe the general characteristics and supported model features for each solver
available in LINE and their methods.
Available solvers
The following Network solvers are available within L INE 2.0.x:
• L INE: This solver uses an algorithm to select the best solution method for the model under considera-
tion, among those offered by the other solvers. Analytical solvers are always preferred to simulation-
based solvers. This solver is implemented by the L INE class.
• CTMC: This is a solver that returns the exact values of the performance metrics by explicit generation
of the continuous-time Markov chain (CTMC) underpinning the model. As the CTMC typically incurs
state-space explosion, this solver can successfully analyze only small models. The CTMC solver is the
only method offered within L INE that can return an exact solution on all Markovian models, all other
solvers are either approximate or are simulators. This solver is implemented by the SolverCTMC
class.
• FLUID: This solver analyzes the model by means of an approximate fluid model, leveraging a rep-
resentation of the queueing network as a system of ordinary differential equations (ODEs). The fluid
model is approximate, but if the servers are all PS or INF, it can be shown to become exact in the limit
where the number of users and the number of servers in each node grow to infinity [34]. This solver
is implemented by the SolverFluid class.
• JMT: This is a solver that uses a model-to-model transformation to export the L INE representation into JMT simulation (JSIM) or analytical (JMVA) models [2]. The JSIM simulation solver can analyze
also non-Markovian models, in particular those involving deterministic or Pareto distributions, or
empirical traces. This solver is implemented by the SolverJMT class.
• MAM: This is a matrix-analytic method solver, which relies on quasi-birth death (QBD) processes to
analyze open queueing systems. This solver is implemented by the SolverMAM class.
• MVA: This is a solver based on approximate and exact mean-value analysis. This solver is typically the
fastest and offers very good accuracy in a number of situations, in particular in models where stations have a single server. This solver is implemented by the SolverMVA class.
• NC: This solver uses a combination of methods based on the normalizing constant of state probability
to solve a model. The underpinning algorithms are particularly useful to compute marginal and joint
state probabilities in queueing network models. This solver is implemented by the SolverNC class.
• SSA: This is a discrete-event simulator based on the CTMC representation of the model. Contrary to
the JMT simulator, which has online estimators for all the performance metrics, SSA estimates only
the probability distribution of the system states, indirectly deriving the metrics after the simulation is
completed. Moreover, the SSA execution can be more efficiently parallelized on multi-core machines. In addition, it is possible to retrieve the evolution over time of each node state, including quantities
that are not loggable in JMT, e.g., the active phase of a service or arrival distribution. This solver is
implemented by the SolverSSA class.
Note that the LINE.load notation can also be used to instantiate a custom solver pre-configured with the
specified method. For example
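One plausible form of this call (the exact LINE.load signature is an assumption):
solver = LINE.load(model, 'ctmc')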
runs the CTMC solver with default options. Solver-specific methods can be specified by appending their
name to the method option, e.g. this command creates the CTMC solver with gpu method enabled:
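Again a sketch; the dotted method-appending syntax is inferred from the surrounding text:
solver = LINE.load(model, 'ctmc.gpu')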
Table 5.1 – Solution methods for Network solvers. Continued from previous page
Solver Method Description Refs.
MVA mva Alias for the mva.amva method [38], [7]
MVA aba.upper Asymptotic bound analysis (upper bounds) [5, §9.4]
MVA aba.lower Asymptotic bound analysis (lower bounds) [5, §9.4]
MVA bjb.upper Balanced job bounds (upper bounds) [12, Table 3]
MVA bjb.lower Balanced job bounds (lower bounds) [12, Table 3]
MVA gb.upper Geometric square-root bounds (upper bounds) [12]
MVA gb.lower Geometric square-root bounds (lower bounds) [12]
MVA pb.upper Proportional bounds (upper bounds) [12, Table 3]
MVA pb.lower Proportional bounds (lower bounds) [12, Table 3]
MVA sb.upper Simple bounds (upper bounds, Thm. 3.2, n = 3) [27, Table 3]
MVA sb.lower Simple bounds (lower bounds, Eq. 1.6) [27, Table 3]
MVA gig1.allen Allen-Cunneen formula - GI/G/1 [5, §6.3.4]
MVA gig1.heyman Heyman formula - GI/G/1 –
MVA gig1.kingman Kingman upper bound - GI/G/1 [5, §6.3.6]
MVA gig1.klb Kramer-Langenbach-Belz formula - GI/G/1 [5, §6.3.4]
MVA gig1.kobayashi Kobayashi diffusion approximation - GI/G/1 [5, §10.1.1]
MVA gig1.marchal Marchal formula - GI/G/1 [5, §10.1.3]
MVA gigk Kingman approximation - GI/G/k
MVA mg1 Pollaczek–Khinchine formula - M/G/1 [5, §3.3.1]
MVA mm1 Exact formula - M/M/1 [5, §6.2.1]
MVA mmk Exact formula - M/M/k (Erlang-C)
NC default Alias for the adaptive method –
NC adaptive Automated choice of deterministic method –
NC exact Automated choice of exact solution method. –
NC ca Multiclass convolution algorithm (exact) –
NC comom Class-oriented method of moments for homogeneous models (exact) [8]
NC cub Grundmann-Moeller cubature rules [9]
NC mva Product of throughputs on MVA lattice (exact) [37, Eq. (47)]
NC imci Improved Monte Carlo integration sampler [44]
NC kt Knessl-Tier asymptotic expansion [30]
NC le Logistic asymptotic expansion [9]
NC ls Logistic sampling [9]
NC nr.logit Norlund-Rice integral with logit transformation [11]
NC nr.probit Norlund-Rice integral with probit transformation [11]
NC rd Reduction heuristic [11]
NC sampling Automated choice of sampling method –
SSA default Alias for the serial method –
SSA serial CTMC stochastic simulation on a single core [26]
SSA para Parallel simulations (independent replicas) –
5.2.1 L INE
The L INE class, also callable with the alias SolverAuto, provides interfaces to the core solution functions
(e.g., getAvg, ...) that dynamically bind to one of the other solvers implemented in L INE (CTMC, NC, ...).
It is often difficult to identify the best solver without some performance results on the model, for example
to determine if it operates in light, moderate, or heavy-load regime.
Therefore, heuristics are used to identify a solver based on structural properties of the model, such as the scheduling strategies used at the stations, as well as the number of jobs, chains, and classes.
Such heuristics, though, are independent of the core function called, thus it is possible that the optimal solver
does not support the specific function called (e.g., getTranAvg). In such cases the L INE solver determines
what other solvers would be feasible and prioritizes them in execution-time order, with the fastest one on average having the highest priority. Eventually, the solver will always be able to identify a solution strategy, at least through simulation-based solvers such as JMT or SSA.
5.2.2 CTMC
The SolverCTMC class solves the model by first generating the infinitesimal generator of the Network
and then calling an appropriate solver. Steady-state analysis is carried out by solving the global balance
equations defined by the infinitesimal generator. If the keep option is set to true, the solver will save the
infinitesimal generator in a temporary file and its location will be shown to the user.
Transient analysis is carried out by numerically solving Kolmogorov’s forward equations using MAT-
LAB’s ODE solvers. The range of integration is controlled by the timespan option. The ODE solver
choice is the same as for SolverFluid.
The CTMC solver heuristically limits the solution to models with no more than 6000 states. The force
option needs to be set to true to bypass this control. In models with infinite states, such as networks with
open classes, the cutoff option should be used to reduce the CTMC to a finite process. If specified as a
scalar value, cutoff is the maximum number of jobs that a class can place at an arbitrary station. More
generally, a matrix assignment of cutoff indicates to L INE that cutoff has in row i and column r the
maximum number of jobs of class r that can be placed at station i.
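A sketch of both forms, assuming options can be passed as keyword arguments (see the options section below):
solver = SolverCTMC(model, cutoff=3)                  # at most 3 jobs per class at any station
solver = SolverCTMC(model, cutoff=[[3, 1], [3, 1]])   # cutoff[i][r]: max class-r jobs at station i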
Details on the additional configuration options of the CTMC solver are given in the next table.
5.2.3 FLUID
This solver is based on the system of fluid ordinary differential equations for INF-PS queueing networks
presented in [35]. The latter is based on Kurtz’s mean-field approximation theory. The fluid ODEs are nor-
mally solved with a Java port of the LSODA algorithm for stiff and non-stiff ordinary differential equations.
More details about the port are available at: https://github.com/imperial-qore/lsoda-java.
ODE variables corresponding to an infinite number of jobs, as in the job pool of a source station, or to
jobs in a disabled class are not included in the solution vector. These rules apply also to the options.init_sol vector.
The solution of models with FCFS stations maps these stations into corresponding PS stations where
the service rates across classes are set identical to each other with a service distribution given by a mixture
of the service processes of the service classes. The mixture weights are determined iteratively by solving
a sequence of PS models until convergence. Upon initializing FCFS queues, jobs in the buffer are all
initialized in the first phase of the service.
5.2.4 JMT
The class is a wrapper for JMT and consists of a model-to-model transformation from the Network data structure into JMT's input XML formats (either .jsimg or .jmva) and a corresponding parser for JMT's results. Upon first invocation, the JMT JAR archive will be searched for in the MATLAB path and, if unavailable, automatically downloaded.
This solver offers two main methods. The default method is the JSIM solver (’jsim’ method), which
runs JMT’s discrete-event simulator. The alternative method is the JMVA analytical solver (’jmva’
method), which is applicable only to queueing network models that admit a product-form solution. This can be verified by calling model.hasProductFormSolution prior to running the JMVA solver.
In the transformation to JSIM, artificial nodes will be automatically added to the routing table to repre-
sent class-switching nodes used in the simulator to specify the switching rules. One such class-switching
node is defined for every ordered pair of stations (i, j) such that jobs change class in transit from i to j.
5.2.5 MAM
This is a basic solver for some Markovian open queueing systems that can be analyzed using matrix analytic
methods. The core solver is based on the BuTools library for matrix-analytic methods [29]. The solution
of open queueing networks is based on traffic decomposition methods that compute the arrival process at
each queue resulting from the superposition of multiple source streams.
5.2.6 MVA
The solver offers approximate mean value analysis (AMVA) (options.method=’default’), but also
exact MVA algorithms (options.method=’exact’). The default AMVA solver is based on Lin-
earizer [15], unless there are two or fewer jobs in total within the closed classes, in which case the solver runs the
Bard-Schweitzer algorithm [41]. Extended queueing models are handled as follows:
• Non-exponential service times in FCFS nodes are handled only in the single-server case via the
method selected in the options.config.highvar setting. By default high variance is ignored, as the FCFS solver tends to produce good results in closed models even without specialized corrections. It is alternatively possible to handle high variance either using the Diffusion-M/G/k interpolation from [10], cast with weights ai = bi = 10^-8, or using the high-variance MVA (HV-MVA) corrections proposed in [6, 36]. The multi-server extension is ongoing; we point to the NC solver for
a version already available.
• Multi-servers are dealt with using the methods listed in Table 5.3 for the options.config.multiserver
option. These are coupled with a modification of the Rolia-Sevcik correction [39], where in light-load
the Rolia-Sevcik correction is treated as if there was a single server.
• Non-preemptive priorities are dealt with using the methods listed in Table 5.3 for the configuration option options.config.np_priority. The solver features in particular the AMVA-CL and shadow-server methods [21].
• DPS queues are analyzed with a standard method similar to the biased processor sharing approxima-
tion reported in [32, §11.4]. Here, an arriving job of class r sees a queue-length in class s ̸= r scaled
by the correction factor ws /wr , where ws is the weight of class s.
• Limited load-dependence (intended here as other than multi-server) and class-dependence are handled
through the correction factors proposed in [13]. If a station is both limited load-dependent and multi-
server, then if the softmin method is chosen the solver will suitably combine the softmin term and
the limited load-dependent correcting factors. Moreover, iterative queue-length corrections such as
those applied by the AQL and Linearizer methods are also applied to these terms.
• Fork-join networks are assumed to feature a directed acyclic graph (DAG) in-between forks and joins.
They are analyzed by iteratively transforming the sibling tasks into jobs belonging to independent
classes, using the algorithm specified in options.config.fork_join. If a fork has fan-out f (i.e., the fork out-degree), in the implementation of the Heidelberger-Trivedi [28] method, one artificial open class is created for each of the f − 1 sibling tasks, while also retaining a task in the original
class. The residence times along a branch are then treated as exponential random variables and their
maximum, corresponding to the response time of the fork-join section, is computed using specialized
results for this distribution. L INE supports this method, but uses as default a custom variant in which the original and artificial classes can take any of the outgoing branches with probability 1/f. While the latter can result in states that do not exist in the original model, since two sibling tasks may take the same branch, it is correct in expectation and does not treat the artificial classes differently from the original class, which can be beneficial when the original class is closed and thus differs
significantly from an open artificial class.
5.2.7 NC
The SolverNC class implements a family of solution algorithms based on the normalizing constant of
state probability of product-form queueing networks. Contrary to the other solvers, this method typically maps the problem to certain multidimensional integrals, allowing the use of numerical methods such as Monte Carlo sampling and asymptotic expansions in their approximation.
5.2.8 SSA
The SolverSSA class is a basic stochastic simulator for continuous-time Markov chains. It reuses some
of the methods that underpin SolverCTMC to generate the network state space and subsequently simulates
the state dynamics by probabilistically choosing one among the possible events that can occur in the system, according to the state spaces of each node in the network. For efficiency reasons, states are tracked at the level of individual stations, and hashed. The state space is not generated upfront, but rather built incrementally during the
simulation, starting from the initial state. If the initialization of a station generates multiple possible initial
states, SSA initializes the model using the first state found. The list of initial states for each station can be
obtained using the getInitState function of the Network class.
The SSA solver offers two methods: ’serial’ and ’para’ (default). The serial method runs on a single core, while the parallel method runs on multiple cores.
Every LINE solver implements support for checking whether it supports all the language features used in a given model.
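A hedged sketch of such a check (the supports method name and its static form are assumptions):
if SolverFluid.supports(model):                  # check language-feature support before solving
    AvgTable = SolverFluid(model).getAvgTable()
else:
    AvgTable = SolverJMT(model).getAvgTable()    # fall back to simulation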
• getAvgChain: returns the mean queue-length, utilization, mean response time (for one visit), and
throughput for every station and chain.
• getAvgSys: returns the system response time and system throughput, as seen from the reference node, by chain.
• getCdfRespT: returns the distribution of response times (for one visit) for the stations at steady-
state.
• getAvgNode: behaves similarly to getAvg, but returns performance metrics for each node and
class. For example, throughputs at the sinks can be obtained with this method.
• getProbAggr: returns marginal state probabilities for jobs of different classes at a given station.
• getProbSysAggr: returns joint probabilities for jobs of different classes at all stations.
• getProbNormConstAggr: returns the normalizing constant of the state probabilities for the model.
• getTranAvg: returns transient mean queue length, utilization and throughput for every station and
chain from a given initial state.
• sample: returns the transient marginal state for a station from a given initial state.
• sampleAggr: returns the transient marginal state for jobs of different classes at a given station from
a given initial state.
• sampleSys: returns the transient marginal system state for a station from a given initial state.
• sampleSysAggr: returns the transient marginal system state for jobs of different classes at a given
station from a given initial state.
• non-preemptive (SchedStrategyType.NP)
The table primarily refers to the invocation of the getAvg methods. Specialized methods, such as transient or
response time distribution analysis, may be available only for a subset of the scheduling strategies supported
by a solver.
This can be specified as an argument to the constructor of the solver. For example, the following two
constructor invocations are identical
Modifiers to the default options can either be specified directly in the options data structure, or alterna-
tively be specified as argument pairs to the constructor, i.e., the following two invocations are equivalent
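A sketch of the two equivalent forms (the defaultOptions helper and the keyword form are assumptions):
options = SolverMVA.defaultOptions()
options.verbose = 0
solver = SolverMVA(model, options)
# equivalently, as argument pairs to the constructor
solver = SolverMVA(model, verbose=0)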
• cache (logical) if set to true, after the first invocation the solver will return the same result upon subsequent calls, without solving the model again. This option is true by default. Caching can be bypassed using the refresh methods (see Section 4.6).
• config (struct) this is a data structure to pass solver-specific configuration options to customize the execution of particular methods.
• cutoff (integer ≥ 1) requires the solver to ignore states where stations have more than the specified number of jobs. This is a mandatory option to analyze open classes using the CTMC solver.
• force (logical) requires the solver to proceed with analyzing the model. This bypasses checks
and therefore can result in the solver either failing or requiring an excessive amount of resources from
the system.
• iter_max (integer ≥ 1) controls the maximum number of iterations that a solver can use, where applicable. If iter_max = n, this option forces the FLUID solver to compute the ODEs over the timespan t ∈ [0, 10n/µmin ], where µmin is the slowest service rate in the model. For the MVA solver this option instead regulates the number of successive substitutions allowed in the fixed-point iteration.
• iter_tol (double) controls the numerical tolerance used to assess convergence of iterative methods. In the FLUID solver this option regulates both the absolute and relative tolerance of the ODE solver.
• init_sol (solver dependent) re-initializes iterative solvers with the given configuration of the solution variables. In the case of MVA, this is a matrix where element (i, j) is the mean queue-length at station i in class j. In the case of FLUID, this is a model-dependent vector with the values of all the variables used within the ODE system that underpins the fluid approximation.
• keep (logical) determines if the model-to-model transformations store on file their intermediate
outputs. In particular, if verbose≥ 1 then the location of the .jsimg models sent to JMT will be
printed on screen.
• method (string) configures the internal algorithm used to solve the model.
• samples (integer ≥ 1) controls the number of samples collected for each performance index by simulation-based solvers. JMT requires a minimum of 5·10^3 samples.
• seed (integer ≥ 1) controls the seed used by the pseudo-random number generators. For example,
simulation-based solvers will give identical results across invocations only if called with the same
seed.
• timespan (real interval) requires the transient solver to produce a solution in the specified temporal range. If the value is set to [Inf, Inf] the solver will only return a steady-state solution. In the case of the FLUID solver and in simulation, [Inf, Inf] has the same computational cost as [0, Inf], therefore the latter is used as default.
• tol default numerical tolerance for all uses other than the ones where iter tol is used.
• verbose controls the verbosity level of the solver. Supported levels are 0 for silent, 1 for standard
verbosity, 2 for debugging.
Table 5.9: Default values of the L INE solver options and their default assignments
Solver default
Option MVA CTMC FLUID JMT MAM NC SSA
cache true true true true true true true
config
cutoff (no default)
force false false false false false false false
keep false
init_sol [] []
iter_max 10^3 10
iter_tol 10^-6 10^-4 10^-4
method ’default’ ’default’ ’default’ ’default’ ’default’ ’default’ ’default’
lang ’matlab’ ’matlab’ ’matlab’ ’matlab’ ’matlab’ ’matlab’ ’matlab’
samples 10^4 10^4
seed rand rand rand rand rand rand rand
stiff true
timespan [Inf,Inf] [0,Inf] [0,Inf] [Inf,Inf] [0,Inf]
tol 10^-4 10^-4
verbose 1 1 1 1 1 1 1
5.4 Solver maintenance
• To install a new release of JMT, it is necessary to delete (or overwrite) the JMT.jar file under the ’SolverJMT’ folder. This forces L INE to download the latest version of the JMT executable.
• To remove temporary by-products of the JMT solver it is recommended to periodically run the
jmtCleanTempDir script. This is more important when using the ’keep’ option, which stores
on disk the temporary .jsimg and .jsimw models sent to JMT.
Chapter 6
Layered network models
In this chapter, we present the definition of the LayeredNetwork class, which encodes the support in
L INE for a class of generalized layered stochastic networks. In their basic form, these models are called
layered queueing networks (LQNs) and differ from regular queueing networks as servers, in order to process
jobs, can issue synchronous and asynchronous calls among each other. We point to [23] and to the LQNS
user manual for an introduction [24]. Contrary to the original LQNs, layered networks in L INE can also
include non-queueing servers, such as caches, hence they may be conceptualized as more general layered
stochastic networks.
The topology of call dependencies in a layered network makes it possible to partition the model into
a set of layers, each consisting of a subset of the servers. Each of these layers is then solved in isolation,
updating its parameters and performance metrics with an iterative procedure until the layer solutions jointly converge to a consistent solution.
forwarding. At present, L INE supports only the first two kinds of activities. Synchronous calls are requests
that block the sender until a reply is received, while asynchronous calls are non-blocking and the sender
execution can continue after issuing the call. Calls can be repeated either deterministically or stochastically, meaning in the latter case that the number of calls issued is a random variable, e.g., geometrically distributed.
Contrary to ordinary layered queueing networks, a layered network in L INE can also feature cache tasks,
item entries, and cache-access precedence relations.
• Cache tasks have the basic properties of tasks, but add three specific properties for caching: the total
number of items, the cache capacity and the cache replacement policy. Cached items can be either
contents or services. Cache capacity indicates the storage constraints of the cache.
• An item-entry provides instead access to a group of entries of a cache. Item-entries have the basic properties of entries, but add the property of the popularity of the items they give access to.
• A precedence relationship called cache-access is defined for the cache hit and miss activities under
each item-entry. That is, it is possible to proceed to a different activity depending on whether the
cache access produced a cache hit or cache miss. For example, a cache miss can produce a call to a
remote entry to retrieve the missing content.
Note that the above extensions are not queueing-based and this explains why these models are referred to
in L INE as layered networks and not as layered queueing networks. Similar to the latter, the analysis of a layered network uses a decomposition of the model into a set of submodels, each being a Network object, which are then iteratively analyzed using different solution methods.
We now proceed to instantiate the static topology of processors, tasks and entries:
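A partial sketch of the running example, restricted to P1, P2, T1, T2, E1, and E2; the constructor signatures (name, multiplicity, scheduling strategy) are assumptions:
model = LayeredNetwork('myLayeredModel')
# Processors (hosts)
P1 = Processor(model, 'P1', 1, SchedStrategy.PS)
P2 = Processor(model, 'P2', 1, SchedStrategy.PS)
# Tasks
T1 = Task(model, 'T1', 5, SchedStrategy.FCFS).on(P1)              # multiplicity 5
T2 = Task(model, 'T2', float('inf'), SchedStrategy.INF).on(P2)    # infinite-server task
# Entries
E1 = Entry(model, 'E1').on(T1)
E2 = Entry(model, 'E2').on(T2)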
An equivalent way to specify the above example is to use the Host class instead of the Processor class, with identical parameters.
In the above code, the on method specifies the associations between the elements, e.g., task T1 runs on
processor P1, and accepts calls to entry E1. Furthermore, the multiplicity of T1 is 5, meaning that up to 5
calls can be simultaneously served by this element (i.e., 5 is the multiplicity of servers in the underpinning
queueing system for T1).
Both processors and tasks can be associated to the standard L INE scheduling strategies. For instance,
T2 will process incoming requests in parallel as an infinite-server node, since we selected the SchedStrategy.INF scheduling policy. An exception is that SchedStrategy.REF should be used to denote the reference task (e.g., a node representing the clients of the model), which has a similar meaning
to the reference node in the Network object.
• Every entry needs to specify an initial activity where the execution of the entry starts (the activity
is said to be “bound to the entry”) and a replying activity, which upon completion terminates the
execution of the entry.
For example, in our running example, we may now associate an activity to each entry as follows:
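A sketch consistent with the description below; the rate of A1 (here 1.0) is illustrative, and the chained on/boundTo/synchCall/repliesTo calls are assumptions:
A1 = Activity(model, 'A1', Exp(1.0)).on(T1).boundTo(E1).synchCall(E2, 3.5)
A2 = Activity(model, 'A2', Exp(2.0)).on(T2).boundTo(E2).repliesTo(E2)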
Here, A1 is a task activity for T1, acts as initial activity for E1, consumes an exponentially distributed time on
the processor underpinning T1, and requires on average 3.5 synchronous calls to E2 to complete. Each call
to entry E2 is served by the activity A2, with a demand on the processor hosting T2 given by an exponential
distribution with rate λ = 2.0.
Activity graphs
Often, it is useful to structure the sequence of activities carried out by an entry in a graph. Activity graphs
can be characterized by precedence relationships of the following kinds:
• sequence: two activities are executed sequentially, one after the other. This is implemented through
the ActivityPrecedence.Serial construct.
• and-fork: a serial execution is forked into concurrent activities. This can be materialized using the
ActivityPrecedence.AndFork construct.
• or-fork: the server chooses probabilistically which activity to execute next among a set of alternatives.
This is implemented in ActivityPrecedence.OrFork.
• and-join: concurrent activities are joined into a single serial execution. This is implemented in
ActivityPrecedence.AndJoin.
• or-join: merge point for alternative activities that may execute in parallel after an or-fork. This is implemented in ActivityPrecedence.OrJoin.
• cache-access: split point for cache hit/cache miss results in an activity graph. This is implemented in
ActivityPrecedence.CacheAccess. For usage examples, see example_cacheModel_3 and example_cacheModel_4 in the examples/ folder.
A composite example showing fork/join precedences and loops is given in example_layeredModel_7
in the examples/ folder.
For instance, we may replace in the running example the specification of the activities underpinning a
call to E2 as
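A sketch of such a replacement; the distributions of A20 and A22 are illustrative, and the addPrecedence call is an assumption:
A20 = Activity(model, 'A20', Exp(2.0)).on(T2).boundTo(E2)
A21 = Activity(model, 'A21', Erlang.fitMeanAndOrder(1.0, 2)).on(T2)
A22 = Activity(model, 'A22', Exp(2.0)).on(T2).repliesTo(E2)
T2.addPrecedence(ActivityPrecedence.Serial(A20, A21, A22))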
such that a call to E2 serially executes A20, A21, and A22 prior to replying. Here, A21 is chosen to be an
Erlang distribution with given mean (1.0) and number of phases (2).
An example of the result is shown in the next figure. The figure shows two processors (P1 and P2), two
tasks (T1 and T2), and three entries (E1, E2, and E3) with their associated activities. Dependencies and calls are both shown as directed arcs, with the edge weight on call arcs corresponding to the average number of calls to the target entry. For example, A1 calls E3 on average 2.0 times. In this figure, by clicking on a node, MATLAB will display, using a data tip, some relevant properties of the node, such as its scheduling policy or multiplicity.
When invoked as
the plot is also accompanied by a task graph that illustrates the client-server dependencies between the
layers.
Lastly, the jsimgView and jsimwView methods can be used to visualize in JMT each layer. This can
be done by first calling the getLayers method to obtain a cell array consisting of the Network objects,
each one corresponding to a layer, and then invoking the jsimgView and jsimwView methods on the
desired layer. This is discussed in more detail in the next section.
6.3 Internals
6.3.1 Representation of the model structure
It is possible to access the internal representation of a LayeredNetwork model in a similar way as for
Network objects, i.e.:
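For example, mirroring the getStruct call used for Network objects:
lqn = model.getStruct()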
The returned lqn structure, of class LayeredNetworkStruct, contains all the information about the
specified model. It relies on relative and absolute indexing for the elements of the LayeredNetwork.
• A relative index is a number between 1 and the number of similar elements in the model, e.g., for a
model with 3 tasks, the relative index t of a task would be a number in [1, 3].
• An absolute index is a number between 1 and the total number of elements (of any kind, except calls)
in the model, e.g., for a model with 2 hosts, 3 tasks, 5 entries, and 8 activities, the total number of
elements is nidx= 18 and last activity a may have an absolute index aidx= 18 and a relative index
a= 8.
• The difference between the relative and the absolute index of an element is referred to as shift, e.g., in
the previous example ashift= 18 − 8 = 10.
• Absolute and relative indexing for calls and hosts are identical: the call index cidx ranges in [1, ncalls] and the host index hidx ranges in [1, nhosts].
Using the above convention, the internal representation of the model is described in Table 6.3.1. As in the
examples above, relative and absolute indexes are differentiated by using the suffix idx in the latter (e.g., a
vs. aidx). This indexing style is used throughout the codebase as well.
6.4 Solvers
L INE offers two solvers for the solution of a LayeredNetwork model, consisting of its own native solver (LN) and a wrapper (LQNS) for the LQNS solver [24]. The latter requires a distribution of LQNS to be
available on the operating system command line.
The solution methods available for LayeredNetwork models are similar to those for Network ob-
jects. For example, the getAvgTable can be used to obtain a full set of mean performance indexes for
the model, e.g.,
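For example (SolverLN as the native solver class name is an assumption, by analogy with SolverLQNS):
AvgTable = SolverLN(model).getAvgTable()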
Note that in the above table, some performance indexes are marked as NaN because they are not defined
in a layered queueing network. Further, compared to the getAvgTable method in Network objects, LayeredNetwork objects do not have an explicit differentiation between stations and classes, since in a layer a task may act either as a server station or as a client class.
The main challenge in solving layered queueing networks through analytical methods is that the param-
eterization of the artificial delays depends on the steady-state performance of the other layers, thus causing
a cyclic dependence between input parameters and solutions across the layers. Depending on the solver in use, such an issue can be addressed in different ways, but in general a decomposition into layers will remain
parametric on a set of response times, throughputs and utilizations.
This issue can be resolved through solvers that, starting from an initial guess, cyclically analyze the
layers and update their artificial delays on the basis of the results of these analyses. Both LN and LQNS
implement this solution method. Normally, after a number of iterations the model converges to a steady-
state solution, where the parameterization of the artificial delays does not change after additional iterations.
6.4.1 LQNS
The LQNS wrapper operates by first transforming the specification into a valid LQNS XML file. Subse-
quently, LQNS calls the solver and parses the results from disks in order to present them to the user in the
appropriate L INE tables or vectors. The options.method can be used to configure the LQNS execution
as follows:
• options.method=’exact’: the solver will execute the standard LQNS analytical solver with
the exact MVA method.
• options.method=’srvnexact’: the solver will execute the standard LQNS analytical solver
with SRVN layering and the exact MVA method.
• options.method=’lqsim’: LQSIM simulator, with simulation length specified via the samples
field (i.e., with parameter -A options.samples, 0.95).
Upon invocation, the lqns or lqsim commands will be searched for in the system path. If they are unavailable, the execution of SolverLQNS will be interrupted.
6.4.2 QNS
LINE also includes SolverQNS, a dedicated wrapper Network solver for the qnsolver utility distributed with LQNS. This allows users to evaluate product-form models using the MVA algorithms implemented within LQNS. The available options specify the multiserver handling algorithm.
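A usage sketch follows; it assumes that SolverQNS is invoked like the other Network solver wrappers and that the classes are imported from the line_solver package, while the model itself is a simple M/M/1 product-form network.

from line_solver import Network, Source, Queue, Sink, OpenClass, Exp, SchedStrategy, SolverQNS

# Build a small product-form network (an M/M/1 queue)
model = Network('M/M/1')
source = Source(model, 'Source')
queue = Queue(model, 'Queue', SchedStrategy.FCFS)
sink = Sink(model, 'Sink')
oclass = OpenClass(model, 'Class1')
source.setArrival(oclass, Exp(1.0))  # arrival rate 1
queue.setService(oclass, Exp(2.0))   # service rate 2
model.link(Network.serialRouting(source, queue, sink))

# Evaluate it with the qnsolver wrapper (requires qnsolver on the command line)
print(SolverQNS(model).getAvgTable())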
6.4.3 LN
The native LN solver iteratively applies the layer updates until convergence of the steady-state measures.
Since updates are parametric on the solution of each layer, LN can apply any of the Network solvers
described in the solvers chapter to the analysis of individual layers, as illustrated in the following example
for the MVA solver
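A sketch of this usage is shown below; the factory-style second argument (a function that returns a Network solver for each layer) mirrors the MATLAB API and is an assumption for the Python version, and model denotes a previously defined LayeredNetwork.

from line_solver import SolverLN, SolverMVA

# Solve every layer with the MVA solver
solver = SolverLN(model, lambda layer: SolverMVA(layer))
avg_table = solver.getAvgTable()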
Options parameters may also be omitted. The LN method converges when the maximum relative change of the mean response times across layers since the last iteration is less than options.iter_tol.
Methods supported by the LN solver include:
• options.method=’default’: default recursive solution based on mean values
6.5 Model import and export
In both examples, filename is a string including both the file name and its path.
Finally, we point out that it is possible to export an LQN in the legacy SRVN file format² by means of the writeSRVN(filename) function.
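For instance, a usage sketch with an illustrative file name:

model.writeSRVN('myLayeredModel.srvn')  # export the LayeredNetwork 'model' to the legacy SRVN format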
¹ https://raw.githubusercontent.com/layeredqueuing/V5/master/xml/lqn.xsd
² http://www.sce.carleton.ca/rads/lqns/lqn-documentation/format.pdf
Chapter 7
Random environments
Systems modeled with LINE can be described as operating in an environment whose state affects the system dynamics. To distinguish the states of the environment from those of the system within it, we shall refer to the former as the environment stages. In particular, LINE 2.0.0 supports the definition of a class of random environments subject to three assumptions:
• The stage of the environment evolves independently of the state of the system.
• The dynamics of the environment stage can be described by a continuous-time Markov chain.
The above definitions are particularly appropriate to describe systems specified by input parameters (e.g., service rates, scheduling weights, etc.) that change with the environment stage. For example, an environment with two stages, say normal load and peak load, may differ in the number of servers that are available at a queueing station, i.e., the system controller may add more servers during peak load. Upon a stage change in the environment, the model parameters will instantaneously change, and the system state reached during the previous stage will be used to initialize the system in the new stage.
Although in a number of cases the system performance may be similar to a weighted combination of the average performance in each stage, this is not true in general, especially if the system dynamics (i.e., the rate at which jobs arrive and get served) and the environment dynamics (i.e., the rate at which the environment changes its active stage) have a similar magnitude [14].
7.1 Environment object definition
In LINE, the transitions between environment stages are described by a Markov renewal process (MRP) in which the stage transition times follow phase-type distributions and are therefore not restricted to be exponential. Although the time spent in each state of the MRP is not exponential, the MRP with phase-type transitions can be easily transformed into an equivalent continuous-time Markov chain (CTMC) to enable analysis, a task that LINE performs automatically.
To specify an environment, we first create an Env object with the environment name and then add its stages. Each stage is specified by a stage name, an arbitrary string to classify the stage (here taken from a taxonomy in the Semantics class), followed by a Network object describing the system model conditional on the environment being in the corresponding stage.
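A minimal sketch of such a definition follows; the addStage method name, its argument order, and the stage-type strings are assumptions inferred from the description above, while online_model and offline_model stand for two previously defined Network objects.

from line_solver import Env  # assumed importable from the line_solver package

env = Env('MyEnv')                              # environment name
env.addStage('online', 'UP', online_model)      # stage name, classification string, stage model
env.addStage('offline', 'DOWN', offline_model)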
We now specify that the transitions between stages are both exponential, with different rates. Alternatively, a transition may be given a phase-type distribution; for example, assigning an Erlang-2 distribution with unit rate to the transition that remains in the online stage would cause a race condition between two distributions in stage two: the exponential transition back to the offline stage, and the Erlang-2 distributed transition that remains in the online stage. The underpinning CTMC will therefore consider the distribution of the minimum of the exponential and the Erlang-2 distribution in order to decide the next stage transition. State space explosion may occur in the definition of an environment if the user specifies a large number of non-exponential transitions. For example, a race condition among n Erlang-2 distributions translates, at the level of the CTMC, into a state space with 2^n states. In such situations, it is recommended to replace some of the distributions with exponential ones.
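A sketch of how these transitions might be declared is shown below; the addTransition method name and its signature are assumptions, the rates are illustrative, and the Erlang.fitMeanAndOrder helper (mean 1, two phases, i.e., an Erlang-2 with unit overall rate) follows the MATLAB API and is likewise an assumption here.

from line_solver import Exp, Erlang  # distribution classes

env.addTransition('offline', 'online', Exp(2.0))                     # exponential transition, illustrative rate
env.addTransition('online', 'offline', Exp(0.5))                     # exponential transition back to offline
env.addTransition('online', 'online', Erlang.fitMeanAndOrder(1, 2))  # Erlang-2 alternative that remains online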
To summarize the properties of the environment defined above we may use the getStageTable
method
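For example, a usage sketch with env the Env object defined above:

print(env.getStageTable())  # one row per stage: identifier, equilibrium probability, holding-time representation, sub-model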
In the table, the State column gives a numerical identifier for each stage, followed by its stage probability at equilibrium, a Markovian representation of the time spent in it before a transition, and a pointer to the sub-model associated with that stage.
Reusing the state reached in the previous stage may not be possible in some models, for example when a station is removed from the model. In that case, one can define a custom reset policy when instantiating the corresponding transitions.
7.2 Solvers
The steady-state analysis of a system in a random environment is carried out in LINE using the blending method [14], which is an iterative algorithm leveraging the transient solution of the model. In essence, the method looks at the average state of the system at the instant of each stage transition and, upon restarting the system in the new stage, re-initializes it from this average value. This algorithm is implemented in LINE by the SolverEnv class, which is described next.
7.2.1 ENV
The SolverEnv class applies the blending algorithm by iteratively carrying out a transient analysis of
each system model in each environment stage, and probabilistically weighting the solution to extract the
steady-state behavior of the system.
As in the transient analysis of Network objects, LINE does not supply a method to obtain mean response times, since Little's law does not hold in the transient regime. To obtain the mean queue-length, utilization, and throughput of the system one can call as usual the getAvg method on the SolverEnv object, e.g.,
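A sketch of such a call follows; the constructor arguments are assumptions carried over from the MATLAB API, in which SolverEnv takes the environment and a factory for the per-stage transient solver, and the tuple returned by getAvg is likewise an assumption.

from line_solver import SolverEnv, SolverFluid

envSolver = SolverEnv(env, lambda stageModel: SolverFluid(stageModel))
QN, UN, TN = envSolver.getAvg()  # assumed to return mean queue lengths, utilizations, and throughputs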
Note that as model complexity grows, the number of iterations required by the blending algorithm to converge may grow large. In such cases, the options.iter_max option may be used to bound the maximum analysis time.
Bibliography
[1] S. Balsamo. Product form queueing networks. In Günter Haring, Christoph Lindemann, and Martin
Reiser, editors, Performance Evaluation: Origins and Directions, volume 1769 of Lecture Notes in
Computer Science, pages 377–401. Springer, 2000.
[2] M. Bertoli, G. Casale, and G. Serazzi. The JMT simulator for performance evaluation of non-product-
form queueing networks. In Proc. of the 40th Annual Simulation Symposium (ANSS), pages 3–10,
2007.
[3] D. Bini, B. Meini, S. Steffé, J. F. Pérez, and B. Van Houdt. SMCSolver and Q-MAM: Tools for matrix-analytic methods. SIGMETRICS Performance Evaluation Review, 39(4):46, 2012.
[4] A. Bobbio, A. Horváth, M. Scarpa, and M. Telek. Acyclic discrete phase type distributions: properties and a parameter estimation algorithm. Perform. Eval., 54(1):1–32, 2003.
[5] G. Bolch, S. Greiner, H. de Meer, and K. S. Trivedi. Queueing Networks and Markov Chains. Wiley,
2006.
[6] A. B. Bondi and W. Whitt. The influence of service-time variability in a closed network of queues.
Perform. Eval., 6:219–234, 1986.
[7] S. C. Bruell, G. Balbo, and P. V. Afshari. Mean value analysis of mixed, multiple class BCMP networks
with load dependent service stations. Performance Evaluation, 4:241–260, 1984.
[8] G. Casale. CoMoM: Efficient class-oriented evaluation of multiclass performance models. IEEE Trans.
on Software Engineering, 35(2):162–177, 2009.
[9] G. Casale. Accelerating performance inference over closed systems by asymptotic methods. In Proc.
of ACM SIGMETRICS. ACM Press, 2017.
[10] G. Casale. Integrated performance evaluation of extended queueing network models with LINE. In Proc. of the 2020 Winter Simulation Conference (WSC), pages 2377–2388. IEEE, December 2020.
[11] G. Casale, P.G. Harrison, and O.W. Hong. Facilitating load-dependent queueing analysis through
factorization. Perform. Eval., 2021.
[12] G. Casale, R. R. Muntz, and G. Serazzi. Geometric bounds: A noniterative analysis technique for closed queueing networks. IEEE Trans. Computers, 57(6):780–794, 2008.
[13] G. Casale, J. F. Pérez, and W. Wang. QD-AMVA: Evaluating systems with queue-dependent service
requirements. In Proceedings of IFIP PERFORMANCE, 2015.
[14] G. Casale, M. Tribastone, and P. G. Harrison. Blending randomness in closed queueing network
models. Perform. Eval., 82:15–38, 2014.
[15] K. M. Chandy and D. Neuse. Linearizer: A heuristic algorithm for queuing network models of com-
puting systems. Comm. of the ACM, 25(2):126–134, 1982.
[16] W.-M. Chow. Approximations for large scale closed queueing networks. Perform. Eval., 3(1):1–12, 1983.
[17] A. E. Conway. Fast Approximate Solution of Queueing Networks with Multi-Server Chain-Dependent
FCFS Queues, pages 385–396. Springer US, Boston, MA, 1989.
[18] A. E. Conway and N. D. Georganas. RECAL - A new efficient algorithm for the exact analysis of
multiple-chain closed queueing networks. JACM, 33(4):768–791, 1986.
[19] E. de Souza e Silva and R. R. Muntz. A note on the computational cost of the linearizer algorithm for
queueing networks. IEEE Trans. Computers, 39(6):840–842, 1990.
[20] R.-A. Dobre, Z. Niu, and G. Casale. Approximating fork-join systems via mixed model transformations. In Companion of the 15th ACM/SPEC International Conference on Performance Engineering (ICPE '24 Companion), pages 273–280, New York, NY, USA, 2024. Association for Computing Machinery.
[21] D. L. Eager and J. N. Lipscomb. The AMVA priority approximation. Performance Evaluation,
8(3):173–193, 1988.
[22] G. Franks. Performance Analysis of Distributed Server Systems. PhD thesis, Carleton, 1996.
[23] G. Franks, T. Al-Omari, M. Woodside, O. Das, and S. Derisavi. Enhanced modeling and solution of
layered queueing networks. Software Engineering, IEEE Transactions on, 35(2):148–161, 2009.
[24] G. Franks, P. Maly, C. M. Woodside, D. C. Petriu, A. Hubbard, and M. Mroz. Layered Queueing
Network Solver and Simulator User Manual, 2012.
[25] N. Gast and B. Van Houdt. Transient and steady-state regime of a family of list-based cache replace-
ment algorithms. Queueing Syst, 83(3-4):293–328, 2016.
[26] D. T. Gillespie. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem.,
81(25):2340–2361, 1977.
[27] A. Harel, S. Namn, and J. Sturm. Simple bounds for closed queueing networks. Queueing Systems,
31(1-2):125–135, 1999.
[28] P. Heidelberger and K. Trivedi. Queueing network models for parallel processing with asynchronous
tasks. IEEE Transactions on Computers, 100(11):1099–1109, 1982.
[29] G. Horváth and M. Telek. BuTools 2: A rich toolbox for Markovian performance evaluation. In Proc. of VALUETOOLS, pages 137–142, Brussels, Belgium, 2017. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).
[30] C. Knessl and C. Tier. Asymptotic expansions for large closed queueing networks with multiple job
classes. IEEE Trans. Computers, 41(4):480–488, 1992.
[33] K. T. Marshall. Some relationships between the distributions of waiting time, idle time and interoutput time in the GI/G/1 queue. SIAM Journal on Applied Mathematics, 16(2):324–327, 1968.
[34] J. F. Pérez and G. Casale. Assessing SLA compliance from Palladio component models. In Proceed-
ings of the 2nd MICAS, 2013.
[35] J. F. Pérez and G. Casale. Line: Evaluating software applications in unreliable environments. IEEE
Transactions on Reliability, 66(3):837–853, Sept 2017.
[36] M. Reiser. A queueing network analysis of computer communication networks with window flow
control. Communications, IEEE Transactions on, 27(8):1199–1209, 1979.
[37] M. Reiser. Mean-value analysis and convolution method for queue-dependent servers in closed queue-
ing networks. Perform. Eval., 1:7–18, 1981.
[38] M. Reiser and S. Lavenberg. Mean-value analysis of closed multichain queuing networks. Journal of
the ACM, 27:313–322, 1980.
[39] J. A. Rolia and K. C. Sevcik. The method of layers. IEEE Transactions on Software Engineering,
21(8):689–700, August 1995.
[40] J. Ruuskanen, T. Berner, K.-E. Årzén, and A. Cervin. Improving the mean-field fluid model of pro-
cessor sharing queueing networks for dynamic performance models in cloud computing. Perform.
Evaluation, 151:102231, 2021.
[41] P. J. Schweitzer. Approximate analysis of multiclass closed networks of queues. In Proc. of the Int’l
Conf. on Stoch. Control and Optim., pages 25–29, Amsterdam, 1979.
[42] A. Seidmann, P. J. Schweitzer, and S. Shalev-Oren. Computerized closed queueing network models of flexible manufacturing systems: A comparative evaluation. Large Scale Systems, 12:91–107, 1987.
[43] K. Sevcik. Priority scheduling disciplines in queuing network models of computer systems. In IFIP
Congress, 1977.
[44] W. Wang, G. Casale, and C. A. Sutton. A Bayesian approach to parameter inference in queueing networks. ACM Trans. Model. Comput. Simul., 27(1):2:1–2:26, 2016.
[45] M. Woodside. Tutorial Introduction to Layered Modeling of Software Performance. Carleton Univer-
sity, February 2013.
[46] J. Zahorjan, D. L. Eager, and H. M. Sweillam. Accuracy, speed, and convergence of approximate mean
value analysis. Perform. Eval., 8(4):255–270, 1988.
[47] S. Zhou and M. Woodside. A multiserver approximation for cloud scaling analysis. In Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (ICPE '22), pages 129–136, New York, NY, USA, 2022. Association for Computing Machinery.
Appendix A
Examples
The table below lists the Jupyter notebooks available under the examples folder.
Example                  Problem
example_cacheModel_1     A small cache model with an open arrival process
example_cacheModel_2     A small cache model with a closed job population
example_cacheModel_3     A layered network with a caching layer
example_cacheModel_4     A layered network with a caching layer having a multi-level cache
example_cacheModel_5     A caching model with state-dependent output routing
example_cdfRespT_1       Station response time distribution in a single-class single-job closed network
example_cdfRespT_2       Station response time distribution in a multi-chain closed network
example_cdfRespT_3       Station response time distribution in a multi-chain open network
example_cdfRespT_4       Simulation-based station response time distribution analysis
example_cdfRespT_5       Station response time distribution under increasing job populations
example_closedModel_1    Solving a single-class exponential closed queueing network
example_closedModel_2    Solving a closed queueing network with a multi-class FCFS station
example_closedModel_3    Solving exactly a multi-chain product-form closed queueing network
example_closedModel_4    Local state space generation for a station in a closed network
example_closedModel_5    1-line exact MVA solution of a cyclic network of PS and INF stations
example_closedModel_6    Closed network with round-robin scheduling
example_closedModel_7    Comparison of different scheduling policies that preserve the product-form solution
example_forkJoin_1       A simple single-class open fork-join network
example_forkJoin_2       A multiclass open fork-join network
example_forkJoin_3       A closed model with nested forks and joins
example_forkJoin_4       An open model with a fork but without a join
example_forkJoin_5       A simple single-class closed fork-join network
example_forkJoin_6       Two open fork-join subsystems in tandem
example_forkJoin_7       Two-class fork-join with a class that switches into the other after the fork
example_forkJoin_8       Two fork-join loops within the same chain
example_initState_1      Specifying an initial state and prior in a single-class model
example_initState_2      Specifying an initial state and prior in a multiclass model