Simulation and Modeling. Part 3
Discrete-event System Simulation. J. Banks, J.S. Carson and B.L. Nelson. Prentice Hall International, 1994.
Simulation Modeling and Analysis. A.M. Law and W.D. Kelton. McGraw Hill, 2000.
A Compositional Approach to Performance Modelling (first three chapters). J. Hillston. Cambridge University Press, 1996. On-line at: http://www.doc.ic.ac.uk/ jb/teaching/336/1994hillston-thesis.ps.gz
Probability and Statistics with Reliability, Queuing and Computer Science Applications. K. Trivedi. Wiley, 2001.
Course Overview
This course is about using models to understand performance aspects of real-world systems. A model represents abstractly the dynamical behaviour of a system; it then becomes a tool for reasoning about (the performance of) a system. Models have many uses, e.g.:
- To understand the behaviour of an existing system
- To predict the effect of changes, rewrites or upgrades to a system
- To study new or imaginary systems
There are many application areas, e.g.:
- Computer architecture
- Networks/distributed systems
- Software performance engineering
- Telecommunications
- Manufacturing
- Healthcare
- Transport
- Bioinformatics
- Environmental science/engineering
This course...
We're going to focus on state transition systems, with continuous time and discrete states. The state space will either be finite, or countably infinite.
States have holding times (random variables) and transitions, which correspond to "events", may occur probabilistically.
These models can variously be solved: analytically, numerically, or by simulation. Broadly, these are in decreasing order of efficiency and increasing order of generality.
Syllabus Part I
I.1 Fundamental Laws
I.2 Markov Processes
I.3 Queueing Systems
I.4 Simulation
Discrete-event Simulation
- Random number generation
- Distribution sampling
- Output analysis
Syllabus Part II
II.1 Stochastic Petri Nets
II.2 Stochastic Process Algebras
II.3 Fluid Approximations
[Diagram: a System is abstracted into a model, which is solved to yield Performance Measures:
- Analytical model (closed-form expressions), solved by direct calculation
- Numerical model (linear system of equations), solved by a linear solver
- Simulation model (a program for generating sample paths)]
Transactions arrive randomly at some specified rate. The TP server is capable of servicing transactions at a given service rate. Q: If both the arrival rate and service rate are doubled, what happens to the mean response time?
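A quick sketch of one answer, under the assumption (not stated above) that the TP server behaves like an M/M/1 queue, for which the mean response time is W = 1/(μ − λ); the rates used are illustrative:

```python
# Hedged sketch: treat the TP server as an M/M/1 queue (an assumption --
# the text fixes only the arrival and service rates, not the model).
# For M/M/1 the mean response time is W = 1/(mu - lam).

def mm1_response_time(lam, mu):
    """Mean response time of an M/M/1 queue; requires lam < mu."""
    assert lam < mu, "unstable unless arrival rate < service rate"
    return 1.0 / (mu - lam)

lam, mu = 5.0, 8.0                        # illustrative rates (transactions/s)
w_before = mm1_response_time(lam, mu)     # 1/(8-5) = 1/3 s
w_after = mm1_response_time(2 * lam, 2 * mu)
print(w_before, w_after)                  # doubling both rates halves W
```

Under this model W' = 1/(2μ − 2λ) = W/2, whatever the load, so the mean response time halves.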
Job processing times are highly variable, with squared coefficient of variation

    C_X^2 = VAR(X) / E(X)^2 = 50

Rank the following in order of mean processing time per job:
- Round Robin (preemptive)
- First-Come-First-Served
- Shortest Job First (non-preemptive)
- Shortest Remaining Processing Time (preemptive)
[Diagrams: an arrivals stream feeding a bank of parallel scanners, one marked BROKEN; and a network containing Disk 1 and Disk 2]
Around 0.5 customers pass through the terminal each second and it takes just under 8 seconds on average to scan each passenger. The average delay is 30 minutes(!) Q: How long would it take on average if all 5 scanners were working?
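One way to sketch an answer is to model the checkpoint as an M/M/c queue and apply the Erlang-C formula. The arrival rate (0.5/s) and a mean scan time of 7.9 s ("just under 8 seconds") come from above, but exponential scan times and the M/M/c model itself are assumptions, so this will not reproduce the observed 30-minute delay exactly:

```python
# Erlang-C sketch for the checkpoint, modelled as an M/M/c queue.
# Assumptions: Poisson arrivals at 0.5/s, exponential scan times with
# mean 7.9 s; the real scan-time distribution is unknown.
import math

def erlang_c_wait(lam, s_mean, c):
    """Mean queueing delay (seconds) in an M/M/c queue via Erlang C."""
    a = lam * s_mean                # offered load in Erlangs
    rho = a / c
    assert rho < 1, "unstable: need lam * s_mean < c"
    num = a**c / math.factorial(c) / (1 - rho)
    p_wait = num / (sum(a**k / math.factorial(k) for k in range(c)) + num)
    return p_wait * s_mean / (c * (1 - rho))

lam, s = 0.5, 7.9
print(erlang_c_wait(lam, s, 4) / 60, "min queueing with 4 scanners")
print(erlang_c_wait(lam, s, 5) / 60, "min queueing with 5 scanners")
```

The qualitative point survives the modelling assumptions: with 4 scanners the utilisation is 0.5 × 7.9 / 4 ≈ 0.99, so queueing delay explodes; with all 5 working it drops to about 0.79 and the queueing delay falls to a few seconds.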
On average, each submitted job makes 121 visits to the CPU, has 70 disk 1 accesses and 50 disk 2 accesses The mean service times are 5ms for the CPU, 30ms for disk 1 and 27ms for disk 2 Q: How would the system response time change if we replace the CPU with one twice as fast?
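The figures above can be fed into the Service Demand Law, D_k = V_k × S_k (total demand per job at device k); the sum of the demands is a light-load lower bound on response time. A sketch:

```python
# Service Demand Law sketch: D_k = V_k * S_k, using the figures above.
visits  = {"cpu": 121, "disk1": 70, "disk2": 50}
service = {"cpu": 0.005, "disk1": 0.030, "disk2": 0.027}   # seconds

demand = {k: visits[k] * service[k] for k in visits}       # seconds per job
total = sum(demand.values())
print(demand)          # disk 1 dominates at 2.1 s per job
print(total)           # light-load response time bound: 4.055 s per job

# Doubling CPU speed halves only the CPU demand:
total_fast = total - demand["cpu"] / 2
print(total_fast)      # 3.7525 s: only ~7.5% better; disk 1 is the bottleneck
```

So a CPU twice as fast improves the total demand by only about 7.5%, because disk 1 (2.1 s of demand per job) remains the bottleneck.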
There are three job types, numbered 0, 1 and 2, which occur with probability 0.3, 0.5 and 0.2 respectively. Jobs arrive at the rate of 4/hr and visit the stations (numbered 0 to 4) in the order:

Type 0: 2, 0, 1, 4
Type 1: 3, 0, 2
Type 2: 1, 4, 0, 3, 2

The mean time for each job type at each station, in visit order, is:

Type 0: 0.50, 0.60, 0.85, 0.50
Type 1: 1.10, 0.80, 0.75
Type 2: 1.10, 0.25, 0.70, 0.90, 1.00
Q: If you were to invest in one extra machine, where would you put it?
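A standard way to attack this is bottleneck analysis: compute each station's total offered work per unit time, summing (type arrival rate) × (service time there) over the job types that visit it, and put the extra machine at the busiest station. A sketch (the time unit of the service times is not stated above, but the ranking does not depend on it):

```python
# Bottleneck sketch: total offered work at each station,
# D_k = sum over job types of (type arrival rate) * (service time at k).
arrival_rate = 4.0                     # jobs/hr
type_prob = [0.3, 0.5, 0.2]
routes = [[2, 0, 1, 4], [3, 0, 2], [1, 4, 0, 3, 2]]
times  = [[0.50, 0.60, 0.85, 0.50],
          [1.10, 0.80, 0.75],
          [1.10, 0.25, 0.70, 0.90, 1.00]]

demand = [0.0] * 5
for jt in range(3):
    lam = arrival_rate * type_prob[jt]
    for station, s in zip(routes[jt], times[jt]):
        demand[station] += lam * s

print([round(d, 2) for d in demand])   # [2.88, 1.9, 2.9, 2.92, 0.8]
print(demand.index(max(demand)))       # station 3 carries the most work
```

Note how close stations 3, 2 and 0 are (2.92, 2.90, 2.88), so the answer is sensitive to the data: removing almost any load would shift the bottleneck.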
Let's assume we observe the system for time T, and let the number of arrivals be A and the number of completions be C. From these we can define a number of useful performance measures and establish relationships between them.
Little's Law
Suppose we plot (A − C) over the observation time (T):
The utilisation is U = X S, where X = C/T is the throughput and S is the mean service time per job. Often we work with service rates rather than times, in which case we have U = λ/μ. Importantly, note that U < 1, so we require μ = 1/S > λ. Q: What if λ = μ?
[Figure: the step function (A − C)(t) over (0, T); it jumps up at arrival instants and down at completion instants t1, ..., t11, and the total area under it is I]
The average number of jobs in the system is N = I/T. The average time each job spends in the system (the average waiting time) is W = I/C. Since I/T = (C/T)(I/C), we have Little's Law:

    N = X W

Note: this assumes (A − C) is zero at either end of the observation period, or that C is much larger than the residual populations at either end.

Little's Law can be applied to any system (and subsystem) in equilibrium, e.g.

[Diagram: clients submitting jobs to a subsystem containing a CPU and several subservers; Little's Law can be applied to the whole system or to any subsystem]
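A tiny numeric check of N = XW on a hand-made trace (hypothetical instants, chosen so that all jobs complete within the observation window, as the note requires):

```python
# Little's law sanity check on a made-up trace.
arrivals   = [0.0, 1.0, 1.5, 4.0]   # arrival instants of four jobs
departures = [2.0, 3.0, 5.0, 6.0]   # corresponding completion instants
T = 8.0                             # observation period [0, T]

# I = total job-time accumulated in the system (area under A - C)
I = sum(d - a for a, d in zip(arrivals, departures))
C = len(departures)

N = I / T    # time-average population
W = I / C    # average time in system per job
X = C / T    # throughput

assert abs(N - X * W) < 1e-12       # Little's law: N = X W
print(N, W, X)
```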
The Response Time Law for an interactive system with N users, each with think time Z, is: W = N/X − Z
The Forced Flow Law: X_k = V_k X, where X_k is the throughput of resource k and V_k is the number of visits each job makes to resource k.
Example: two machines share a sequential I/O device, but are otherwise independent. Machine i issues I/O requests at rate r_i, and I/O requests from machine i are serviced at rate s_i, where i ∈ {1, 2}.
[Diagram: the five-state transition system below, with arcs labelled by the rates r1, r2, s1, s2]

1. Both machines executing
2. Machine 1 doing I/O
3. Machine 2 doing I/O
4. Machine 1 doing I/O, machine 2 waiting
5. Machine 2 doing I/O, machine 1 waiting

The exponential distribution is memoryless, in that the future is independent of the past, i.e. if X ~ exp(r), then for all s, t ≥ 0:

    P(X ≤ t + s | X > t) = 1 − P(X > t + s & X > t) / P(X > t)
                         = 1 − P(X > t + s) / P(X > t)
                         = 1 − e^{−r(t+s)} / e^{−rt}
                         = 1 − e^{−rs}
                         = P(X ≤ s)
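The memoryless property is easy to check by Monte Carlo; a sketch with arbitrary illustrative values r = 0.5, t = 2, s = 1:

```python
# Monte Carlo check of memorylessness: P(X <= t+s | X > t) = P(X <= s).
import random

random.seed(42)
r, t, s = 0.5, 2.0, 1.0
samples = [random.expovariate(r) for _ in range(200_000)]

# Conditional probability, estimated from the samples that exceed t:
survivors = [x for x in samples if x > t]
cond = sum(1 for x in survivors if x <= t + s) / len(survivors)

# Unconditional probability P(X <= s):
uncond = sum(1 for x in samples if x <= s) / len(samples)
print(cond, uncond)   # both close to 1 - e^{-rs} = 0.3935...
```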
Reading off the transition rates between these five states gives the generator matrix:

        ( −(r1+r2)    r1         r2         0     0   )
        (  s1        −(r2+s1)    0          r2    0   )
    Q = (  s2         0         −(r1+s2)    0     r1  )
        (  0          0          s1        −s1    0   )
        (  0          s2         0          0    −s2  )
In general there may be several possible transitions out of a state, each with an associated rate. These race each other to be the first to fire. The state holding time is thus the minimum of a set of exponentially distributed random variables; this is itself exponentially distributed:

    If Xi ~ exp(ri), 1 ≤ i ≤ n, then P(min_{1≤i≤n} Xi ≤ x) = 1 − e^{−(Σ_{i=1}^n ri) x}
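This race property can also be checked by Monte Carlo: the minimum of independent exponentials with rates r_i should itself be exponential with rate Σ r_i, hence mean 1/Σ r_i. A sketch with arbitrary rates:

```python
# Monte Carlo: min of independent exponentials with rates r_i is
# exponential with rate sum(r_i); in particular its mean is 1/sum(r_i).
import random, statistics

random.seed(1)
rates = [0.5, 1.0, 2.5]               # illustrative rates; sum = 4.0
mins = [min(random.expovariate(r) for r in rates) for _ in range(100_000)]
print(statistics.mean(mins))          # close to 1/4
```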
Linear Equations
Assume that the MP is irreducible: each state can be reached from every other. S may be finite or may contain (countably) infinitely many states.
Recall: if the time between transitions is exponentially distributed with some (rate) parameter r, then the number of events, N, in a time interval of length t is Poisson distributed:

    P(N = n) = (rt)^n e^{−rt} / n!

Recall also the series expansion:

    e^x = 1 + x + x^2/2! + x^3/3! + ...
A function f(h) is o(h) if lim_{h→0} f(h)/h = 0. Writing δt for a small time increment, the Poisson distribution gives:

    P(N = 0) = e^{−r δt} = 1 − r δt + o(δt)
    P(N = 1) = (r δt) e^{−r δt} = r δt + o(δt)

so the probability of two or more firings in an interval (t, t + δt) is o(δt).

In particular, given several possible event firings from a state i ∈ S (with potentially different rates), e.g. Event 1 taking i → j at rate q_{i,j} and Event 2 taking i → k at rate q_{i,k}:

    P(Event 1 (only) fires in (t, t + δt))
        = P(Event 1 fires once and Event 2 fires zero times in (t, t + δt))
        = (q_{i,j} δt + o(δt)) (1 − q_{i,k} δt + o(δt))
        = q_{i,j} δt + o(δt)
However,

    P(Both Event 1 and Event 2 fire in (t, t + δt)) = (q_{i,j} δt + o(δt)) (q_{i,k} δt + o(δt)) = o(δt)

Infinitely many other multiple firings are possible (e.g. state m → state n → state i) but the probability of all such multiple firings occurring in (t, t + δt) is o(δt).

The probability of there being no transition out of state i in the interval (t, t + δt) is

    1 − Σ_{j ∈ S, j ≠ i} q_{i,j} δt + o(δt)

The probability of exactly one transition from state i to some other state j in the interval (t, t + δt) is q_{i,j} δt + o(δt); the probability of more than one transition is o(δt).

Now let's put this all together... During an infinitesimal interval (t, t + δt) either nothing happens, or one event triggers a transition, or any combination of multiple events fires — but each of the compound events happens with probability o(δt). Thus:

    p_i(t + δt) = p_i(t) (1 − Σ_{j ∈ S, j ≠ i} q_{i,j} δt) + Σ_{j ∈ S, j ≠ i} p_j(t) q_{j,i} δt + o(δt)
Rearranging the above,

    (p_i(t + δt) − p_i(t)) / δt = −p_i(t) Σ_{j ∈ S, j ≠ i} q_{i,j} + Σ_{j ∈ S, j ≠ i} p_j(t) q_{j,i} + o(δt)/δt

Thus, in the limit δt → 0,

    dp_i(t)/dt = −p_i(t) Σ_{j ∈ S, j ≠ i} q_{i,j} + Σ_{j ∈ S, j ≠ i} p_j(t) q_{j,i}

Theorem
For a finite irreducible CTMC there exists a limiting distribution p = lim_{t→∞} p(t).

For all i ∈ S, if there exists a limiting distribution such that, as t → ∞, p_i(t) converges on the limit p_i (the equilibrium or steady-state probability of being in state i), then:

    lim_{t→∞} dp_i(t)/dt = 0

and so the balance equations hold:

    p_i Σ_{j ∈ S, j ≠ i} q_{i,j} = Σ_{j ∈ S, j ≠ i} p_j q_{j,i}
The equation p Q = 0 is just a nice way of packaging up the balance equations (here q_{i,i} = −Σ_{j ≠ i} q_{i,j}, so each row of Q sums to zero). It also defines a linear system of equations that can be solved numerically to obtain the p_i, i ∈ S. For example, consider the product of p and column i of Q:

                     ( q_{0,0}  ...  q_{0,i}  ...  q_{0,n} )
    (p_0, ..., p_n)  (   ...         ...           ...     )  =  (0, ..., 0)
                     ( q_{i,0}  ...  q_{i,i}  ...  q_{i,n} )
                     (   ...         ...           ...     )
                     ( q_{n,0}  ...  q_{n,i}  ...  q_{n,n} )

This is precisely:

    p_i q_{i,i} + Σ_{j ∈ S, j ≠ i} p_j q_{j,i} = 0

i.e.

    p_i Σ_{j ∈ S, j ≠ i} q_{i,j} = Σ_{j ∈ S, j ≠ i} p_j q_{j,i},  as above
Note that Q is singular (every row sums to 0) and so has no inverse; there is no unique solution. However... if we encode the normalising condition Σ_{s ∈ S} p_s = 1 into Q, then the irreducibility of the process means that Q becomes non-singular; there is then a unique solution for p.
Thus, pick a column, c, and set all its elements to 1 (giving Q_c, say); likewise the c-th element of the 0 vector (giving 1_c). Now use your favourite linear solver to solve p Q_c = 1_c:

        ( q_{0,0}  q_{0,1}  ...  q_{0,c−1}  1  ...  q_{0,n} )
    p   ( q_{1,0}  q_{1,1}  ...  q_{1,c−1}  1  ...  q_{1,n} )  =  (0, 0, ..., 0, 1, 0, ..., 0)
        (   ...                                             )
        ( q_{n,0}  q_{n,1}  ...  q_{n,c−1}  1  ...  q_{n,n} )
Theorem
The generator matrix for an irreducible CTMC with n states has rank n − 1.
Corollary
The normalised generator matrix for an irreducible CTMC with n states, in which each element of a chosen column is replaced by 1, has rank n and is thus non-singular (i.e. invertible).
For example, with r1 = 0.05, r2 = 0.1, s1 = 0.02, s2 = 0.05 we get:

    p = (0.0693, 0.0990, 0.1683, 0.4951, 0.1683)

Note that Σ_{i=1}^5 p_i = 1.
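This example can be reproduced in a few lines of NumPy, using the column-replacement trick with c = 0 (a sketch; any column works):

```python
# Solve p Q_c = 1_c for the two-machine I/O example.
import numpy as np

r1, r2, s1, s2 = 0.05, 0.1, 0.02, 0.05
Q = np.array([
    [-(r1 + r2),  r1,          r2,          0.0,  0.0],
    [ s1,        -(r2 + s1),   0.0,         r2,   0.0],
    [ s2,         0.0,        -(r1 + s2),   0.0,  r1 ],
    [ 0.0,        0.0,         s1,         -s1,   0.0],
    [ 0.0,        s2,          0.0,         0.0, -s2 ],
])
assert np.allclose(Q.sum(axis=1), 0.0)   # generator rows sum to zero

Qc = Q.copy()
Qc[:, 0] = 1.0                            # encode normalisation in column c = 0
b = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # the vector 1_c
p = np.linalg.solve(Qc.T, b)              # p Qc = 1c  <=>  Qc^T p^T = 1c^T
print(p.round(4))                         # [0.0693 0.099  0.1683 0.4951 0.1683]
```

The exact solution here is p = (7, 10, 17, 50, 17)/101, which matches the rounded figures above.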