MEHRAN UNIVERSITY OF ENGINEERING & TECHNOLOGY, JAMSHORO

DEPARTMENT OF TELECOMMUNICATION
QUEUEING THEORY (TL431)

Lab 11: Analyse Open and Closed Queuing Networks using OMNeT++

Student ID | Subject Knowledge | Data Analysis and Interpretation | Ability to Conduct Experiment
19TL101    |                   |                                  |

Objective

The objective(s) of this lab is/are to,

• Learn how to implement and analyse open and closed queueing networks using OMNeT++.

1 Closed System

Consider a batch system shown in Fig. 1, where there are always N = 6 jobs in this system. As soon as a job completes service, a new job is started. Each job must go through the "service facility". At the service facility, with probability 1/2 the job goes to server 1, and with probability 1/2 it goes to server 2. Server 1 services jobs at an average rate of 1 job every 3 seconds. Server 2 also services jobs at an average rate of 1 job every 3 seconds. The distribution of the service times of the jobs is irrelevant for this problem. Response time is defined as usual as the time from when a job first arrives at the service facility (at the fork) until it completes service.

Figure 1: A closed batch system (N = 6 jobs; the fork routes each job to server 1 or server 2 with probability 1/2 each; both servers have rate µ = 1/3 jobs per second).

1.1 Task

Assume that the routing probabilities remain constant at 1/2 and 1/2. You replace server 1 with a server that is twice as fast (the new server services jobs at an average rate of 2 jobs every 3 seconds).

1. Does this "improvement" affect the average response time in the system?

2. Does it affect the throughput?

3. Suppose the system uses a lower value of N. Does the answer change?

To simulate this scenario, use the queueinglib of omnetpp (to add it: right click on your Project -> Properties -> Project References -> tick queueinglib -> OK). As a startup hint, you can use the following .ned and .ini files.

1.2 NED file

import org.omnetpp.queueing.Fork;
import org.omnetpp.queueing.Merge;
import org.omnetpp.queueing.Queue;
import org.omnetpp.queueing.SourceOnce;

network closedbatch
{
    @display("bgb=510,375");
    submodules:
        sourceOnce: SourceOnce {
            @display("p=75.46875,285.775");
        }
        merge1: Merge {
            @display("p=242.50624,285.775");
        }
        merge2: Merge {
            @display("p=328,116");
        }
        queue1: Queue {
            @display("p=243,53");
        }
        queue2: Queue {
            @display("p=243,188");
        }
        fork: Fork {
            @display("p=155,116");
        }
    connections:
        sourceOnce.out --> merge1.in++;
        merge1.out --> fork.in;
        fork.out++ --> queue1.in++;
        queue1.out --> merge2.in++;
        fork.out++ --> queue2.in++;
        queue2.out --> merge2.in++;
        merge2.out --> merge1.in++;
}

1.3 Configuration File

[General]
network = closedbatch
sim-time-limit = 1h
**.numJobs = 6
**.queue1.serviceTime = 3s
**.queue2.serviceTime = 3s
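The following .ini fragment is one possible way to set up the comparison runs for Task 1.1; it is only a sketch that reuses the parameters shown above, the config names are arbitrary, and each config inherits the remaining settings from [General].

[Config FasterServer1]
# server 1 twice as fast: 2 jobs every 3 s, i.e. a mean service time of 1.5 s
**.queue1.serviceTime = 1.5s

[Config FasterServer1LowN]
# the same change with a smaller job population, for question 3
**.queue1.serviceTime = 1.5s
**.numJobs = 1

Comparing the statistics recorded by queue1 and queue2 (e.g. queue length and queueing time) across these runs and the [General] run gives the data for questions 1-3; throughput can be estimated from the number of jobs that pass through each queue during the simulated hour.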
2 Open System

Suppose the system is changed into an open system, rather than a closed system, as shown in Fig. 2, where arrival times are independent of service completions.

Figure 2: An open system (external arrivals are forked to server 1 or server 2 with probability 1/2 each; both servers have rate µ = 1/3 jobs per second).

2.1 Task

1. Does the "improvement" affect the average response time in the system? I.e., you replace server 1 with a server that is twice as fast.

2. Does this "improvement" affect the utilization of servers in the system? Does this answer change with the variation in N?

3. Does it affect the throughput?

4. Suppose the system uses a lower value of N. Does the answer change?

To simulate this scenario, use the queueinglib of omnetpp. As a startup hint, you can use the following .ned and .ini files.

2.2 NED file

import org.omnetpp.queueing.SourceOnce;
import org.omnetpp.queueing.Fork;
import org.omnetpp.queueing.Queue;
import org.omnetpp.queueing.Merge;
import org.omnetpp.queueing.Sink;

network opennetwork
{
    @display("bgb=597,278");
    submodules:
        sourceOnce: SourceOnce {
            @display("p=60,120");
        }
        fork: Fork {
            @display("p=160,120");
        }
        queue2: Queue {
            @display("p=260,200");
        }
        queue1: Queue {
            @display("p=260,60");
        }
        merge: Merge {
            @display("p=360,120");
        }
        sink: Sink {
            @display("p=460,120");
        }
    connections:
        sourceOnce.out --> fork.in;
        fork.out++ --> queue1.in++;
        queue1.out --> merge.in++;
        merge.out --> sink.in++;
        fork.out++ --> queue2.in++;
        queue2.out --> merge.in++;
}

2.3 Configuration File

[General]
network = opennetwork
sim-time-limit = 1h
**.numJobs = 6
**.queue1.serviceTime = 3s
**.queue2.serviceTime = 3s

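As with the closed system, the "twice as fast" variant of Task 2.1 can be run by overriding only server 1's service time; this is a sketch reusing the parameters already defined above, and the config name is arbitrary.

[Config OpenFasterServer1]
# server 1 twice as fast in the open network: mean service time 1.5 s instead of 3 s
**.queue1.serviceTime = 1.5s

The job lifetime statistic recorded by the Sink (called lifeTime in the queueinglib versions I have seen; check Sink.ned in your copy) can then be compared between the [General] and [Config OpenFasterServer1] runs. Note also that with SourceOnce the network still only ever sees numJobs jobs; if you want a continuous arrival stream instead, one option is to replace SourceOnce with queueinglib's Source module and set its interArrivalTime, as is done in the later sections.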
3 One Faster versus Many Slower

You are given a choice between one fast CPU of speed s, or n slow CPUs each of speed s/n (see Fig. 3). Your goal is to minimize mean response time. To start, assume that jobs are non-preemptible (i.e., each job must be run to completion).

Figure 3: Many Slower versus One Faster (four slow servers with rate µ = 1 each, versus a single fast server with rate µ = 4).

3.1 Task

1. Which is the better choice: one fast machine or many slow ones?

2. Which system do you prefer when load is low and/or high?

To simulate the scenario of many slower machines, use the queueinglib of omnetpp. As a startup hint, you can use the following .ned and .ini files. Note that you have already done the simulation of a single queue and server, and we assume you can simulate the one faster machine to compare your results.

3.2 NED file

import org.omnetpp.queueing.Source;
import org.omnetpp.queueing.PassiveQueue;
import org.omnetpp.queueing.Server;
import org.omnetpp.queueing.Sink;

network manyslow
{
    @display("bgb=550,350");
    submodules:
        source: Source {
            @display("p=50,150");
        }
        passiveQueue: PassiveQueue {
            @display("p=150,150");
        }
        server1: Server {
            @display("p=250,70");
        }
        server2: Server {
            @display("p=250,140");
        }
        server3: Server {
            @display("p=250,210");
        }
        server4: Server {
            @display("p=250,280");
        }
        sink: Sink {
            @display("p=350,150");
        }
    connections:
        source.out --> passiveQueue.in++;
        passiveQueue.out++ --> server1.in++;
        passiveQueue.out++ --> server2.in++;
        passiveQueue.out++ --> server3.in++;
        passiveQueue.out++ --> server4.in++;
        server1.out --> sink.in++;
        server2.out --> sink.in++;
        server3.out --> sink.in++;
        server4.out --> sink.in++;
}

3.3 Configuration File

[General]
network = manyslow
sim-time-limit = 0.5h
**.interArrivalTime = exponential(1s)
**.sendingAlgorithm = "roundRobin"
**.fetchingAlgorithm = "priority"
**.server1.serviceTime = 1s
**.server2.serviceTime = 1s
**.server3.serviceTime = 1s
**.server4.serviceTime = 1s

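For the "one fast machine" side of the comparison, a minimal sketch is a single-queue network whose server is four times as fast as each slow server. It only assumes the Source, Queue and Sink modules behave as in the earlier sections; the network name onefast and the layout are mine, not from the handout.

import org.omnetpp.queueing.Source;
import org.omnetpp.queueing.Queue;
import org.omnetpp.queueing.Sink;

// one fast machine: a single queue and server at four times the speed of a slow server
network onefast
{
    submodules:
        source: Source {
            @display("p=50,150");
        }
        queue: Queue {
            @display("p=150,150");
        }
        sink: Sink {
            @display("p=250,150");
        }
    connections:
        source.out --> queue.in++;
        queue.out --> sink.in++;
}

[General]
network = onefast
sim-time-limit = 0.5h
**.interArrivalTime = exponential(1s)
# four times the speed of one slow server (1 s per job), i.e. 0.25 s per job
**.queue.serviceTime = 0.25s

Comparing the Sink statistics of manyslow and onefast at the same arrival rate answers Task 3.1.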
4 Task Assignment

Consider a server farm with a central dispatcher and several hosts. Each arriving job is immediately dispatched to one of the hosts for processing. Figure 4 illustrates such a system.

Server farms like this are found everywhere. Web server farms typically deploy a front-end dispatcher like Cisco's Local Director or IBM's Network Dispatcher. Super-computing sites might use LoadLeveler or some other dispatcher to balance load and assign jobs to hosts.

For the moment, let's assume that all the hosts are identical (homogeneous) and that all jobs only use a single resource. Let's also assume that once jobs are assigned to a host, they are processed there in FCFS order and are non-preemptible.

Figure 4: A distributed server system with a central dispatcher (load balancer) routing arrivals to Host 1, Host 2, Host 3, ...

There are many possible task assignment policies that can be used for dispatching jobs to hosts. Here are a few:

Random: Each job flips a fair coin to determine where it is routed.

Round-Robin: The i-th job goes to host i mod n, where n is the number of hosts, and hosts are numbered 0, 1, ..., n − 1.

Shortest-Queue: Each job goes to the host with the fewest number of jobs.

4.1 Task

1. Which of these task assignment policies yields the lowest mean response time?

2. Which system do you prefer when load is low and/or high?

To simulate this scenario, use the queueinglib of omnetpp. As a startup hint, you can use the following .ned and .ini files.

4.2 NED file

import org.omnetpp.queueing.Source;
import org.omnetpp.queueing.PassiveQueue;
import org.omnetpp.queueing.Server;
import org.omnetpp.queueing.Queue;
import org.omnetpp.queueing.Sink;

network taskassignment
{
    @display("bgb=550,350");
    submodules:
        source: Source {
            @display("p=50,150");
        }
        passiveQueue: PassiveQueue {
            @display("p=150,150");
        }
        server1: Server {
            @display("p=250,70");
        }
        server2: Server {
            @display("p=250,140");
        }
        server3: Server {
            @display("p=250,210");
        }
        server4: Server {
            @display("p=250,280");
        }
        queue1: Queue {
            @display("p=350,70");
        }
        queue2: Queue {
            @display("p=350,140");
        }
        queue3: Queue {
            @display("p=350,210");
        }
        queue4: Queue {
            @display("p=350,280");
        }
        sink: Sink {
            @display("p=450,150");
        }
    connections:
        source.out --> passiveQueue.in++;
        passiveQueue.out++ --> server1.in++;
        passiveQueue.out++ --> server2.in++;
        passiveQueue.out++ --> server3.in++;
        passiveQueue.out++ --> server4.in++;
        server1.out --> queue1.in++;
        server2.out --> queue2.in++;
        server3.out --> queue3.in++;
        server4.out --> queue4.in++;
        queue1.out --> sink.in++;
        queue2.out --> sink.in++;
        queue3.out --> sink.in++;
        queue4.out --> sink.in++;
}

4.3 Configuration File

[General]
network = taskassignment
sim-time-limit = 0.5h
**.numJobs = 500
**.interArrivalTime = exponential(1s)
**.sendingAlgorithm = "roundRobin"
**.server*.serviceTime = 0s
**.queue1.serviceTime = exponential(10s)
**.queue2.serviceTime = exponential(10s)
**.queue3.serviceTime = exponential(10s)
**.queue4.serviceTime = exponential(10s)
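The run above uses Round-Robin dispatching. To try another policy, the line that changes is the PassiveQueue's sendingAlgorithm; the set of accepted values depends on your queueinglib version, so the string below is a best guess rather than a definitive reference (check PassiveQueue.ned in your copy).

[Config RandomAssignment]
# dispatch each job to a random server/queue instead of round-robin
**.sendingAlgorithm = "random"

The Shortest-Queue policy needs the dispatcher to see the downstream queue lengths, so it is not a one-line change to this network; queueinglib also ships a Router module whose routingAlgorithm parameter (in the versions I have seen) supports values such as "random", "roundRobin" and "minQueueLength", and routing through a Router placed in front of the four queues is one way to approximate Shortest-Queue — again, check Router.ned for the exact parameter names.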

5 Tandem Queues (Open)

Several simple M/M/1 queueing systems can be connected in series as nodes, as shown in Fig. 5. At each node the arrival process is then a Poisson process with parameter λ, and the nodes operate independently of each other. Hence, if the service times have parameter µi at the i-th node, then ρi = λ/µi, and all the performance measures for a given node can be calculated.

Figure 5: A queueing network of three nodes (1 → 2 → 3) connected in tandem.

5.1 Task

1. Verify that the mean number of customers in the network is the sum of the mean number of customers in the nodes.

2. Similarly, verify that the mean waiting and response times for the network are the sum of the related measures in the nodes.

3. Suppose the network of queues uses different service times; does it affect the performance?

To simulate this scenario, use the queueinglib of omnetpp. As a startup hint, you can use the following .ned and .ini files.

5.2 NED file

import org.omnetpp.queueing.Source;
import org.omnetpp.queueing.Queue;
import org.omnetpp.queueing.Sink;

network TandemQueue
{
    parameters:
        @display("i=block/network");
    submodules:
        source: Source {
            @display("p=32.0,74.0");
        }
        queue: Queue {
            @display("p=127.0,74.0");
        }
        queue1: Queue {
            @display("p=227.0,74.0");
        }
        queue2: Queue {
            @display("p=328.0,74.0");
        }
        sink: Sink {
            @display("p=430.0,74.0");
        }
    connections:
        source.out --> queue.in++;
        queue.out --> queue1.in++;
        queue1.out --> queue2.in++;
        queue2.out --> sink.in++;
}

5.3 Configuration File

[General]
network = TandemQueue
sim-time-limit = 0.5h
**.interArrivalTime = exponential(2s)
**.serviceTime = exponential(2s)
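Two remarks here, both my own illustrations rather than part of the handout. First, the [General] settings above give λ = µ = 0.5 jobs/s, i.e. ρ = 1, which sits exactly on the stability boundary; for a steady-state comparison use a somewhat longer inter-arrival time. For example, with interArrivalTime = exponential(3s) (λ = 1/3) and exponential(2s) service at every node (µ = 1/2), each node is an M/M/1 with ρ = 2/3, E[Ni] = ρ/(1 − ρ) = 2 jobs and E[Ti] = 1/(µ − λ) = 6 s, so the three-node tandem should show roughly E[N] = 6 jobs and E[T] = 18 s — exactly the additivity that questions 1 and 2 ask you to verify. Second, a per-node override of this kind is all that question 3 needs (the config name is arbitrary):

[Config DifferentServiceTimes]
# unequal service times; queue2 (mean 2.5 s) becomes the bottleneck node
**.interArrivalTime = exponential(3s)
**.queue.serviceTime = exponential(1s)
**.queue1.serviceTime = exponential(2s)
**.queue2.serviceTime = exponential(2.5s)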


6 Exercise

1. Plot the graph of response time versus load ρ ∈ (0, 1) for all the discussed scenarios.

2. Explain the impact of different values of N on the performance for all the discussed scenarios.
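One convenient way to sweep the load for Exercise 1 is an ini iteration variable. The fragment below is a sketch that assumes an open network with a fixed mean service time of 2 s per job, so that ρ = 2 s divided by the mean inter-arrival time; the variable name iat is arbitrary.

[Config LoadSweep]
# nine runs covering roughly ρ = 0.1 ... 0.9 for a 2 s mean service time
**.interArrivalTime = exponential(${iat=20,10,6.67,5,4,3.33,2.86,2.5,2.22}s)

Each run's mean response time can then be read from the job lifetime statistic recorded by the Sink (named lifeTime in the queueinglib versions I have seen) and plotted against ρ. For the closed system of Section 1 the load is controlled through **.numJobs rather than an inter-arrival time.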

Exercise Answers [Q.01 & Q.02]
Task: [Closed System]
Assume that the routing probabilities remain constant
at 1/2 and 1/2. You replace server 1 with a server that is
twice as fast (the new server services jobs at an average
rate of 2 jobs every 3 seconds).
1. Does this "improvement" affect the average response time in the system?
Not really. The average response time is hardly affected.

2. Does it affect the throughput?


Not really. The throughput is hardly affected.

3. Suppose the system uses a lower value of N. Does the answer change?
Yes. If N is sufficiently low, then the “improvement”
helps. Consider, for example, the case N = 1.
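A rough sanity check with the given rates (my own back-of-the-envelope, not part of the original answer): with N = 6 both servers are busy almost all of the time, and since half of all jobs must still pass through server 2, throughput stays close to its cap of 2 × 1/3 = 2/3 jobs/s whether or not server 1 is sped up; by Little's law E[T] ≈ N/X ≈ 9 s in both cases. With N = 1 there is never any queueing, so E[T] is just the mean service time seen at the fork: ½·3 + ½·3 = 3 s before the upgrade versus ½·1.5 + ½·3 = 2.25 s after it — a clear improvement.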


Task: [Open System]

1. Does the "improvement" affect the average response time in the system? I.e., you replace server 1 with a server that is twice as fast.

Yes.
2. Does this "improvement" affect the utilization of servers in the system? Does this answer change with the variation in N?

Yes. The utilization of a server is the fraction of time it is busy. The arrival stream to server 1 is unchanged, but the new server finishes each job twice as fast, so server 1's utilization drops to roughly half of its previous value; server 2's utilization is unchanged. This conclusion does not change with N.

3. Does it affect the throughput?


Not really. In an open system the long-run throughput equals the average arrival rate, so it is essentially unaffected by the faster server.

4. Suppose the system uses a lower value of N. Does the answer change?
No.
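For intuition (an illustrative model, not part of the original answer): if arrivals to the facility form a Poisson stream of rate λ and service times are exponential, each server sees a Poisson stream of rate λ/2, so server 1 behaves like an M/M/1 queue with E[T1] = 1/(µ1 − λ/2). Doubling µ1 from 1/3 to 2/3 strictly reduces E[T1] at every stable load, which is why the answer to question 1 is yes; at the same time ρ1 = (λ/2)/µ1 is halved and the long-run throughput stays at λ, consistent with the answers above.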


Task: [One Faster versus Many Slower]


1. Which is the better choice: one fast machine or many slow ones?
It turns out that the answer depends on the
variability of the job size distribution, as well as
on the system load.
2. Which system do you prefer when load is low and/or high?
When job size variability is high, we prefer many slow servers, because we do not want short jobs getting stuck behind long ones. However, when load is low, not all servers will be utilized, so it seems better to go with one fast server.
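A concrete comparison point (my own calculation, under Poisson arrivals and exponential job sizes): one fast server of rate 4 is an M/M/1 with E[T] = 1/(4 − λ), while the four slow servers form an M/M/4 with rate 1 each. At λ = 2 jobs/s the fast machine gives E[T] = 0.5 s, whereas the M/M/4 gives roughly 1.09 s; with exponential (low-variability) job sizes the single fast machine is at least as good at every load, and the preference can flip only when job sizes are highly variable, as noted above.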


Task: [Task Assignment]


1. Which of these task assignment policies yields the lowest mean response time?
Among the three policies listed, Shortest-Queue generally gives the lowest mean response time in this homogeneous FCFS setting, because it reacts to the current number of jobs at each host; Round-Robin does better than Random, which ignores the system state entirely.

2. Which system do you prefer when load is low and/or high?
If job size variability is low, then the LWL policy is best. If
job size variability is high, then it is important to keep
short jobs from getting stuck behind long ones, so a
SITA-like policy, which affords short jobs isolation from
long ones, can be far better.
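Here, LWL (Least-Work-Left) means dispatching each job to the host with the least total unfinished work, and SITA (Size-Interval Task Assignment) means splitting jobs by size range so that short jobs are served by hosts reserved for short jobs; neither is in the three-policy list above, and they are mentioned only as reference points.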

Task: [Tandem Queues (Open)]

1. Verify that the mean number of customers in the network is the sum of the mean number of customers in the nodes.
The mean number of customers in the network is the sum of the mean number of customers in the individual nodes. This follows from linearity of expectation, E[N] = E[N1] + E[N2] + E[N3]; for this tandem of M/M/1 nodes each term is simply ρi/(1 − ρi).

2. Similarly, verify that the mean waiting and response times for the network are the sum of the related measures in the nodes.
Because every job visits each node exactly once, its total response time is the sum of its response times at the individual nodes, so the mean response time of the network is the sum of the per-node mean response times, and likewise for the waiting times: E[T] = E[T1] + E[T2] + E[T3].

3. Suppose the network of queues uses different service times; does it affect the performance?
Yes. With different service times each node has its own load ρi = λ/µi, and the node with the largest ρi (the bottleneck) dominates the network's queueing delay and limits the arrival rate the network can sustain, so balancing service capacity across the nodes improves overall performance.
