
NETWORK PERFORMANCE EVALUATION
Edited by Dr. Fasee Ullah

Definitions
• Link bandwidth (capacity): maximum rate (in bps)
at which the sender can send data
• Propagation delay: time it takes the signal to travel
from source to destination
• Packet transmission time: time it takes the sender
to transmit all bits of the packet
• Queuing delay: time the packet needs to wait before
being transmitted because the queue was not empty
when it arrived
• Processing time: time it takes a router/switch to
process the packet header, manage memory, etc.
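To make these components concrete, here is a minimal Python sketch that computes the one-way delay of a single packet as the sum of transmission, propagation, queuing, and processing delays; all numbers are illustrative assumptions, not measurements.

```python
# One-way delay = transmission + propagation + queuing + processing.
# All values below are assumed for illustration.
packet_size_bits = 1500 * 8          # a 1500-byte packet
link_bandwidth_bps = 100e6           # 100 Mbps link capacity
distance_m = 2_000_000               # 2000 km of fiber
propagation_speed_mps = 2e8          # roughly 2/3 the speed of light in fiber
queuing_delay_s = 0.0005             # assumed wait in the router queue
processing_delay_s = 0.00001         # assumed header-processing time

transmission_delay_s = packet_size_bits / link_bandwidth_bps
propagation_delay_s = distance_m / propagation_speed_mps

total_delay_s = (transmission_delay_s + propagation_delay_s
                 + queuing_delay_s + processing_delay_s)
print(f"one-way delay: {total_delay_s * 1000:.3f} ms")
```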

Delay
• Delay (latency) of a bit (packet, file) from A to B
• The time required for a bit (packet, file) to go from A to B
• Jitter
• Variability in delay
• Round-Trip Time (RTT)
• Two-way delay from sender to receiver and back
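As a rough illustration, jitter can be estimated (one common convention, not the only one) as the mean absolute difference between consecutive one-way delays, and RTT is approximately twice the mean one-way delay when the two directions are symmetric. The delay samples below are assumed.

```python
# Hypothetical per-packet one-way delays in milliseconds.
delays_ms = [20.1, 22.4, 19.8, 25.0, 21.2]

# Simple jitter estimate: mean absolute difference of consecutive delays.
jitter_ms = sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])) / (len(delays_ms) - 1)

# RTT is the two-way delay; with symmetric paths it is roughly twice the one-way mean.
rtt_ms = 2 * sum(delays_ms) / len(delays_ms)

print(f"jitter ~ {jitter_ms:.2f} ms, RTT ~ {rtt_ms:.2f} ms")
```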
Overview of Performance Analysis
• Introduction
• purpose of evaluation
• applications of performance evaluation
• performance evaluation techniques
• criteria for selecting an evaluation technique
• applicability of evaluation techniques
• steps for a performance evaluation study
• performance evaluation metrics
• capacity of a system
• performance evaluation study example
• common mistakes in performance evaluation
Introduction

Computer system users, administrators, and designers are all interested in performance evaluation, since the goal is to obtain or to provide the highest performance at the lowest cost. A system could be any collection of H/W, S/W and firmware components, e.g., a CPU, a DB system, or a network. Computer performance evaluation is of vital importance in the selection of computer systems, the design of applications and equipment, and the analysis of existing systems.
Purpose of Evaluation
Three general purposes of performance evaluation:

• selection evaluation - system exists elsewhere

• performance projection - system does not yet exist

• performance monitoring - system in operation


Selection Evaluation
• the most frequent case: performance is included as a major
criterion in the decision to acquire a particular system from a vendor
• to determine which of the various available alternatives are
suitable for a given application
• to choose according to some specified selection criteria
• at least one prototype of the proposed system must exist
Performance Projection
• oriented towards designing a new system
• to estimate the performance of a system that does not yet exist
• secondary goal - projection of a given system on a new workload,
i.e., modifying an existing system in order to increase its
performance or decrease its costs, or both
• upgrading of a system - replacement or addition of one or more
hardware components
Performance Monitoring
• usually performed for a substantial portion of the
lifetime of an existing running system
• performance monitoring is done:
• to detect bottlenecks

• to predict future capacity shortcomings

• to determine most cost-effective way to upgrade the system

• to overcome performance problems, and

• to cope with increasing workload demands


Applications of Performance Evaluation

1. procurement

2. system upgrade

3. capacity planning: the process of predicting when future load
levels will saturate the system and of determining the most
cost-effective way of delaying system saturation as much as possible.

4. system design
Performance Measures
And Evaluation Techniques

1.1 Evaluation Metrics

A computer system, like any other engineering machine, can be measured and evaluated in terms of how well it meets the needs and expectations of its users.

It is desirable to evaluate the performance of a computer system because we want to make sure that it is suitable for its intended applications, and that it satisfies the given efficiency and reliability requirements.

We also want to operate the computer system near its optimal level of processing power under the given resource constraints.
All performance measures deal with three basic issues:

-How quickly a given task can be accomplished,

-How well the system can deal with failures and other unusual situations,
and

-How effectively the system uses the available resources.

We can categorize the performance measures as follows.

Responsiveness:

These measures are intended to evaluate how quickly a given task can be
accomplished by the system.

Possible measures are:

waiting time,

processing time,

conditional waiting time (waiting time for tasks requiring a specified amount of processing time), and

queue length, etc.

Usage Level:

These measures are intended to evaluate how well the various components of the system are being used.

Possible measures are:

throughput (in network communication, the amount of data that can be transmitted successfully per unit time, in bps) and the utilization of various resources.
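A minimal sketch (with assumed figures) of how throughput and utilization might be computed from one observation window:

```python
# Assumed observations over a measurement window.
bits_delivered = 450e6    # bits successfully transmitted in the window
window_s = 10.0           # length of the observation window (seconds)
busy_time_s = 6.5         # time the resource was busy during the window

throughput_bps = bits_delivered / window_s   # usage-level measure (bps)
utilization = busy_time_s / window_s         # fraction of time the resource was busy

print(f"throughput = {throughput_bps / 1e6:.1f} Mbps, utilization = {utilization:.0%}")
```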

Missionability:

These measures indicate whether the system would remain continuously operational for the duration of a mission.

Possible measures are:

the distribution of the work accomplished during the mission time,

interval availability (the probability that the system will keep performing satisfactorily throughout the mission time),

lifetime (the time when the probability of unacceptable behavior increases beyond some threshold).

Dependability:

These measures indicate how reliable the system is over the long run.

Possible measures are:

number of failures per day (or per minute, etc.),

MTTF (mean time to failure), and

MTTR (mean time to repair).
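These two figures also give the familiar steady-state availability, A = MTTF / (MTTF + MTTR). A small illustration with assumed values:

```python
# Assumed dependability figures.
mttf_hours = 2000.0   # mean time to failure
mttr_hours = 4.0      # mean time to repair

availability = mttf_hours / (mttf_hours + mttr_hours)
failures_per_year = 8760.0 / (mttf_hours + mttr_hours)   # one failure per failure+repair cycle

print(f"availability = {availability:.4f} ({availability:.2%})")
print(f"expected failures per year = {failures_per_year:.2f}")
```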


Productivity:

These measures indicate how effectively a user can get his or her work accomplished.

Possible measures are:

user friendliness,

maintainability, and

understandability.
The relative importance of various measures

Because the productivity measures are difficult to quantify, we shall not consider them further.

The relative importance of the various measures depends on the application involved.

In the following, we provide a broad classification of computer systems according to the application domains, indicating which measures are most relevant:

1. General purpose computing:

These systems are designed for general purpose problem solving.

Relevant measures are:

responsiveness, usage level and productivity.


2. High availability:

Such systems are designed for transaction processing environments (bank, airline, or telephone databases; switching systems; etc.).

The most important measures are responsiveness and dependability.

Both of these requirements are more severe than for general purpose computing systems; moreover, any data corruption or destruction is unacceptable.

Productivity is also an important factor.

3. Real-time control:

Such systems must respond to both periodic and randomly occurring events within some (possibly hard) timing constraints.

Note that utilization and throughput play little role in such systems.
4. Mission oriented:

These systems require extremely high levels of reliability over a short period, called the mission time.

Little or no repair/tuning is possible during the mission.

Such systems include battlefield systems, health monitoring systems, and spacecraft.

Responsiveness is also important, but usually not difficult to achieve.

Such systems may try to achieve high reliability during the short term at the expense of poor reliability beyond the mission period.
5. Long-life:

Systems like the ones used for unmanned spaceships need a long life without provision for manual diagnostics and repairs.

Thus, in addition to being highly dependable, they should have considerable intelligence built in to do diagnostics and repair either automatically or by remote control from a ground station.

Responsiveness is important but not difficult to achieve.

1.2 Techniques of Performance Evaluation

There are three basic techniques for performance evaluation:


(1) measurement
(2) simulation
(3) analytic modeling
The latter two techniques can also be combined to get what is
usually known as hybrid modeling.
In the following, we discuss these briefly and point out their
comparative advantages and disadvantages.

1.2.1 Measurement

Measurement is the most fundamental technique and is needed even in analysis and simulation to calibrate the models.

Some measurements are best done in hardware, some in software, and some in a hybrid manner.

1.2.2 Simulation Modeling

Simulation involves constructing a model for the behavior of the system and driving it with an appropriate abstraction of the workload.

The major advantage of simulation is its generality and flexibility; almost any behavior can be easily simulated.

However, there are many important issues that must be considered in simulation:

1. It must be decided what not to simulate and at what level of detail.

2. Simulation, like measurement, generates much raw data, which must be analyzed using statistical techniques.

3. A careful experiment design is essential to keep the simulation cost down.

Both measurement and simulation involve careful experiment design, data gathering, and data analysis.

These steps could be tedious; moreover, the final results obtained from the data analysis only characterize the system behavior for the range of input parameters covered.
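As an illustration of the technique (not part of the original slides), here is a minimal Python sketch that simulates a single-server FIFO queue with Poisson arrivals and exponential service times (an M/M/1 queue) and estimates the mean time a request spends in the system. The arrival rate, service rate, and job count are assumed.

```python
import random

def simulate_mm1(lam=0.8, mu=1.0, n_jobs=100_000, seed=1):
    """Estimate the mean time in system for an M/M/1 FIFO queue."""
    random.seed(seed)
    arrival = 0.0          # arrival time of the current job
    server_free_at = 0.0   # time the server finishes the previous job
    total_time = 0.0       # accumulated time in system over all jobs
    for _ in range(n_jobs):
        arrival += random.expovariate(lam)     # next Poisson arrival
        start = max(arrival, server_free_at)   # wait if the server is busy
        server_free_at = start + random.expovariate(mu)   # service completion
        total_time += server_free_at - arrival # waiting + service time
    return total_time / n_jobs

if __name__ == "__main__":
    print("simulated mean time in system:", round(simulate_mm1(), 3))
```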

1.2.3 Analytic Modeling

Analytic modeling involves constructing a mathematical model of the system behavior (at the desired level of detail) and solving it.

Thus, analytic modeling will fail if the objective is to study the behavior in great detail.

However, for an overall behavior characterization, analytic modeling is an excellent tool.

In research papers, it is usually recommended to validate the simulation model against analytical techniques; doing so also increases the chances of acceptance.
The major advantages of analytic modeling over the other two techniques are:

(a) it generates good insight into the workings of the system that is valuable even if the model is too difficult to solve,

(b) simple analytic models can usually be solved easily, yet provide surprisingly accurate results, and

(c) results from analysis have better predictive value than those obtained from measurement or simulation.
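Continuing the queueing illustration above: the same system also has a simple analytic model. For an M/M/1 queue with arrival rate lam < service rate mu, the mean time in the system is T = 1 / (mu - lam), so the simulation sketch can be cross-checked against this closed form, which is one way to follow the validation advice given earlier (rates assumed as before):

```python
# Closed-form M/M/1 mean time in system, valid only for lam < mu.
lam, mu = 0.8, 1.0                 # same assumed rates as in the simulation sketch
print("analytic mean time in system:", 1.0 / (mu - lam))   # 5.0 time units
```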

1.2.4 Hybrid Modeling

A complex model may consist of several submodels, each representing a certain aspect of the system.

Only some of these submodels may be analytically tractable; the others must be simulated.

In that case we can take the hybrid approach. Consider, for example, a model in which memory fragmentation is hard to capture analytically; the hybrid approach proceeds as follows:

1. Solve the analytic model assuming no fragmentation of memory and determine the distribution of memory holding time.

2. Simulate only memory allocation, holding, and deallocation, and determine the average fraction of memory that could not be used because of fragmentation.

3. Recalibrate the analytic model of step 1 with the reduced memory and solve it.

(It may be necessary to repeat these steps a few times to get convergence.)
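A minimal skeleton of this iterate-to-convergence idea, with purely hypothetical stub functions standing in for the real analytic sub-model and the fragmentation simulation (the formulas inside the stubs are illustrative only, not derived from any real system):

```python
def analytic_model(usable_memory):
    # Hypothetical stand-in for the analytic sub-model: mean memory holding
    # time as a function of the usable memory (illustrative formula only).
    return 10.0 * usable_memory / 100.0

def simulate_fragmentation(holding_time):
    # Hypothetical stand-in for the allocation simulation: fraction of memory
    # lost to fragmentation, assumed to grow with the holding time.
    return min(0.3, 0.02 * holding_time)

memory = 100.0   # total memory in the assumed model
frag = 0.0       # current estimate of the fragmentation fraction
for _ in range(20):                                    # iterate until convergence
    holding = analytic_model(memory * (1 - frag))      # step 1: solve analytic model
    new_frag = simulate_fragmentation(holding)         # step 2: simulate allocation
    if abs(new_frag - frag) < 1e-6:                    # step 3: recalibrate and repeat
        break
    frag = new_frag
print("converged fragmentation fraction:", round(frag, 4))
```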
Applications of Performance Evaluation

Performance modeling and evaluation may be needed for a variety of purposes; however, the following needs stand out:

1. System design:

In designing a new system, one typically starts out with certain performance/reliability objectives and a basic system architecture, and then decides how to choose the various parameters to achieve the objectives.

This involves constructing a model of the system behavior at the appropriate level of detail, and evaluating it to choose the parameters.

At higher levels of design, simple analytic reasoning may be adequate to eliminate bad choices, but simulation becomes an indispensable tool for making detailed design decisions and avoiding costly mistakes.

2. System selection:

Here the problem is to select the "best" system from among a group of systems that are under consideration for reasons of cost, availability, compatibility, etc.

There might be practical difficulties in doing so (e.g., not being able to use them under realistic workloads, or not having the system available locally).

Therefore, it may be necessary to make projections based on available data and some simple modeling.

3. System upgrade:

This involves replacing either the entire system or parts thereof with a newer but compatible unit.

The compatibility and cost considerations may dictate the vendor, so the only remaining problem is to choose the quantity, speed, and the like.

However, in large systems involving complex interactions between subsystems, simulation modeling may be essential.
4. System tuning:

The purpose of a tune-up is to optimize the performance by appropriately changing the various resource management policies.

Some examples are the process scheduling mechanism, context switching, buffer allocation schemes, cluster size for paging, and contiguity in file space allocation.

It is necessary to decide which parameters to consider changing and how to change them to get the maximum potential benefit.

Direct experimentation is the simplest technique to use here, but may not be feasible in a production environment.
5. System analysis:

This involves monitoring the system and examining the behavior of various resource management policies under different loading conditions.

Suppose that we find a system to be unacceptably sluggish. The reason could be either inadequate hardware resources (CPU, memory, disk, etc.) or poor system management.

In the former case, we need a system upgrade, and in the latter, a system tune-up.
Benchmarking Computer Systems

When you are selecting a machine/model/simulation for a given application environment or experiment, it is useful to have some comparative benchmarks available that can be used to either narrow down the choices or make a final selection/decision.

One must depend on the published data on the machine.

The benchmarks are primarily intended to provide an overall assessment of the various machines on the market.
Steps for a Performance Evaluation Study

1. State the goals of the study and define the system boundaries.
2. List system services and possible outcomes.
3. Select performance metrics.
4. List system and workload parameters.
5. Select evaluation techniques.
6. Select the workload.
7. Design the experiments.
8. Analyze and interpret the data.
9. Present the results.
Performance Evaluation Metrics

 Different performance metrics are used depending upon the
services offered by the system and whether they are correctly performed:
 turnaround time - the time between the submission of
a batch job and the completion of its output
 response time - the interval between a user's request and
the system response
 throughput (or productivity) - the rate (requests per unit
time) at which requests are serviced by the system
 utilization of a resource - measured as the fraction
of time the resource is busy servicing requests
Performance Evaluation Metrics

 Performance metrics can be categorised into three
classes based on their utility function:
Higher is Better or HB
Lower is Better or LB
Nominal is Best or NB
[Figure: utility as a function of the metric for the three classes: LB (e.g., response time), HB (e.g., throughput), and NB (e.g., utilisation).]
Cost/performance Ratio

 A commonly used metric for comparing two or more systems in
system procurement applications
cost includes h/w and s/w licensing, installation and
maintenance costs
performance is measured in terms of throughput
under a given response time constraint
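A tiny illustration with assumed procurement figures; under this metric the preferred system is the one with the lower cost per unit of throughput, among the systems that meet the response time constraint:

```python
# Assumed data: total cost ($) and throughput (requests/s) that still meets
# the response-time constraint, for two hypothetical candidate systems.
systems = {
    "System A": (50_000, 1200),
    "System B": (65_000, 1800),
}

for name, (cost, throughput) in systems.items():
    print(f"{name}: cost/performance = {cost / throughput:.2f} $ per request/s")
# The lower ratio is better (cost/performance is an LB metric).
```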
Response Time

 Defined as the time interval between the instant at which the input
of a command to an interactive system terminates and the instant at
which the corresponding reply begins to appear at the terminal.
[Figure: timeline of an interactive request from "send command" to "start output": keying in of the command, transmission time, wait in the input queue, CPU and I/O service interleaved with waits, wait in the output queue, transmission time, and output; the figure marks both the stand-alone response time and the total response time.]
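A minimal sketch of how response time might be measured in practice, using a hypothetical local function as a stand-in for the interactive system (transmission and queuing delays are not modeled here):

```python
import time

def handle_request(x):
    # Hypothetical stand-in for the system servicing a command.
    time.sleep(0.05)   # pretend the request takes about 50 ms of work
    return x * 2

start = time.monotonic()                      # instant the command is submitted
handle_request(21)
response_time_s = time.monotonic() - start    # until the reply is available
print(f"measured response time ~ {response_time_s * 1000:.1f} ms")
```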
Throughput

 Depending on the system, throughput is influenced by many factors:
 e.g., in a computer network, good routing combined with congestion
control schemes can improve throughput
[Figure: two plots versus load: throughput (ideal, with congestion control, and without congestion control) and mean response time (routing 1 versus routing 2).]
Common Mistakes in Performance Evaluation

1. Is the system correctly defined and the goals clearly stated?


2. Are the goals stated in an unbiased manner?

3. Have all the steps of the analysis been followed systematically?

4. Is the problem clearly understood before analyzing it?

5. Are the performance metrics relevant for this problem?

6. Is the workload correct for this problem?

7. Is the evaluation technique appropriate?

8. Is the list of parameters that affect performance complete?


Common Mistakes in Performance Evaluation

9. Have all parameters that affect performance been chosen as factors to be varied?

10. Is the experimental design efficient in terms of time and results?

11. Is the level of detail proper?

12. Is the measured data presented with analysis and interpretation?

13. Is the analysis statistically correct?

14. Has the sensitivity analysis been done?

15. Would errors in the input cause an insignificant change in the results?

16. Have the outliers in the input and output been treated properly?
Common Mistakes in Performance Evaluation

17. Have the future changes in the system and workload been
modeled?

18. Has the variance of the input been taken into account?

19. Has the variance of the results been analyzed?

20. Is the analysis easy to explain?

21. Is the presentation style suitable for its audience?

22. Have the results been presented graphically as much as possible?

23. Are the assumptions and limitations of the analysis clearly documented?
