Network Performance Evaluation: Dr. Muazzam A. Khan

• The document discusses objectives and techniques for evaluating network performance. It outlines a systematic approach to performance evaluation: defining goals, selecting metrics, designing experiments, analyzing data, and presenting results.
• Common mistakes discussed are undefined goals, unrepresentative workloads, choosing the wrong evaluation technique, and omitting important assumptions.
• The art of performance evaluation is selecting the appropriate methodology, workload, and tools based on an intimate understanding of the system being evaluated. There is no single correct way to evaluate performance.

Network Performance Evaluation
Introduction

Dr. Muazzam A. Khan


Let’s Get Started!
• Describe a performance study you have
done
• Describe a performance study you have
recently read about
– Research paper
– Newspaper article
– Scientific journal
• And list one good thing or one bad thing
about it
Outline
• Objectives (next)
• The Art
• Common Mistakes
• Systematic Approach
• Case Study
Objectives (1 of 6)
• Select appropriate evaluation techniques,
performance metrics and workloads for a system.
– Techniques: measurement, simulation, analytic
modeling
– Metrics: criteria to study performance (e.g.,
response time)
– Workloads: requests by users/applications to the
system
• Example: What performance metrics should you
use for the following systems?
– Transaction processing systems
Objectives (2 of 6)
• Conduct performance measurements
correctly
– Need two tools: load generator and monitor
• Example: Which workload would be
appropriate to measure performance for
the following systems?
– a) Utilization on a LAN
– b) Response time from a Web server
– c) Audio quality in a VoIP network
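As a minimal sketch of the two tools, the following Python script acts as a simple load generator and monitor for example (b): it issues a few HTTP requests to a web server and records each response time. The URL and request count are placeholders, not values from the slides.

```python
# Minimal load generator + monitor sketch for example (b): web server response time.
# The URL and request count are illustrative placeholders.
import time
import statistics
import urllib.request

URL = "http://example.com/"   # hypothetical server under test
N_REQUESTS = 20

response_times = []
for _ in range(N_REQUESTS):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()                      # force the full transfer
    response_times.append(time.perf_counter() - start)

print(f"mean response time: {statistics.mean(response_times):.3f} s")
print(f"std deviation:      {statistics.stdev(response_times):.3f} s")
```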
Objectives (3 of 6)
• Use proper statistical techniques to compare several
alternatives
– One run of workload often not sufficient
• Many non-deterministic computer events affect
performance
– Comparing average of several runs may also not lead to
correct results
• Especially if variance is high
• Example: Packets lost on a link. Which link is better?
File Size   Link A   Link B
10          5        10
12          7        3
13          3        0
50          0        1
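The packet-loss numbers above can be compared with simple statistics; a minimal sketch (using the table's values, with the analysis itself purely illustrative):

```python
# Compare packet losses on Link A and Link B (data from the table above).
import statistics

file_size = [10, 12, 13, 50]
loss_a    = [5, 7, 3, 0]
loss_b    = [10, 3, 0, 1]

print("mean loss A:", statistics.mean(loss_a), " stdev:", round(statistics.stdev(loss_a), 2))
print("mean loss B:", statistics.mean(loss_b), " stdev:", round(statistics.stdev(loss_b), 2))

# Paired differences (A - B) per file size: the sign is not consistent,
# so a single average hides which link is actually better.
diffs = [a - b for a, b in zip(loss_a, loss_b)]
print("per-file differences (A - B):", diffs)
print("mean difference:", statistics.mean(diffs))
```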
Objectives (4 of 6)
• Design measurement experiments.
– Many factors that affect performance. Separate out
the effects that individually matter.
• Example: The performance of a system depends upon
three factors:
– A) I/O collection technique
– B) type of workload: editing, compiling, AI
– C) type of CPU: P2, P4, Sparc
How many experiments are needed? How can the
performance of each factor be estimated?
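One common answer to "how many experiments?" is a full factorial design: run every combination of factor levels and then estimate each factor's effect from the results. The sketch below only enumerates such a design; factor A's two levels are assumed for illustration, since the slide does not list them.

```python
# Enumerate a full factorial design over the three factors.
# Factor A's levels are assumed (two variants) purely for illustration.
from itertools import product

factors = {
    "A_io_technique": ["A1", "A2"],               # assumed levels
    "B_workload":     ["editing", "compiling", "AI"],
    "C_cpu":          ["P2", "P4", "Sparc"],
}

experiments = list(product(*factors.values()))
print("number of experiments:", len(experiments))   # 2 * 3 * 3 = 18
for combo in experiments[:3]:                        # show a few combinations
    print(dict(zip(factors.keys(), combo)))
```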
Objectives (5 of 6)
• Perform simulations correctly
– Select correct language, length of
simulation run, and analysis
– Before all of that, may need to validate
simulator
• Example: To compare the performance of
two cache replacement algorithms:
– A) how long should the simulation be run?
– B) what can be done to get the same
accuracy with a shorter run?
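A minimal sketch of run-length control, assuming a toy random process as a stand-in for the cache simulation: keep extending the run until the 95% confidence interval around the estimate is tight enough (the 2% precision target is an assumption). For part (B), variance-reduction techniques such as common random numbers can reach the same precision with a shorter run.

```python
# Run-length control sketch: extend the simulation until the estimate is precise enough.
# The "simulation" here is a toy random process standing in for a cache model.
import random
import statistics

def simulate_batch(n=1000):
    """Toy stand-in for one batch of simulated cache references (hit = 1, miss = 0)."""
    return [1 if random.random() < 0.8 else 0 for _ in range(n)]

samples = []
while True:
    samples.extend(simulate_batch())
    mean = statistics.mean(samples)
    half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5  # approx. 95% CI
    if half_width < 0.02 * mean:          # assumed precision target
        break

print(f"hit ratio ~ {mean:.3f} +/- {half_width:.3f} after {len(samples)} references")
```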
Objectives (6 of 6)
• Use simple queuing models to analyze the
performance of systems.
• Often can model computer systems by
service rate and arrival rate of load
– Multiple servers
– Multiple queues
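For instance, the simplest single-server model (M/M/1) gives closed-form results from just the arrival rate λ and the service rate μ; a small sketch with illustrative rates:

```python
# M/M/1 queue: closed-form performance from arrival rate and service rate.
# The rates used here are illustrative only.
def mm1_metrics(arrival_rate, service_rate):
    assert arrival_rate < service_rate, "queue is unstable if arrivals >= service capacity"
    rho = arrival_rate / service_rate                    # utilization
    mean_in_system = rho / (1 - rho)                     # average number of jobs in the system
    mean_response = 1 / (service_rate - arrival_rate)    # average time in the system
    return rho, mean_in_system, mean_response

rho, n, r = mm1_metrics(arrival_rate=8.0, service_rate=10.0)   # e.g. requests per second
print(f"utilization={rho:.2f}, jobs in system={n:.2f}, response time={r:.3f} s")
```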
Outline
• Objectives (done)
• The Art (next)
• Common Mistakes
• Systematic Approach
• Case Study
The Art of Performance Evaluation
• Evaluation cannot be produced mechanically
– Requires intimate knowledge of system
– Careful selection of methodology, workload,
tools
• No one correct answer as two performance
analysts may choose different metrics or
workloads
• Like art, there are techniques to learn
– how to use them
– when to apply them
Example: Comparing Two Systems
• Two systems, two workloads, measure
transactions per second

System   Workload 1   Workload 2
A        20           10
B        10           20

• Which is better?
Example: Comparing Two Systems
• Two systems, two workloads, measure
transactions per second

System   Workload 1   Workload 2   Average
A        20           10           15
B        10           20           15

• They are equally good!


• … but is A better than B?
The Ratio Game
• Take system B as the base

System   Workload 1   Workload 2   Average
A        2            0.5          1.25
B        1            1            1

• A is better!
• … but is B better than A?
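The flip can be reproduced directly from the numbers above: raw averages make A and B look equal, while normalizing to either system as the base makes the other system look better. A short sketch:

```python
# The ratio game: the chosen base system determines the "winner".
throughput = {"A": [20, 10], "B": [10, 20]}   # transactions/sec on workloads 1 and 2

def avg(xs):
    return sum(xs) / len(xs)

print("raw averages:", {s: avg(v) for s, v in throughput.items()})   # both 15

for base in ("A", "B"):
    ratios = {s: avg([x / b for x, b in zip(v, throughput[base])])
              for s, v in throughput.items()}
    print(f"with {base} as base:", ratios)    # the non-base system averages 1.25
```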
Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (next)
• Systematic Approach
• Case Study
Common Mistakes (1 of 3)
• Undefined Goals
– There is no such thing as a general model
– Describe goals and then design experiments
– (Don’t shoot until you have identified your target)
• Biased Goals
– Don’t set out to show that YOUR system is better than others
– (A performance analyst should be impartial, like a jury)
• Unrepresentative Workload
– Should be representative of how system will
work “in the wild”
– Ex: large and small packets? Don’t test with
only large or only small
System Behavior
• Idle
• Worst-case scenario
• Best-case scenario
• Optimal: get maximum output using minimum input

• Types of Data
– Data, images, videos, live chat, live video, hybrid
– With multiple measured values, take their average
• Wireless Network Deployment
– Was any site survey performed?
– Where to install the access points
– How to provide access to users
– How many users can be served at a time
– How the AP will be connected to the server
– Its performance evaluation: availability, reliability,
security, scalability, real-time data delivery
Mistake
• Before deploying a network, how do you estimate
the overall performance of the network?

– The number of nodes?
– The bandwidth available?
– Is there a specific distribution of bandwidth,
or do all nodes share it equally?
• Security adds extra load on the system; it may
affect the performance of the network/system
– In terms of response time and efficiency
Common Mistakes (2 of 3)
• Wrong Evaluation Technique
– Use most appropriate: model, simulation,
measurement
– (Don’t have a hammer and see everything as a
nail)
• Inappropriate Level of Detail
– Can have too much!
– Can have too little!
• No Sensitivity Analysis
– Analysis is evidence
– Need to determine how sensitive the results are to the chosen parameters.
Common Mistakes (3 of 3)
• Improper Presentation of Results
– It is not the number of graphs but the
presentation of the graphs that helps to make
decisions
• Omitting Assumptions and Limitations
– Ex: may assume most traffic is TCP, whereas
some links may have significant UDP traffic
– May lead to applying results where
assumptions do not hold
• Graphs: pie charts, line plots, comparisons

• Assumptions
– Not everything is in your control
– A simulator is an idealized environment, while
the real world is quite different
– State the ground realities
Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (done)
• Systematic Approach (next)
• Case Study
A Systematic Approach
1. State goals and define boundaries
2. Select performance metrics
3. List system and workload parameters
4. Select factors and values
5. Select evaluation techniques
6. Select workload
7. Design experiments
8. Analyze and interpret the data
9. Present the results. Repeat.
• NS-2, NS-3
– OTcl: build scenarios in .tcl scripts
• .nam file: animation of the experimental scenario
– Nodes created, their movement, direction, speed
– Pause time
• Trace file (.tr): statistical data and protocol info,
in column form
– Analyze the trace file to get the final results
(see the sketch below)
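A minimal sketch of trace-file analysis, assuming the old-style wired ns-2 trace format (event, time, from-node, to-node, packet type, ... in columns); the file name, node ids, and packet type are assumptions, and wireless or new-style traces use different columns:

```python
# Sketch: packet delivery ratio from an ns-2 trace file (.tr).
# Assumes the old-style wired trace format:
#   event time from_node to_node pkt_type pkt_size flags fid src dst seq pkt_id
# The file name, node ids, and packet type below are assumptions for illustration.
SRC_NODE, DST_NODE, PKT_TYPE = "0", "3", "cbr"

sent = received = 0
with open("out.tr") as trace:
    for line in trace:
        f = line.split()
        if len(f) < 5 or f[4] != PKT_TYPE:
            continue
        if f[0] == "+" and f[2] == SRC_NODE:      # enqueued at the traffic source
            sent += 1
        elif f[0] == "r" and f[3] == DST_NODE:    # received at the destination
            received += 1

print(f"sent={sent}  received={received}  delivery ratio={received / max(sent, 1):.2%}")
```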
State Goals and Define Boundaries
• Just “measuring performance” or “seeing
how it works” is too broad
– Ex: goal is to decide which ISP provides
better throughput
• Definition of system may depend upon goals
– Ex: if measuring CPU instruction speed,
system may include CPU + cache
– Ex: if measuring response time, system may
include CPU + memory + … + OS + user
workload
Select Metrics
• Criteria to compare performance
• In general, related to speed, accuracy
and/or availability of system services
• Ex: network performance
– Speed: throughput and delay
– Accuracy: error rate
– Availability: data packets sent do arrive
• Ex: processor performance
– Speed: time to execute instructions
List Parameters
• List all parameters that affect performance
• System parameters (hardware and
software)
– Ex: CPU type, OS type, …
• Workload parameters
– Ex: Number of users, type of requests
• List may not be complete initially, so keep a
working list and let it grow as you progress
Select Factors to Study
• Divide parameters into those that are to
be studied and those that are not
– Ex: may vary CPU type but fix OS type
– Ex: may fix packet size but vary number of
connections
• Select appropriate levels for each factor
– Want ones with potentially high impact
– For workloads, often start with fewer levels
• Start small, or the number of experiments can
quickly exceed the available resources!
Select Evaluation Technique
• Depends upon time, resources and desired
level of accuracy
• Analytic modeling
– Quick, less accurate
• Simulation
– Medium effort, medium accuracy
• Measurement
– Typically most effort, most accurate
• Note, above are all typical but can be
reversed in some cases!
Select Workload
• Set of service requests to system
• Depends upon measurement technique
– Analytic model may have probability of
various requests
– Simulation may have trace of requests from
real system
– Measurement may have scripts that impose
transactions
• Should be representative of real life
Design Experiments
• Want to maximize results with minimal
effort
• Phase 1:
– Many factors, few levels
– See which factors matter
• Phase 2:
– Few factors, more levels
– See where the range of impact for the
factors is
Analyze and Interpret Data
• Compare alternatives
• Take into account variability of results
– Statistical techniques
• Interpret results.
– The analysis does not provide a conclusion
– Different analysts may come to different
conclusions
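For example, reporting the mean of several runs together with a confidence interval makes the variability visible; a small sketch with purely illustrative samples:

```python
# 95% confidence interval for the mean of several runs (illustrative samples).
import statistics
from math import sqrt

runs = [4.2, 3.9, 4.5, 4.1, 4.8, 4.0]      # e.g. response times from repeated runs (s)
mean = statistics.mean(runs)
sem = statistics.stdev(runs) / sqrt(len(runs))
t_95 = 2.571                                # t value for 5 degrees of freedom, two-sided 95%
print(f"mean = {mean:.2f} s, 95% CI = ({mean - t_95*sem:.2f}, {mean + t_95*sem:.2f}) s")
```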
Present Results
• Make it easily understood
• Graphs
• Disseminate (entire methodology!)
"The job of a scientist is not merely to see: it is to see,
understand, and communicate. Leave out any of these
phases, and you're not doing science. If you don't see,
but you do understand and communicate, you're a
prophet, not a scientist. If you don't understand, but
you do see and communicate, you're a reporter, not a
scientist. If you don't communicate, but you do see and
understand, you're a mystic, not a scientist."
Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (done)
• Systematic Approach (done)
• Case Study (next)
Case Study
• Consider remote pipes (rpipe) versus
remote procedure calls (rpc)
– rpc is like procedure call but procedure is
handled on remote server
• Client caller blocks until return
– rpipe is like a pipe, but the server gets the
output on a remote machine
• Client process can continue, non-blocking
• Goal: compare the performance of
applications using rpipes to that of similar
applications using rpcs
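To make the blocking vs. non-blocking distinction concrete, here is a small sketch using a Python thread pool as a stand-in for the remote server; it is not the actual rpc/rpipe implementation from the case study.

```python
# Blocking rpc vs. non-blocking rpipe-style call, sketched with a thread pool
# standing in for the remote server (not the actual case-study implementation).
import time
from concurrent.futures import ThreadPoolExecutor

def remote_service(data):
    time.sleep(0.1)               # pretend network + server processing time
    return len(data)

with ThreadPoolExecutor() as server:
    # rpc-style: the client blocks until the result comes back.
    result = server.submit(remote_service, b"request").result()

    # rpipe-style: the client hands off the data and continues immediately.
    future = server.submit(remote_service, b"request")
    do_other_client_work = sum(range(1000))      # client keeps working
    result_later = future.result()               # collect output when needed
```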
System Definition
• Client and Server and Network
• Key component is “channel”, either a rpipe
or an rpc
– Only the subset of the client and server
that handle channel are part of the system

Client – Network – Server
– Try to minimize the effect of components
outside the system
Services
• There are a variety of services that can
happen over a rpipe or rpc
• Choose data transfer as a common one,
with data being a typical result of most
client-server interactions
• Classify amount of data as either large or
small
• Thus, two services:
– Small data transfer
– Large data transfer
Metrics
• Limit metrics to correct operation only (no
failure or errors)
• Study service rate and resources consumed
A) Elapsed time per call
B) Maximum call rate per unit time
C) Local CPU time per call
D) Remote CPU time per call
E) Number of bytes sent per call
Parameters
• System parameters
– Speed of CPUs: local, remote
– Network: speed, reliability (retransmissions)
– Operating system overhead: for interfacing
with channels, for interfacing with the network
• Workload parameters
– Time between calls
– Number and sizes of parameters and of results
– Type of channel: rpc, rpipe
– Other loads: on CPUs, on the network
Key Factors
• Type of channel
– rpipe or rpc
• Speed of network
– Choose short (LAN) and across country (WAN)
• Size of parameters
– Small or large
• Number of calls
– 11 values: 8, 16, 32 …1024
• All other parameters are fixed
• (Note, try to run during “light” network load)
Evaluation Technique
• Since there are prototypes, use
measurement
• Use analytic modeling based on measured
data for values outside the scope of the
experiments conducted
Workload
• A synthetic program generates the specified
channel requests
• Will also monitor resources consumed and
log results
• Use “null” channel requests to get baseline
resources consumed by logging
– (Remember the Heisenberg principle!)
Experimental Design
• Full factorial (all possible combinations of
factors)
• 2 channels, 2 network speeds, 2 sizes, 11
numbers of calls
– 2 x 2 x 2 x 11 = 88 experiments
Data Analysis
• Analysis of variance will be used to
quantify the first three factors
– Are they different?
• Regression will be used to quantify the
effects of n consecutive calls
– Performance is linear? Exponential?
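A sketch of both analyses on invented example data (the measurements below are not from the case study): a one-way ANOVA to test whether channel type matters, and a least-squares fit to check whether elapsed time grows linearly with the number of consecutive calls.

```python
# Sketch of the planned analysis on invented example data.
import numpy as np
from scipy import stats

# ANOVA: does channel type change elapsed time per call? (hypothetical samples, ms)
rpc_times   = [12.1, 11.8, 12.5, 12.0]
rpipe_times = [9.9, 10.3, 10.1, 10.0]
f_stat, p_value = stats.f_oneway(rpc_times, rpipe_times)
print(f"ANOVA: F={f_stat:.1f}, p={p_value:.4f}")

# Regression: is total elapsed time linear in the number of consecutive calls?
n_calls = np.array([1, 2, 4, 8, 16, 32])
elapsed = np.array([12, 23, 45, 90, 178, 360])        # hypothetical totals, ms
slope, intercept, r, p, se = stats.linregress(n_calls, elapsed)
print(f"regression: elapsed ~ {slope:.1f} * n + {intercept:.1f}  (r^2={r**2:.3f})")
```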
