The document outlines the objectives of a course on modeling and performance evaluation of network and computer systems. It discusses six main objectives: 1) selecting appropriate evaluation techniques, metrics, and workloads; 2) conducting performance measurements correctly; 3) using proper statistical techniques to compare alternatives; 4) designing experiments that provide the most information with the least effort; 5) performing simulations correctly; and 6) using simple queuing models to analyze system performance. It also discusses common mistakes to avoid, such as undefined goals, biased goals, unrepresentative workloads, wrong evaluation techniques, inappropriate levels of detail, lack of sensitivity analysis, and improper presentation of results.


CS533
Modeling and Performance Evaluation of Network and Computer Systems
Introduction
(Chapters 1 and 2)

Let’s Get Started!
• Describe a performance study you have done
  – Work or School or …
• Describe a performance study you have recently read about
  – Research paper
  – Newspaper article
  – Scientific journal
• And list one good thing or one bad thing about it

Outline
• Objectives (next)
• The Art
• Common Mistakes
• Systematic Approach
• Case Study

Objectives (1 of 6)
• Select appropriate evaluation techniques, performance metrics and workloads for a system.
  – Techniques: measurement, simulation, analytic modeling
  – Metrics: criteria to study performance (ex: response time)
  – Workloads: requests by users/applications to the system
• Example: What performance metrics should you use for the following systems?
  – a) Two disk drives
  – b) Two transaction processing systems
  – c) Two packet retransmission algorithms

Objectives (2 of 6)
• Conduct performance measurements correctly
  – Need two tools: load generator and monitor
• Example: Which workload would be appropriate to measure performance for the following systems?
  – a) Utilization on a LAN
  – b) Response time from a Web server
  – c) Audio quality in a VoIP network

Objectives (3 of 6)
• Use proper statistical techniques to compare several alternatives
  – One run of a workload is often not sufficient
    • Many non-deterministic computer events affect performance
  – Comparing the average of several runs may also not lead to correct results
    • Especially if variance is high
• Example: Packets lost on a link. Which link is better?

  File Size   Link A   Link B
  1000        5        10
  1200        7        3
  1300        3        0
  50          0        1
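
For the packet-loss table above, a minimal Python sketch of the kind of statistical comparison meant here: look at the mean of the runs and a confidence interval for the paired difference, rather than a single run (the 95% level and the use of a paired t-interval are illustrative assumptions, not from the slides):

    import math
    from statistics import mean, stdev

    # Packets lost per run (one row per file size) from the table above
    link_a = [5, 7, 3, 0]
    link_b = [10, 3, 0, 1]

    diffs = [a - b for a, b in zip(link_a, link_b)]   # paired differences A - B
    d_bar = mean(diffs)
    sd = stdev(diffs)

    # 95% confidence interval for the mean difference (t value for 3 degrees
    # of freedom). If the interval contains 0, the data do not show that one
    # link is better than the other.
    t_crit = 3.182
    half_width = t_crit * sd / math.sqrt(len(diffs))
    print(f"mean(A)={mean(link_a):.2f}  mean(B)={mean(link_b):.2f}")
    print(f"mean difference = {d_bar:.2f} +/- {half_width:.2f}")

Here the interval easily contains zero, which matches the slide's point: with this much variance, neither link can be declared better from these runs alone.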

Objectives (4 of 6)
• Design measurement and simulation experiments to provide the most information with the least effort.
  – Often many factors affect performance. Separate out the effects that individually matter.
• Example: The performance of a system depends upon three factors:
  – A) garbage collection technique: G1, G2, none
  – B) type of workload: editing, compiling, AI
  – C) type of CPU: P2, P4, Sparc
  How many experiments are needed? How can the performance of each factor be estimated? (A sketch follows below.)

Objectives (5 of 6)
• Perform simulations correctly
  – Select correct language, seeds for random numbers, length of simulation run, and analysis
  – Before all of that, may need to validate the simulator
• Example: To compare the performance of two cache replacement algorithms:
  – A) how long should the simulation be run?
  – B) what can be done to get the same accuracy with a shorter run?

Objectives (6 of 6)
• Select appropriate evaluation techniques, performance metrics and workloads for a system.
• Conduct performance measurements correctly.
• Use proper statistical techniques to compare several alternatives.
• Design measurement and simulation experiments to provide the most information with the least effort.
• Perform simulations correctly.
• Use simple queuing models to analyze the performance of systems.

Outline
• Objectives (done)
• The Art (next)
• Common Mistakes
• Systematic Approach
• Case Study

The Art of Performance Evaluation
• Evaluation cannot be produced mechanically
  – Requires intimate knowledge of system
  – Careful selection of methodology, workload, tools
• No one correct answer, as two performance analysts may choose different metrics or workloads
• Like art, there are techniques to learn
  – how to use them
  – when to apply them

Example: Comparing Two Systems
• Two systems, two workloads, measure transactions per second

  System   Workload 1   Workload 2
  A        20           10
  B        10           20

• Which is better?

Example: Comparing Two Systems
• Two systems, two workloads, measure transactions per second

  System   Workload 1   Workload 2   Average
  A        20           10           15
  B        10           20           15

• They are equally good!
• … but is A better than B?

The Ratio Game
• Take system B as the base

  System   Workload 1   Workload 2   Average
  A        2            0.5          1.25
  B        1            1            1

• A is better!
• … but is B better than A?
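
A short sketch of the ratio game above: normalizing the same throughput numbers to different bases reverses the conclusion, which is why averaged ratios alone cannot decide which system is better:

    # Transactions per second from the table above
    perf = {"A": {"w1": 20, "w2": 10},
            "B": {"w1": 10, "w2": 20}}

    def avg_ratio(system, base):
        """Average of per-workload ratios, taking 'base' as 1.0."""
        ratios = [perf[system][w] / perf[base][w] for w in ("w1", "w2")]
        return sum(ratios) / len(ratios)

    print(avg_ratio("A", base="B"))   # 1.25 -> with B as the base, A looks 25% better
    print(avg_ratio("B", base="A"))   # 1.25 -> with A as the base, B looks 25% better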

Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (next)
• Systematic Approach
• Case Study

Common Mistakes (1 of 3)
• Undefined Goals
  – There is no such thing as a general model
  – Describe goals and then design experiments
  – (Don’t shoot and then draw target)
• Biased Goals
  – Don’t show YOUR system better than HERS
  – (Performance analysis is like a jury)
• Unrepresentative Workload
  – Should be representative of how system will work “in the wild”
  – Ex: large and small packets? Don’t test with only large or only small

Common Mistakes (2 of 3)
• Wrong Evaluation Technique
  – Use most appropriate: model, simulation, measurement
  – (Don’t have a hammer and see everything as a nail)
• Inappropriate Level of Detail
  – Can have too much! Ex: modeling disk
  – Can have too little! Ex: analytic model for congested router
• No Sensitivity Analysis
  – Analysis is evidence and not fact
  – Need to determine how sensitive results are to settings

Common Mistakes (3 of 3)
• Improper Presentation of Results
  – What matters is not the number of graphs, but the number of graphs that help make decisions
• Omitting Assumptions and Limitations
  – Ex: may assume most traffic is TCP, whereas some links may have significant UDP traffic
  – May lead to applying results where assumptions do not hold

Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (done)
• Systematic Approach (next)
• Case Study

A Systematic Approach
1. State goals and define boundaries
2. Select performance metrics
3. List system and workload parameters
4. Select factors and values
5. Select evaluation techniques
6. Select workload
7. Design experiments
8. Analyze and interpret the data
9. Present the results. Repeat.

State Goals and Define Boundaries
• Just “measuring performance” or “seeing how it works” is too broad
  – Ex: goal is to decide which ISP provides better throughput
• Definition of system may depend upon goals
  – Ex: if measuring CPU instruction speed, system may include CPU + cache
  – Ex: if measuring response time, system may include CPU + memory + … + OS + user workload

Select Metrics
• Criteria to compare performance
• In general, related to speed, accuracy and/or availability of system services
• Ex: network performance
  – Speed: throughput and delay
  – Accuracy: error rate
  – Availability: data packets sent do arrive
• Ex: processor performance
  – Speed: time to execute instructions

List Parameters
• List all parameters that affect performance
• System parameters (hardware and software)
  – Ex: CPU type, OS type, …
• Workload parameters
  – Ex: number of users, type of requests
• List may not be initially complete, so keep a working list and let it grow as the study progresses

Select Factors to Study
• Divide parameters into those that are to be studied and those that are not
  – Ex: may vary CPU type but fix OS type
  – Ex: may fix packet size but vary number of connections
• Select appropriate levels for each factor
  – Want typical values and ones with potentially high impact
  – For workload, often a smaller (1/2 or 1/10th) and larger (2x or 10x) range
  – Start small or the number of experiments can quickly overwhelm available resources!

Select Evaluation Technique
• Depends upon time, resources and desired level of accuracy
• Analytic modeling
  – Quick, less accurate
• Simulation
  – Medium effort, medium accuracy
• Measurement
  – Typically most effort, most accurate
• Note, the above are all typical but can be reversed in some cases!

Select Workload
• Set of service requests to the system
• Depends upon the evaluation technique
  – Analytic model may have probability of various requests
  – Simulation may have trace of requests from a real system
  – Measurement may have scripts impose transactions
• Should be representative of real life

Design Experiments
• Want to maximize results with minimal effort
• Phase 1:
  – Many factors, few levels
  – See which factors matter (a sketch follows below)
• Phase 2:
  – Few factors, more levels
  – See where the range of impact for the factors is

Analyze and Interpret Data
• Compare alternatives
• Take into account variability of results
  – Statistical techniques
• Interpret results
  – The analysis does not provide a conclusion
  – Different analysts may come to different conclusions
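
A minimal sketch of the Phase 1 screening idea: run every factor at only two levels, then rank factors by the size of their main effect (the factor names, levels, and measured numbers here are purely illustrative placeholders):

    from itertools import product
    from statistics import mean

    # Phase 1: many factors, few (two) levels each -- a 2^k screening design.
    factors = {"cpu": ["slow", "fast"],
               "cache": ["off", "on"],
               "net": ["LAN", "WAN"]}

    runs = list(product(*factors.values()))        # 2^3 = 8 experiments
    # 'measured' would hold the metric (e.g., response time) for each run;
    # placeholder numbers are used here just so the sketch executes.
    measured = {run: float(i) for i, run in enumerate(runs)}

    def main_effect(name):
        """Difference in mean response between the two levels of one factor."""
        idx = list(factors).index(name)
        lo, hi = factors[name]
        level_mean = lambda lvl: mean(v for r, v in measured.items() if r[idx] == lvl)
        return level_mean(hi) - level_mean(lo)

    for name in factors:
        print(name, main_effect(name))

Factors with small main effects can then be fixed at typical values, and Phase 2 explores more levels of the few factors that matter.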

Present Results
• Make it easily understood
• Graphs
• Disseminate (entire methodology!)

"The job of a scientist is not merely to see: it is to see, understand, and communicate. Leave out any of these phases, and you're not doing science. If you don't see, but you do understand and communicate, you're a prophet, not a scientist. If you don't understand, but you do see and communicate, you're a reporter, not a scientist. If you don't communicate, but you do see and understand, you're a mystic, not a scientist."

Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (done)
• Systematic Approach (done)
• Case Study (next)

Case Study
• Consider remote pipes (rpipe) versus remote procedure calls (rpc)
  – rpc is like a procedure call, but the procedure is handled on a remote server
    • Client caller blocks until return
  – rpipe is like a pipe, but the server gets the output on a remote machine
    • Client process can continue, non-blocking
• Goal: compare the performance of applications using rpipes to similar applications using rpcs
• (A toy sketch of the blocking vs. non-blocking difference follows below)

System Definition
• Client and Server and Network
  (Diagram: Client <-> Network <-> Server)
• Key component is the “channel”, either an rpipe or an rpc
  – Only the subset of the client and server that handle the channel are part of the system
  – Try to minimize the effect of components outside the system
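
A toy Python sketch of the semantic difference being compared (threads stand in for the remote server purely for illustration; real rpc and rpipe are operating-system / RPC-library mechanisms, not Python code):

    import threading
    import time

    def remote_work(payload):
        time.sleep(0.01)            # stand-in for the work done on the remote server

    def rpc_call(payload):
        remote_work(payload)        # caller blocks until the result returns

    def rpipe_send(payload):
        t = threading.Thread(target=remote_work, args=(payload,))
        t.start()                   # caller continues immediately (non-blocking)
        return t

The performance question in the case study is how much this difference is worth for small and large data transfers.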

Services
• There are a variety of services that can happen over an rpipe or rpc
• Choose data transfer as a common one, with data being a typical result of most client-server interactions
• Classify the amount of data as either large or small
• Thus, two services:
  – Small data transfer
  – Large data transfer

Metrics
• Limit metrics to correct operation only (no failures or errors)
• Study service rate and resources consumed:
  A) elapsed time per call
  B) maximum call rate per unit time
  C) local CPU time per call
  D) remote CPU time per call
  E) number of bytes sent per call
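
For metrics A, C, and D, a minimal sketch of how per-call elapsed time and local CPU time could be measured (Python's timers are used purely for illustration; make_call is a hypothetical stand-in for one rpc or rpipe data transfer, and remote CPU time would need a corresponding monitor on the server):

    import time

    def measure(make_call, n=1000):
        """Return mean elapsed (wall-clock) and local CPU seconds per call."""
        t0_wall = time.perf_counter()
        t0_cpu = time.process_time()
        for _ in range(n):
            make_call()             # one rpc or rpipe data transfer
        wall = (time.perf_counter() - t0_wall) / n
        cpu = (time.process_time() - t0_cpu) / n
        return wall, cpu

For blocking calls, metric B (maximum call rate per unit time) is roughly the reciprocal of the elapsed time per call.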

Parameters

System parameters:
• Speed of CPUs
  – Local
  – Remote
• Network
  – Speed
  – Reliability (retransmissions)
• Operating system overhead
  – For interfacing with channels
  – For interfacing with network

Workload parameters:
• Time between calls
• Number and sizes
  – of parameters
  – of results
• Type of channel
  – rpc
  – rpipe
• Other loads
  – On CPUs
  – On network

Key Factors
• Type of channel
  – rpipe or rpc
• Speed of network
  – Choose short (LAN) and across country (WAN)
• Size of parameters
  – Small or large
• Number of calls
  – 11 values: 8, 16, 32 … 1024
• All other parameters are fixed
• (Note, try to run during “light” network load)

Evaluation Technique
• Since there are prototypes, use measurement
• Use analytic modeling based on measured data for values outside the scope of the experiments conducted

Workload
• Synthetic program generating the specified channel requests
• Will also monitor resources consumed and log results
• Use “null” channel requests to get baseline resources consumed by logging
  – (Remember the Heisenberg principle!)

Experimental Design
• Full factorial (all possible combinations of factors)
• 2 channels, 2 network speeds, 2 sizes, 11 numbers of calls
  – → 2 x 2 x 2 x 11 = 88 experiments
• (An enumeration sketch follows below)

Data Analysis
• Analysis of variance will be used to quantify the effects of the first three factors
  – Are they different?
• Regression will be used to quantify the effects of n consecutive calls
  – Is performance linear? Exponential?
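
A sketch of enumerating that full-factorial design (the exact list of 11 call counts beyond those shown on the Key Factors slide is not spelled out, so a placeholder list of the right length is used):

    from itertools import product

    channels = ["rpipe", "rpc"]
    networks = ["LAN", "WAN"]          # short vs. across-country
    sizes    = ["small", "large"]
    # 11 numbers of calls ("8, 16, 32 ... 1024" on the Key Factors slide);
    # placeholder values here -- only the count matters for the design size.
    num_calls = list(range(11))

    experiments = list(product(channels, networks, sizes, num_calls))
    print(len(experiments))            # 2 x 2 x 2 x 11 = 88

Each of the 88 runs would then be measured, with analysis of variance applied to the channel, network, and size factors and regression applied to the number of calls, as described under Data Analysis.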
