Intro PDF
Outline
• Objectives (next)
• The Art
• Common Mistakes
• Systematic Approach
• Case Study

Objectives (1 of 6)
• Select appropriate evaluation techniques, performance metrics and workloads for a system.
  – Techniques: measurement, simulation, analytic modeling
Objectives (2 of 6)
• Conduct performance measurements correctly
  – Need two tools: load generator and monitor
• Example: Which workload would be appropriate to measure performance for the following systems?
  – a) Utilization on a LAN
  – b) Response time from a Web server
  – c) Audio quality in a VoIP network

Objectives (3 of 6)
• Use proper statistical techniques to compare several alternatives
  – One run of a workload is often not sufficient
    • Many non-deterministic computer events affect performance
  – Comparing the average of several runs may also not lead to correct results
    • Especially if variance is high
• Example: Packets lost on a link. Which link is better? (A short comparison sketch follows below.)

    File Size   Link A   Link B
    1000        5        10
    1200        7        3
    1300        3        0
    50          0        1
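A minimal sketch (Python, not from the slides) of the comparison above: it treats the table entries as packets lost per file transfer and shows that the two links' means are close while the per-file differences change sign, so neither link clearly wins without proper statistical treatment.

```python
# Minimal sketch: comparing Link A and Link B using the packet-loss
# counts from the table above. Illustrates why looking only at the
# mean can mislead when variance is high.
from statistics import mean, stdev

file_sizes = [1000, 1200, 1300, 50]
loss_a = [5, 7, 3, 0]   # packets lost on Link A
loss_b = [10, 3, 0, 1]  # packets lost on Link B

print("mean loss A:", mean(loss_a), "stdev:", round(stdev(loss_a), 2))
print("mean loss B:", mean(loss_b), "stdev:", round(stdev(loss_b), 2))

# Paired differences (A - B) per file change sign, so neither link
# dominates; with so few observations and this much variance, a
# confidence interval on the mean difference includes zero.
diffs = [a - b for a, b in zip(loss_a, loss_b)]
print("per-file differences A-B:", diffs, "mean:", mean(diffs))
```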
Objectives (4 of 6)
• Design measurement and simulation experiments to provide the most information with the least effort.
  – Often there are many factors that affect performance. Separate out the effects that individually matter.
• Example: The performance of a system depends upon three factors:
  – A) garbage collection technique: G1, G2, none
  – B) type of workload: editing, compiling, AI
  – C) type of CPU: P2, P4, Sparc
  How many experiments are needed? How can the performance of each factor be estimated? (See the enumeration sketch below.)

Objectives (5 of 6)
• Perform simulations correctly
  – Select correct language, seeds for random numbers, length of simulation run, and analysis
  – Before all of that, may need to validate the simulator
• Example: To compare the performance of two cache replacement algorithms:
  – A) how long should the simulation be run?
  – B) what can be done to get the same accuracy with a shorter run?
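A minimal sketch, not from the slides, that enumerates a full-factorial design over the three factors in the example; with three levels per factor this needs 3 x 3 x 3 = 27 experiments, and a factor's effect can then be estimated (in the simplest case) by comparing the metric averaged over each of its levels.

```python
# Minimal sketch: enumerate a full-factorial design for the three
# factors in the example (3 levels each -> 3 * 3 * 3 = 27 experiments).
from itertools import product

gc_techniques = ["G1", "G2", "none"]
workloads     = ["editing", "compiling", "AI"]
cpus          = ["P2", "P4", "Sparc"]

experiments = list(product(gc_techniques, workloads, cpus))
print("number of experiments:", len(experiments))  # 27

# One (simplified) way to estimate a factor's effect: average the
# measured metric over all runs at each level of that factor and
# compare the level means. The results dict here would be filled in
# with measured values, e.g. {("G1", "editing", "P2"): 12.3, ...}.
```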
Objectives (6 of 6)
• Select appropriate evaluation techniques, performance metrics and workloads for a system.
• Conduct performance measurements correctly.
• Use proper statistical techniques to compare several alternatives.
• Design measurement and simulation experiments to provide the most information with the least effort.
• Perform simulations correctly.
• Use simple queuing models to analyze the performance of systems.

Outline
• Objectives (done)
• The Art (next)
• Common Mistakes
• Systematic Approach
• Case Study
Example: Comparing Two Systems
• Two systems, two workloads, measure transactions per second

The Ratio Game
• Take system B as the base (a hypothetical numeric illustration follows below)
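The example's actual throughput tables are not reproduced above, so the sketch below uses hypothetical numbers purely to illustrate the ratio game: averaging per-workload ratios normalized to A makes B look faster, while normalizing to B makes A look faster.

```python
# Hypothetical throughputs (transactions/sec) for two systems on two
# workloads, chosen only to illustrate how the choice of base system
# changes the apparent winner when comparing averaged ratios.
throughput = {          # workload: (system A, system B)
    "workload 1": (30, 10),
    "workload 2": (10, 20),
}

def mean_ratio(base_index):
    """Average of per-workload ratios, normalized to the given base system."""
    other = 1 - base_index
    ratios = [pair[other] / pair[base_index] for pair in throughput.values()]
    return sum(ratios) / len(ratios)

print("B relative to base A:", mean_ratio(0))  # (10/30 + 20/10)/2 ~ 1.17 -> B looks better
print("A relative to base B:", mean_ratio(1))  # (30/10 + 10/20)/2 = 1.75 -> A looks better
```

The usual guidance is to compare using the raw measurements (or an appropriate aggregate) rather than averaged ratios, since the ratio-based conclusion flips with the choice of base.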
Outline
• Objectives (done)
• The Art (done)
• Common Mistakes (done)
• Systematic Approach (next)
• Case Study

A Systematic Approach
1. State goals and define boundaries
2. Select performance metrics
3. List system and workload parameters
4. Select factors and values
5. Select evaluation techniques
6. Select workload
7. Design experiments
8. Analyze and interpret the data
9. Present the results. Repeat.
Select Evaluation Technique
• Depends upon time, resources and desired level of accuracy
• Analytic modeling
  – Quick, less accurate
• Simulation
  – Medium effort, medium accuracy
• Measurement
  – Typically most effort, most accurate
• Note: the above are typical, but can be reversed in some cases!

Select Workload
• Set of service requests to the system
• Depends upon the evaluation technique
  – Analytic model may have probabilities of various requests
  – Simulation may have a trace of requests from a real system
  – Measurement may have scripts that impose transactions
• Should be representative of real life
Case Study
• Consider remote pipes (rpipe) versus remote procedure calls (rpc)
  – rpc is like a procedure call, but the procedure is handled on a remote server
    • Client caller blocks until return
  – rpipe is like a pipe, but the server gets the output on a remote machine
    • Client process can continue, non-blocking
• Goal: compare the performance of applications using rpipes to similar applications using rpcs (a small sketch contrasting the two calling styles follows below)

System Definition
• Client and Server and Network
• Key component is the "channel", either a rpipe or an rpc
  – Only the subset of the client and server that handle the channel are part of the system
  – Try to minimize the effect of components outside the system
• [Diagram: Client <-> Network <-> Server]
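A minimal sketch of the blocking versus non-blocking distinction described in the case study; it is not the actual rpipe/rpc implementation and uses a local thread pool as a stand-in for the remote server, just to make the client-side timing difference visible.

```python
# Minimal sketch: an rpc-style call blocks until the result returns,
# while an rpipe-style call lets the client continue while the server
# works. A thread pool stands in for the remote server (an assumption;
# the real case study uses actual remote machines and a network).
import time
from concurrent.futures import ThreadPoolExecutor

def remote_work(data):
    time.sleep(0.1)          # pretend network + server processing time
    return len(data)

server = ThreadPoolExecutor(max_workers=2)

# rpc-style: block until the "remote" result is available
start = time.perf_counter()
result = server.submit(remote_work, b"payload").result()
print(f"rpc-style call returned {result} after {time.perf_counter() - start:.2f}s")

# rpipe-style: hand the data to the channel and keep going
start = time.perf_counter()
future = server.submit(remote_work, b"payload")   # client does not wait
print(f"rpipe-style call returned to caller after {time.perf_counter() - start:.4f}s")
future.result()                                   # collect the output later if needed
server.shutdown()
```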
Services
• There are a variety of services that can happen over a rpipe or rpc
• Choose data transfer as a common one, with data being a typical result of most client-server interactions
• Classify the amount of data as either large or small
• Thus, two services:
  – Small data transfer
  – Large data transfer

Metrics
• Limit metrics to correct operation only (no failures or errors)
• Study service rate and resources consumed (see the sketch below):
  – A) elapsed time per call
  – B) maximum call rate per unit time
  – C) local CPU time per call
  – D) remote CPU time per call
  – E) number of bytes sent per call
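A minimal sketch of how metrics A and C above (elapsed time and local CPU time per call) might be collected on the client; the channel call is a placeholder, and the remote CPU time and bytes sent per call would need server-side and network monitoring that is not shown here.

```python
# Minimal sketch: collecting metric A (elapsed time per call) and
# metric C (local CPU time per call) on the client side by averaging
# over many calls. The channel call is a stand-in for an rpipe/rpc
# data transfer.
import time

def channel_call(data):
    return sum(data)         # placeholder for a real data-transfer request

payload = list(range(1024))  # stand-in for a "large" parameter
n_calls = 1000

wall_start = time.perf_counter()
cpu_start = time.process_time()
for _ in range(n_calls):
    channel_call(payload)
elapsed_per_call = (time.perf_counter() - wall_start) / n_calls
local_cpu_per_call = (time.process_time() - cpu_start) / n_calls

print(f"elapsed time per call:   {elapsed_per_call * 1e6:.1f} us")
print(f"local CPU time per call: {local_cpu_per_call * 1e6:.1f} us")
```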
Parameters

System parameters:
• Speed of CPUs
  – Local
  – Remote
• Network
  – Speed
  – Reliability (retransmissions)
• Operating system overhead
  – For interfacing with channels
  – For interfacing with network

Workload parameters:
• Time between calls
• Number and sizes
  – of parameters
  – of results
• Type of channel
  – rpc
  – rpipe
• Other loads
  – On CPUs
  – On network

Key Factors
• Type of channel
  – rpipe or rpc
• Speed of network
  – Choose short (LAN) and across country (WAN)
• Size of parameters
  – Small or large
• Number of calls
  – 11 values: 8, 16, 32, ... 1024
• All other parameters are fixed (a full-factorial run count is sketched below)
  – (Note: try to run during "light" network load)
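Assuming a full-factorial design over the factor levels above (the slides do not state the experimental design explicitly), the number of runs per replication works out as follows.

```python
# Run count for a full-factorial design over the key factors above
# (an assumption; the slides do not say which design the case study
# actually uses).
channels      = ["rpipe", "rpc"]
networks      = ["LAN", "WAN"]
param_sizes   = ["small", "large"]
n_call_values = 11            # 8, 16, 32, ... 1024 per the slide

runs = len(channels) * len(networks) * len(param_sizes) * n_call_values
print("experiments per replication:", runs)   # 2 * 2 * 2 * 11 = 88
```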
Evaluation Technique
• Since there are prototypes, use measurement
• Use analytic modeling based on measured data for values outside the scope of the experiments conducted

Workload
• Synthetic program generates the specified channel requests (see the sketch below)
• Will also monitor resources consumed and log results
• Use "null" channel requests to get the baseline resources consumed by logging
  – (Remember the Heisenberg principle!)
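A minimal sketch of the synthetic-workload idea above: issue a batch of channel requests, log per-call timings, and run the same loop with "null" requests to estimate how much the monitoring and logging themselves cost. The request functions are placeholders, not the case study's real rpipe/rpc code.

```python
# Minimal sketch of the synthetic workload: generate channel requests,
# log the resources consumed, and use "null" requests to estimate the
# overhead added by the monitoring/logging itself (the Heisenberg point).
import csv
import time

def channel_request(size):
    return bytes(size)        # stand-in for a real data-transfer request

def null_request(size):
    return None               # does no work: isolates measurement overhead

def run_batch(request_fn, size, n, log_path):
    """Run n requests, logging each call's elapsed time; return mean time per call."""
    with open(log_path, "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["call", "elapsed_s"])
        start = time.perf_counter()
        for i in range(n):
            t0 = time.perf_counter()
            request_fn(size)
            log.writerow([i, time.perf_counter() - t0])
        return (time.perf_counter() - start) / n

per_call = run_batch(channel_request, 1024, 1000, "channel.csv")
baseline = run_batch(null_request, 1024, 1000, "null.csv")
print(f"measured per call: {per_call * 1e6:.1f} us, "
      f"logging baseline: {baseline * 1e6:.1f} us")
```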