
01 - Introduction To Computer System Performance Evaluation

The document provides an overview of performance evaluation and common mistakes made in performance evaluation projects. It discusses key topics around performance evaluation including goals, metrics, workloads, techniques, parameters, factors, and experimental design. Examples of common mistakes are not defining clear goals, using biased goals, an unsystematic approach, incorrect metrics, unrepresentative workloads, and inappropriate experimental design.

Simulation and Modeling

• Syllabus
• Microsoft Teams
• Term Project (Slide 00)

1
Introduction to Computer
System Performance
Evaluation
010123211 Simulation and Modeling
Yuenyong Nilsiam

2
An Overview of Performance
Evaluation
• Users, admins, and designers are all interested in
performance evaluation.
• We all want the highest performance at the lowest
cost.
• Performance evaluation is required at every stage: design, manufacturing, sales/purchase, use, upgrade, and so on.
• Performance evaluation needs
• The right measures of performance
• The right measurement environment
• The right techniques

3
Outline of topics
• The purpose is to explain performance evaluation terminology and techniques, which are used for:
• Specifying performance requirements,
• Evaluating design alternatives,
• Comparing two or more systems,
• Determining the optimal value of a parameter (system
tuning),
• Finding the performance bottleneck,
• Characterizing the load on the system (workload
characterization),
• Determining the number and sizes of components (capacity
planning),
• And predicting the performance at future load (forecasting)

4
A System
• Any collection of hardware, software, and
firmware.
• Does a computer have firmware?

5
Examples of the types of problems
• Select appropriate evaluation techniques,
performance metrics, and workloads for a system.
• Techniques: measurement, simulation, and analytical
modeling.
• Metrics: criteria; for example, response time
• Workloads: CPU workload
• Conduct performance measurements correctly.
• Two tools: load generator and monitor

6
Examples of the types of problems
• Use proper statistical techniques to compare
several alternatives.

• Design measurement and simulation experiments


to provide the most information with the least
effort.
• How many experiments are needed?
• How does one estimate the performance impact of each
factor?
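• One way to answer the last question: in a 2² factorial design, the main effect of a factor can be estimated from a sign table as the difference between its high-level and low-level averages. A minimal sketch with made-up data and hypothetical factor names (an illustration, not the book's example):

import statistics  # not strictly needed; kept minimal on purpose

# Hypothetical 2^2 factorial design: factors A (cache size) and B (clock speed),
# each at two levels (-1 = low, +1 = high). Responses y are invented throughputs.
runs = [  # (A, B, measured response y)
    (-1, -1, 15.0),
    (+1, -1, 45.0),
    (-1, +1, 25.0),
    (+1, +1, 75.0),
]

n = len(runs)
mean_y = sum(y for _, _, y in runs) / n
# Main effect of a factor = average response at its high level
# minus average response at its low level.
effect_A = sum(a * y for a, _, y in runs) / (n / 2)
effect_B = sum(b * y for _, b, y in runs) / (n / 2)
# Interaction effect AB uses the product of the two sign columns.
effect_AB = sum(a * b * y for a, b, y in runs) / (n / 2)
# Note: in the usual 2^k-design regression notation, the coefficient q_A is half of effect_A.

print(f"mean={mean_y}, effect A={effect_A}, effect B={effect_B}, interaction AB={effect_AB}")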
7
Examples of the types of problems
• Perform simulations correctly.
• What type of simulation model should be used?
• How long should the simulation be run?
• What can be done to get the same accuracy with a
shorter run?
• How can one decide if the random-number generator in
the simulation is a good generator?
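• One simple check of a generator is a chi-square test for uniformity on binned samples; a minimal sketch using Python's built-in generator (sample size, bin count, and critical value are illustrative choices, not prescriptions from the text):

import random

random.seed(42)            # fixed seed so the check is reproducible
n_samples = 10_000
n_bins = 10

# Draw samples in [0, 1) and count how many fall in each equal-width bin.
counts = [0] * n_bins
for _ in range(n_samples):
    u = random.random()
    counts[min(int(u * n_bins), n_bins - 1)] += 1

expected = n_samples / n_bins
chi_square = sum((c - expected) ** 2 / expected for c in counts)

# For 9 degrees of freedom, the 5% critical value is about 16.92.
print(f"chi-square = {chi_square:.2f} (compare against ~16.92 at alpha = 0.05)")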

8
Examples of the types of problems
• Use simple queueing models to analyze the
performance of systems.
• Queueing model to determine:
• System utilization
• Average service time per query
• Number of queries completed during the observation interval
• Average number of jobs in the system
• Probability of number of jobs in the system being greater than
10
• 90-percentile response time
• 90-percentile waiting time
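• Several of these quantities follow directly from operational laws applied to observed data; a minimal sketch with made-up observation numbers (the percentiles and tail probabilities would need a queueing model and are not computed here):

# Hypothetical observation of a database server over an interval T.
T = 3600.0          # observation interval, seconds
busy_time = 2880.0  # seconds the server was busy
completions = 7200  # queries completed during the interval
total_time_in_system = 18000.0  # sum over queries of (departure - arrival), seconds

utilization = busy_time / T                     # U = B / T
throughput = completions / T                    # X = C / T  (queries per second)
mean_service_time = busy_time / completions     # S = B / C  (seconds per query)
mean_jobs_in_system = total_time_in_system / T  # time-average N; equals X * R (Little's law)

print(f"U = {utilization:.2f}, X = {throughput:.2f}/s, "
      f"S = {mean_service_time:.3f}s, N = {mean_jobs_in_system:.2f}")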

9
1.2 The art of performance evaluation

• Performance evaluation is an art.


• Every evaluation requires an intimate knowledge
of the system and a careful selection of the
methodology, workload, and tools.
• The “art” lies in going from an abstract feeling for the problem to a concrete formulation and picking the right tools and techniques.
• Each analyst has a unique style.
• Same problem, different performance metrics and evaluation methodologies
• Same data, different interpretations

10
Example 1.7

“ratio game”
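• A small invented illustration of the ratio game (these are not the numbers of Example 1.7): each system is faster on one workload, and averaging ratios normalized to either system makes that system look better.

# Hypothetical execution times (seconds, lower is better) on two workloads.
times = {"A": [10.0, 20.0], "B": [20.0, 10.0]}

def avg_ratio_vs(base):
    """Average of each system's times normalized to the base system's times."""
    return {
        name: sum(t / b for t, b in zip(ts, times[base])) / len(ts)
        for name, ts in times.items()
    }

print("Normalized to A:", avg_ratio_vs("A"))  # B averages 1.25 -> A looks better
print("Normalized to B:", avg_ratio_vs("B"))  # A averages 1.25 -> B looks better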

11
Games
• Similar games
• Selecting workload,
• Measuring the systems, and
• Presenting the results
• Some games are intentional.
• Others are simply the result of a lack of knowledge.

12
1.3 Professional organization,
journals, and conferences
• ACM SIGMETRICS: The Association for Computing
Machinery’s Special Interest Group
• IEEE Computer Society: The Institute of Electrical and Electronics Engineers (IEEE) Computer Society
• ACM SIGSIM: The ACM’s special interest group on simulation
• CMG: The Computer Measurement Group, Inc.
• IFIP Working Group 7.3: The International Federation for
Information Processing
• The Society for Computer Simulation
• SIAM: The Society for Industrial and Applied Mathematics
• ORSA: The Operations Research Society of America

13
1.4 Performance projects
• The best way to learn a subject is to apply the
concepts to a real system.
• Student teams:
• Select a computer subsystem; a network mail program,
an operating system, a language compiler, a text editor,
a processor, or a database.
• Perform some measurements, analyze the collected
data, simulate or analytically model the subsystem,
predict its performance, and validate the model.

14
Example of some of the projects
• 1. Measure and compare the performance of window systems of two AI systems.
• 2. Simulate and compare the performance of two processor interconnection networks.
• 3. Measure and analyze the performance of two microprocessors.
• 4. Characterize the workload of a campus timesharing system.
• 5. Compute the effects of various factors and their interactions on the performance of two text-formatting programs.
• 6. Measure and analyze the performance of a distributed information system.
• 7. Simulate the communications controllers for an intelligent terminal system.
• 8. Measure and analyze the performance of a computer-aided design tool.
• 9. Measure and identify the factors that affect the performance of an experimental garbage collection algorithm.
• 10. Measure and compare the performance of remote procedure calls and remote pipe calls.
• 11. Analyze the effect of factors that impact the performance of two Reduced Instruction Set Computer (RISC) processor
architectures.
• 12. Analyze the performance of a parallel compiler running on a multiprocessor system.
• 13. Develop a software monitor to observe the performance of a large multiprocessor system.
• 14. Analyze the performance of a distributed game program running on a network of AI systems.
• 15. Compare the performance of several robot control algorithms.

15
Exercise
• The measured performance of two database
systems on two different work-loads is shown in
Table 1.6. Compare the performance of the two
systems and show that
• a. System A is better
• b. System B is better

16
Ch.2
Common mistakes and
how to avoid them

17
2.1 Common mistakes in
performance evaluation
1. No Goals:
• Each model must be developed with a particular goal in
mind.
• The metrics, workloads, and methodology all depend
upon the goal.
• First, it is important for the analyst to understand the
system and identify the problem to be solved.
• Understanding the problem sufficiently to write a set of
goals is difficult.

18
2.1 Common mistakes in
performance evaluation
2. Biased Goals:
• Implicit or explicit bias in stating the goals.
• For example, the goal is “to show that OUR system is
better than THEIRS”
3. Unsystematic Approach:
• Select system parameters, factors, metrics, and
workloads arbitrarily.

19
2.1 Common mistakes in
performance evaluation
4. Analysis without Understanding the Problem:
• A large share of the analysis effort (about 40%) goes into defining the problem.
• The rest (about 60%) goes into designing alternatives, interpreting the results, and presenting the conclusions.
5. Incorrect Performance Metrics:
• the criterion used to quantify the performance of the
system
• Throughput and response time
• Example: comparing the throughput (instructions per second) of two CPUs, a RISC and a CISC, is misleading because their instructions do unequal amounts of work.

20
2.1 Common mistakes in
performance evaluation
6. Unrepresentative Workload:
• The workload used to compare two systems should be
representative of the actual usage of the systems in the
field.
7. Wrong Evaluation Technique:
• There are three evaluation techniques: measurement,
simulation, and analytical modeling.
• Analysts often have a preference for one evaluation
technique that they use for every performance
evaluation problem.

21
2.1 Common mistakes in
performance evaluation
8. Overlooking Important Parameters:
• It is a good idea to make a complete list of system and
workload characteristics that affect the performance of
the system.
• These characteristics are called parameters.
9. Ignoring Significant Factors:
• Parameters that are varied in the study are called
factors.
• Factors that are under the control of the end user (or
decision maker) and can be easily changed by the end
user should be given preference over those that cannot
be changed.

22
2.1 Common mistakes in
performance evaluation
10. Inappropriate Experimental Design:
• Experimental design relates to the number of
measurement or simulation experiments to be
conducted and the parameter values used in each
experiment.
11. Inappropriate Level of Detail:
• Avoid formulations that are either too narrow or too
broad.
• A common mistake is to take the detailed approach
when a high-level model will do and vice versa.

23
2.1 Common mistakes in
performance evaluation
12. No Analysis:
• One of the common problems with measurement
projects is that they are often run by performance
analysts who are good in measurement techniques but
lack data analysis expertise.
13. Erroneous Analysis:
• For example, taking the average of ratios or running simulations that are too short.

24
2.1 Common mistakes in
performance evaluation
14. No Sensitivity Analysis:
• The fact that the results may be sensitive to the
workload and system parameters is often overlooked.
• Without a sensitivity analysis, one cannot be sure if the
conclusions would change if the analysis was done in a
slightly different setting.
15. Ignoring Errors in Input:
• Often the parameters of interest cannot be measured.
• Instead, another variable that can be measured is used
to estimate the parameter.

25
2.1 Common mistakes in
performance evaluation
16. Improper Treatment of Outliers:
• Values that are too high or too low compared to a
majority of values in a set are called outliers.
• If an outlier is not caused by a real system phenomenon,
it should be ignored.
17. Assuming No Change in the Future:
• The analyst and the decision makers should discuss this
assumption and limit the amount of time into the future
that predictions are made.

26
2.1 Common mistakes in
performance evaluation
18. Ignoring Variability:
• For example, decisions based on the daily averages of
computer demands may not be useful if the load demand has
large hourly peaks, which adversely impact user
performance.
19. Too Complex Analysis:
• It is better to start with simple models or experiments, get
some results or insights, and then introduce the
complications.
• There is a significant difference in the types of models
published in the literature and those used in the real world.
• In the industrial world, however, the chief concern is the guidance that the model provides, along with the time and cost to develop it.
27
2.1 Common mistakes in
performance evaluation
20. Improper Presentation of Results:
• An analysis that does not produce any useful results is a
failure, as is the analysis with results that cannot be
understood by the decision makers.
21. Ignoring Social Aspects:
• Successful presentation of the analysis results requires
two types of skills: social and substantive.
• Beginning analysts often fail to understand that social
skills are often more important than substantive skills.
• The decision makers are under time pressures and
would like to get to the final results as soon as possible.

28
2.1 Common mistakes in
performance evaluation
22. Omitting Assumptions and limitations:
• Assumptions and limitations of the analysis are often
omitted from the final report.
• This may lead the user to apply the analysis to another
context where the assumptions will not be valid.

29
Checklist for Avoiding Common
Mistakes in Performance Evaluation
1. Is the system correctly defined and the goals clearly stated?
2. Are the goals stated in an unbiased manner?
3. Have all the steps of the analysis been followed systematically?
4. Is the problem clearly understood before analyzing it?
5. Are the performance metrics relevant for this problem?
6. Is the workload correct for this problem?
7. Is the evaluation technique appropriate?
8. Is the list of parameters that affect performance complete?
9. Have all parameters that affect performance been chosen as factors to be varied?
10. Is the experimental design efficient in terms of time and results?
11. Is the level of detail proper?
12. Is the measured data presented with analysis and interpretation?
13. Is the analysis statistically correct?
14. Has the sensitivity analysis been done?
15. Would errors in the input cause an insignificant change in the results?
16. Have the outliers in the input or output been treated properly?
17. Have the future changes in the system and workload been modeled?
18. Has the variance of input been taken into account?
19. Has the variance of the results been analyzed?
20. Is the analysis easy to explain?
21. Is the presentation style suitable for its audience?
22. Have the results been presented graphically as much as possible?
23. Are the assumptions and limitations of the analysis clearly documented?

30
2.2 A Systematic Approach to
Performance Evaluation
• Most performance problems are unique.
• The metrics, workload, and evaluation techniques
used for one problem generally cannot be used for
the next problem.
• Nevertheless, there are steps common to all
performance evaluation projects that help you
avoid the common mistakes

31
2.2 A Systematic Approach to
Performance Evaluation
1. State Goals and Define the System:
• State the goals of the study and define what constitutes the system by delineating its boundaries.
• The choice of system boundaries affects the
performance metrics as well as workloads used to
compare the systems.
• Although the key consideration in setting the system
boundaries is the objective of the study, other
considerations, such as administrative control of the
sponsors of the study, may also need to be taken into
account.
32
2.2 A Systematic Approach to
Performance Evaluation
2. List Services and Outcomes:
• For example, a computer network allows its users to
send packets to specified destinations.
• A database system responds to queries.
• A processor performs a number of different instructions.
• When a user requests any of these services, there are a
number of possible outcomes.
• A list of services and possible outcomes is useful later in
selecting the right metrics and workloads.

33
2.2 A Systematic Approach to
Performance Evaluation
3. Select Metrics:
• The next step is to select criteria to compare the
performance.
• In general, the metrics are related to the speed,
accuracy, and availability of services.
• The performance of a network, for example, is
measured by the speed (throughput and delay),
accuracy (error rate), and availability of the packets sent.
• The performance of a processor is measured by the
speed of (time taken to execute) various instructions.

34
2.2 A Systematic Approach to
Performance Evaluation
4. List Parameters:
• Make a complete list of all the parameters that affect performance.
• The list can be divided into system parameters and
workload parameters.
• System parameters include both hardware and software
parameters, which generally do not vary among various
installations of the system.
• Workload parameters are characteristics of users’
requests, which vary from one installation to the next.
• The list of parameters may not be complete.

35
2.2 A Systematic Approach to
Performance Evaluation
5. Select Factors to Study:
• The list of parameters can be divided into two parts:
• those that will be varied during the evaluation and
• those that will not.
• The parameters to be varied are called factors and their
values are called levels.
• more influential parameters are ignored simply because
of the difficulty involved.
• In selecting factors, it is important to consider the
economic, political, and technological constraints that
exist as well as including the limitations imposed by the
decision makers’ control and the time available for the
decision.

36
2.2 A Systematic Approach to
Performance Evaluation
6. Select Evaluation Technique:
• The three broad techniques for performance evaluation
are
• analytical modeling,
• simulation, and
• measuring a real system.
• The selection of the right technique depends upon the
time and resources available to solve the problem and
the desired level of accuracy.

37
2.2 A Systematic Approach to
Performance Evaluation
7. Select Workload:
• The workload is a list of service requests to the system.
• Depending upon the evaluation technique chosen, the
workload may be expressed in different forms.
• To produce representative workloads, one needs to
measure and characterize the workload on existing
systems.
• For analytical modeling, the workload is usually
expressed as a probability of various requests.
• For simulation, one could use a trace of requests
measured on a real system.
• For measurement, the workload may consist of user
scripts to be executed on the systems.
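• For instance, a trace-style synthetic workload might be generated from assumed distributions; the sketch below (hypothetical parameters, exponential interarrival times, two request sizes) is one minimal way to do it:

import random

random.seed(1)

def generate_trace(n_requests, arrival_rate, p_small=0.8):
    """Return a list of (arrival_time, request_type) tuples."""
    trace, t = [], 0.0
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)   # exponential interarrival times
        kind = "small" if random.random() < p_small else "large"
        trace.append((round(t, 3), kind))
    return trace

# 10 requests arriving at an average rate of 2 per second.
for arrival_time, kind in generate_trace(10, arrival_rate=2.0):
    print(arrival_time, kind)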
38
2.2 A Systematic Approach to
Performance Evaluation
8. Design Experiments:
• In practice, it is useful to conduct an experiment in two
phases.
• In the first phase, the number of factors may be large
but the number of levels is small. The goal is to
determine the relative effect of various factors.
• In the second phase, the number of factors is reduced
and the number of levels of those factors that have
significant impact is increased.

39
2.2 A Systematic Approach to
Performance Evaluation
9. Analyze and Interpret Data:
• the outcome would be different each time the
experiment is repeated.
• The statistical techniques to compare two alternatives
are needed.
• An analysis only produces results, not conclusions.
• The results provide the basis on which the analysts or
decision makers can draw conclusions.
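• One such technique is a confidence interval for the paired difference between two alternatives: if the interval excludes zero, the difference is treated as significant. A minimal sketch with invented response times and a hard-coded t-value (an illustration of the idea, not the book's worked procedure):

from statistics import mean, stdev
from math import sqrt

# Hypothetical paired response times (seconds) from 6 repeated runs.
system_a = [5.4, 5.8, 5.6, 5.9, 5.5, 5.7]
system_b = [4.9, 5.1, 5.3, 5.0, 5.2, 4.8]

diffs = [a - b for a, b in zip(system_a, system_b)]
n = len(diffs)
d_bar = mean(diffs)
se = stdev(diffs) / sqrt(n)

t_crit = 2.571  # two-sided 95% t-value for n - 1 = 5 degrees of freedom
low, high = d_bar - t_crit * se, d_bar + t_crit * se

print(f"mean difference = {d_bar:.3f}s, 95% CI = ({low:.3f}, {high:.3f})")
print("Significant difference" if low > 0 or high < 0 else "No significant difference")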

40
2.2 A Systematic Approach to
Performance Evaluation
10. Present Results:
• The results should be presented in a manner that is easily understood.
• This usually means presenting the results in graphic form.
• The graphs should be appropriately scaled.
• Often at this point in the project the knowledge gained
by the study may require the analyst to go back and
reconsider some of the decisions made in the previous
steps.
• The complete project, therefore, consists of several
cycles through the steps rather than a single sequential
pass.

41
Steps for a Performance Evaluation
Study
1. State the goals of the study and define the system boundaries.
2. List system services and possible outcomes.
3. Select performance metrics.
4. List system and workload parameters.
5. Select factors and their values.
6. Select evaluation techniques.
7. Select the workload.
8. Design the experiments.
9. Analyze and interpret the data.
10. Present the results. Start over, if necessary.

42
Case Study 2.1
• Comparing remote pipes (non-blocking) with
remote procedure (blocking) calls.
• In a procedure call, the calling program is blocked.
• A remote procedure call is an extension of this
concept to a distributed computer system.
• The calling program waits until the procedure is
complete and the result is returned.
• The execution of the pipe occurs concurrently with
the continued execution of the caller. The results, if
any, are later returned asynchronously.

43
Case Study 2.1
1. State Goals and Define the System:
• The goal of the case study is to compare the
performance of applications using remote pipes to
those of similar applications using remote procedure
calls.
• A channel can be either a procedure or a pipe.
• The system consists of two computers connected via a
network

[Figure: a Client and a Server connected via a Network; together they form the System under study.]

44
Case Study 2.1
2. Services:
• two types of channel calls
• remote procedure call and
• remote pipe
• the system offers only two services:
• small data transfer or
• large data transfer

45
Case Study 2.1
3. Metrics:
• (a) Elapsed time per call
• (b) Maximum call rate per unit of time, or equivalently,
the time required to complete a block of n successive
calls
• (c) Local CPU time per call
• (d) Remote CPU time per call
• (e) Number of bytes sent on the link per call

46
Case Study 2.1
4. Parameters:
• The system parameters that affect the performance of a
given application and data size are the following:
• (a) Speed of the local CPU
• (b) Speed of the remote CPU
• (c) Speed of the network
• (d) Operating system overhead for interfacing with the
channels
• (e) Operating system overhead for interfacing with the
networks
• (f) Reliability of the network affecting the number of
retransmissions required

47
Case Study 2.1
4. Parameters:
• The workload parameters that affect the performance
are the following:
• (a) Time between successive calls
• (b) Number and sizes of the call parameters
• (c) Number and sizes of the results
• (d) Type of channel
• (e) Other loads on the local and remote CPUs
• (f) Other loads on the network

48
Case Study 2.1
5. Factors:
• (a) Type of channel. Two types—remote pipes and
remote procedure calls—will be compared.
• (b) Speed of the network. Two locations of the remote
hosts will be used—short distance (in the campus) and
long distance (across the country).
• (c) Sizes of the call parameters to be transferred. Two
levels will be used—small and large.
• (d) Number n of consecutive calls. Eleven different
values of n—1, 2, 4, 8, 16, 32, ..., 512, 1024—will be
used.
• based on resource availability and the interest of the
sponsors.

49
Case Study 2.1
6. Evaluation Technique:
• Since prototypes of both types of channels have already
been implemented, measurements will be used for
evaluation.
• Analytical modeling will be used to justify the
consistency of measured values for different
parameters.

50
Case Study 2.1
7. Workload:
• The workload will consist of a synthetic program
generating the specified types of channel requests.
8. Experimental Design:
• A full factorial experimental design with 2³ × 11 = 88 experiments (two levels for each of the first three factors, eleven values of n).
9. Data Analysis:
• Analysis of Variance (explained in Section 20.5) will be
used to quantify the effects of the first three factors and
• regression (explained in Chapter 14) will be used to
quantify the effects of the number n of successive calls.
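• To see where 2³ × 11 = 88 comes from, the factor combinations can be enumerated directly; a minimal sketch (level labels paraphrased from the factor list above):

from itertools import product

channel_types = ["remote pipe", "remote procedure call"]
network_distances = ["short (campus)", "long (cross-country)"]
parameter_sizes = ["small", "large"]
block_sizes = [2 ** k for k in range(11)]   # n = 1, 2, 4, ..., 1024

experiments = list(product(channel_types, network_distances,
                           parameter_sizes, block_sizes))
print(len(experiments))   # 2 * 2 * 2 * 11 = 88
print(experiments[0])     # ('remote pipe', 'short (campus)', 'small', 1)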

51
Case Study 2.1
10. Data Presentation:
• The final results will be plotted as a function of the block
size n.

Reference
• Gifford, David K., and Nathan Glasser. "Remote
pipes and procedures for efficient distributed
communication." ACM Transactions on Computer
Systems (TOCS) 6.3 (1988): 258-283.

52
Ch. 3
Selection of Techniques
and Metrics
Selecting an evaluation technique and selecting a metric are
two key steps in all performance evaluation projects.

53
3.1 Selecting an Evaluation
Technique
• The key consideration in deciding the evaluation
technique is the life-cycle stage in which the
system is.
• Measurements are possible only if something
similar to the proposed system already exists.
• If it is a new concept, analytical modeling and
simulation are the only techniques from which to
choose.

54
3.1 Selecting an Evaluation
Technique

55
3.1 Selecting an Evaluation
Technique
• the time available for evaluation
• results are required yesterday
• Simulation > Measurement > Analytical modeling
• the availability of tools
• modeling skills
• simulation languages
• measurement instruments
• Level of accuracy desired
• Measurements > Simulation > Analytical modeling
• the accuracy of results can vary from very high to none
when using the measurements technique.

56
3.1 Selecting an Evaluation
Technique
• level of accuracy and correctness of conclusions are not
identical
• The goal of every performance study is either to
compare different alternatives or to find the optimal
parameter value.
• Analytical models generally provide the best insight into the
effects of various parameters and their interactions.
• With simulations, it may be possible to search the space of
parameter values for the optimal combination, but often it is
not clear what the trade-off is among different parameters.
• Measurement is the least desirable technique in this respect.

57
3.1 Selecting an Evaluation
Technique
• Cost allocated for the project is also important.
• Measurement > Simulation > Analytical modeling
• Sometimes it is helpful to use two or more techniques
simultaneously.
• Measurements are as susceptible to experimental
errors and bugs as the other two techniques.
• Two or more techniques can also be used sequentially.
• For example, in one case, a simple analytical model was used
to find the appropriate range for system parameters and a
simulation was used later to study the performance in that
range.

58
3.2 Selecting Performance Metrics
• A set of performance criteria or metrics must be chosen
• One way to prepare this set is to list the services offered by
the system.
• Generally, outcomes can be classified into three categories
• The system may perform the service correctly, incorrectly, or
refuse to perform the service.
• If the system performs the service correctly, its performance
is measured by the time taken to perform the service, the
rate at which the service is performed, and the resources
consumed while performing the service.
• These three metrics related to time-rate-resource for
successful performance are also called responsiveness,
productivity, and utilization metrics, respectively.

59
3.2 Selecting Performance Metrics
• If the system performs the service incorrectly, an error
is said to have occurred. It is helpful to classify errors
and to determine the probabilities of each class of
errors.
• If the system does not perform the service, it is said to
be down, failed, or unavailable.
• Once again, it is helpful to classify the failure modes
and to determine the probabilities of each class.
• The metrics associated with the three outcomes,
namely successful service, error, and unavailability, are
also called speed, reliability, and availability metrics.

60
3.2 Selecting Performance Metrics
• For many metrics, the mean value is all that is
important.
• However, do not overlook the effect of variability.
• Finally, the set of metrics included in the study
should be complete.
• All possible outcomes should be reflected in the set
of performance metrics.
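• As an illustration, the two invented samples below have the same mean response time but very different variability; reporting only the mean would hide the difference:

from statistics import mean, stdev

steady = [2.0, 2.1, 1.9, 2.0, 2.0, 2.0]    # seconds
bursty = [0.5, 0.4, 0.6, 0.5, 0.5, 9.5]    # same mean, but with an occasional huge delay

for name, sample in [("steady", steady), ("bursty", bursty)]:
    m, s = mean(sample), stdev(sample)
    print(f"{name}: mean = {m:.2f}s, std dev = {s:.2f}s, "
          f"coefficient of variation = {s / m:.2f}")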

61
3.2 Selecting Performance Metrics

62
Case Study 3.1
• Comparing two different congestion control
algorithms for computer networks.
• A number of end systems interconnected via a
number of intermediate systems
• The problem of congestion occurs when the
number of packets waiting at an intermediate
system exceeds the system’s buffering capacity and
some of the packets have to be dropped.

63
Case Study 3.1
• The system in this case consists of the network
• there are four possible outcomes:
1. Some packets are delivered in order to the correct
destination.
2. Some packets are delivered out of order to the destination.
3. Some packets are delivered more than once to the
destination (duplicate packets).
4. Some packets are dropped on the way (lost packets).

64
Case Study 3.1
• For packets delivered in order, straightforward
application of the time-rate-resource metrics
produces the following list:
1. Response time: the delay inside the network for
individual packets.
2. Throughput: the number of packets per unit of time.
3. Processor time per packet on the source end system.
4. Processor time per packet on the destination end
systems.
5. Processor time per packet on the intermediate
systems.

65
Case Study 3.1
• Others
6. The variability of the response time is also important
since a highly variant response results in unnecessary
retransmissions.
7. the probability of out-of-order arrivals
8. The probability of duplicate packets
9. The probability of lost packets
10. the probability of disconnect
11. Fairness index (a fair share of system resources)
where xᵢ = throughput of the i-th user; f ranges from 1/n (worst) to 1 (best).
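• Assuming the standard fairness index from Jain's work, the formula can be written as:

f(x_1, \ldots, x_n) = \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n \sum_{i=1}^{n} x_i^2},
\qquad x_i = \text{throughput of the } i\text{-th user},
\qquad \tfrac{1}{n} \le f \le 1 .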
66
Case Study 3.1
• After a few experiments, it was clear that
throughput and delay were really redundant
metrics.
• Therefore, the two metrics were removed from the list and a combined metric called power was used instead.
• A higher power means either a higher throughput or a lower delay.
• Thus, in this study a set of nine metrics was used to compare different congestion control algorithms.

67
3.3 Commonly Used Performance
Metrics
• Response time is defined as the interval between a user’s request and the system’s response.
• For a batch stream, responsiveness is measured by
turnaround time, which is the time between the
submission of a batch job and the completion of its
output.
• The time between submission of a request and the
beginning of its execution by the system is called
the reaction time.
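• A minimal sketch showing how these intervals follow from hypothetical event timestamps (using the start of the system's response as the endpoint of response time, one common convention):

# Hypothetical timestamps for one batch job (seconds since start of observation).
submitted = 0.0            # user submits the request
execution_begins = 1.5     # system starts working on it
response_begins = 6.0      # first output appears
output_completes = 9.0     # last of the output is produced

reaction_time = execution_begins - submitted     # 1.5 s
response_time = response_begins - submitted      # 6.0 s (request to start of response)
turnaround_time = output_completes - submitted   # 9.0 s (batch job: submission to completion)

print(reaction_time, response_time, turnaround_time)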

68
3.3 Commonly Used Performance
Metrics

69
3.3 Commonly Used Performance
Metrics
• Throughput is defined as the rate (requests per
unit of time) at which the requests can be serviced
by the system.
• batch streams, jobs per second
• CPUs, Millions of Instructions Per Second (MIPS) or
Millions of Floating-Point Operations Per Second
(MFLOPS)
• networks, packets per second (pps) or bits per second
(bps)
• Transactions processing systems, Transactions Per
Second (TPS)

70
3.3 Commonly Used Performance
Metrics
• The maximum achievable throughput under ideal workload conditions is called the nominal capacity of the system (for networks, the bandwidth).
• The ratio of maximum achievable throughput
(usable capacity) to nominal capacity is called the
efficiency.
• The utilization of a resource is measured as the
fraction of time the resource is busy servicing
requests.
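• A minimal sketch of these ratios, with invented numbers for a 100 Mbit/s link (all values are illustrative assumptions):

nominal_capacity = 100.0   # Mbit/s, bandwidth under ideal conditions
usable_capacity = 85.0     # Mbit/s, maximum achievable throughput in practice
busy_time = 45.0           # minutes the link was busy during the observation
observation_time = 60.0    # minutes observed

efficiency = usable_capacity / nominal_capacity   # 0.85
utilization = busy_time / observation_time        # 0.75

print(f"efficiency = {efficiency:.0%}, utilization = {utilization:.0%}")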

71
3.3 Commonly Used Performance
Metrics
• The reliability of a system is usually measured by
the probability of errors or by the mean time
between errors.
• The availability of a system is defined as the
fraction of the time the system is available to
service users’ requests.
• In system procurement studies, the
cost/performance ratio is commonly used as a
metric for comparing two or more systems.
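• One common formulation, assumed here for illustration: availability can be computed as MTTF / (MTTF + MTTR). The sketch below also computes a cost/performance ratio from hypothetical numbers:

mttf_hours = 500.0   # mean time to failure (hypothetical)
mttr_hours = 2.0     # mean time to repair (hypothetical)
availability = mttf_hours / (mttf_hours + mttr_hours)

system_cost = 120_000.0    # dollars (hypothetical)
throughput_tps = 800.0     # transactions per second (hypothetical)
cost_performance = system_cost / throughput_tps   # dollars per TPS

print(f"availability = {availability:.4f}, "
      f"cost/performance = {cost_performance:.1f} $/TPS")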

72
3.4 Utility Classification of
Performance Metrics
• Higher is Better or HB. System users and system
managers prefer higher values of such metrics.
System throughput is an example of an HB metric.
• Lower is Better or LB. System users and system
managers prefer smaller values of such metrics.
Response time is an example of an LB metric.
• Nominal is Best or NB. Both high and low values are
undesirable. A particular value in the middle is
considered the best. System utilization is an
example of an NB characteristic.

73
3.4 Utility Classification of
Performance Metrics

74
3.5 Setting Performance
Requirements
• Specifying performance requirements for a system
to be acquired or designed.
• Consider these typical requirement statements:
• The system should be both processing and memory
efficient. It should not create excessive overhead.
• There should be an extremely low probability that the
network will duplicate a packet, deliver a packet to the
wrong destination, or change the data in a packet.

75
3.5 Setting Performance
Requirements
• Why such requirement statements are unacceptable:
1. Nonspecific: No clear numbers are specified. Qualitative
words such as low, high, rare, and extremely small are used
instead.
2. Nonmeasurable: There is no way to measure a system and
verify that it meets the requirement.
3. Nonacceptable: Numerical values of requirements, if
specified, are set based upon what can be achieved or what
looks good. If an attempt is made to set the requirements
realistically, they turn out to be so low that they become
unacceptable.
4. Nonrealizable: Often, requirements are set high so that
they look good. However, such requirements may not be
realizable.
5. Nonthorough: No attempt is made to specify all possible outcomes.

76
3.5 Setting Performance
Requirements
• The requirements must be Specific, Measurable,
Acceptable, Realizable, and Thorough. SMART
• Specificity precludes the use of words like “low
probability” and “rare.”
• Measurability requires verification that a given system
meets the requirements.
• Acceptability and Realizability demand new
configuration limits or architectural decisions so that
the requirements are high enough to be acceptable
and low enough to be achievable.
• Thoroughness includes all possible outcomes and
failure modes.

77
Case Study 3.2
• Specifying the performance requirements for a
high-speed LAN system.
• A LAN basically provides the service of transporting
frames (or packets) to the specified destination
station.
• Given a user request to send a frame to destination
station D, there are three categories of outcomes:
• The frame is correctly delivered to D,
• Incorrectly delivered (delivered to a wrong destination
or with an error indication to D), or
• Not delivered at all.

78
Case Study 3.2
1. Speed:
(a) The access delay at any station should be less
than 1 second.
(b) Sustained throughput must be at least 80
Mbits/sec.

79
Case Study 3.2
2. Reliability:
(a) The probability of any bit being in error must be less than 10⁻⁷.
(b) The probability of any frame being in error (with error indication set) must be less than 1%.
(c) The probability of a frame in error being delivered without error indication must be less than 10⁻¹⁵.
(d) The probability of a frame being misdelivered due to an undetected error in the destination address must be less than 10⁻¹⁸.
(e) The probability of a frame being delivered more than once (duplicate) must be less than 10⁻⁵.
(f) The probability of losing a frame on the LAN (due to all sorts of errors) must be less than 1%.

80
Case Study 3.2
3. Availability:
(a) The mean time to initialize the LAN must be less
than 15 milliseconds.
(b) The mean time between LAN initializations must
be at least 1 minute.
(c) The mean time to repair a LAN must be less than
1 hour. (LAN partitions may be operational during
this period.)
(d) The mean time between LAN partitioning must
be at least half a week.

81
Next
Workload and Workload Selection

Q&A
82
