
UNIT 5

Metrics for Process and Products:


Software measurement, metrics for software quality.

Risk management:
Reactive Vs proactive risk strategies, software risks, risk
identification, risk projection, risk refinement, RMMM,
RMMM plan.

Quality Management:
Quality concepts, Software quality assurance, software
reviews, formal technical reviews, statistical software quality
assurance, software reliability, the ISO 9000 quality
standards.
SOFTWARE MEASUREMENT
 Measurements in the physical world can be categorized in two ways:
 Direct measures (e.g., the length of a bolt) and
 Indirect measures (e.g., the "quality" of bolts produced, measured by counting rejects).
 Direct measures of the software engineering process include cost
and effort applied.
 Direct measures of the product include lines of code (LOC)
produced, execution speed, memory size, and defects reported over
some set period of time.
 Indirect measures of the product include functionality, quality,
complexity, efficiency, reliability, maintainability, and many other
"-abilities."
SIZE-ORIENTED METRICS
 Size-oriented software metrics are derived by normalizing quality and/or
productivity measures by considering the size of the software that has been
produced.
 If a software organization maintains simple records, a table of size-oriented measures can be built for each completed project.
 The table lists each software development project that has been
completed over the past few years and corresponding measures for
that project.
 Referring to the table entry for project alpha: 12,100 lines of code
were developed with 24 person-months of effort at a cost of
$168,000.
 It should be noted that the effort and cost recorded in the table
represent all software engineering activities (analysis, design, code,
and test), not just coding.
 Project alpha indicates that 365 pages of documentation were
developed, 134 errors were recorded before the software was
released, and 29 defects were encountered after release to the
customer within the first year of operation.
 Three people worked on the development of software for project
alpha
 Size-oriented metrics are not universally accepted as the best way
to measure the process of software development
 A set of simple size-oriented metrics can be developed for each
project: errors per KLOC, defects per KLOC, $ per LOC, and pages of
documentation per KLOC.
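The table entries quoted above can be normalized into these metrics directly. A minimal sketch in Python, using the project alpha figures from the text:

```python
# Size-oriented metrics for one project, normalized by KLOC.
# Figures are the project "alpha" values quoted in the text.
loc = 12_100                 # lines of code delivered
cost = 168_000               # total cost in dollars
doc_pages = 365              # pages of documentation
errors_before_release = 134  # errors recorded before release
defects_after_release = 29   # defects found in the first year of operation

kloc = loc / 1000

errors_per_kloc = errors_before_release / kloc
defects_per_kloc = defects_after_release / kloc
cost_per_loc = cost / loc
pages_per_kloc = doc_pages / kloc

print(f"errors/KLOC  = {errors_per_kloc:.2f}")   # 11.07
print(f"defects/KLOC = {defects_per_kloc:.2f}")  # 2.40
print(f"$/LOC        = {cost_per_loc:.2f}")      # 13.88
print(f"pages/KLOC   = {pages_per_kloc:.2f}")    # 30.17
```

The same normalization applied across many past projects is what lets an organization compare productivity and quality between projects of different sizes.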
FUNCTION-ORIENTED METRICS
 Function-oriented software metrics use a measure of the
functionality delivered by the application as a normalization value.
 Function-oriented metrics were first proposed by Albrecht, who
suggested a measure called the function point.
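The slides do not reproduce the computation, but the classic function point calculation weights counts of five information domain values and applies a value adjustment factor: FP = count total x (0.65 + 0.01 x sum(Fi)), where the fourteen Fi are rated 0-5. A sketch with hypothetical counts, using the customary "average" complexity weights:

```python
# Sketch of the classic function point computation (Albrecht-style).
# The counts below are hypothetical; the weights are the usual
# "average" complexity weights for the five information domain values.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def function_points(counts: dict, value_adjustment_factors: list) -> float:
    """FP = count_total * (0.65 + 0.01 * sum(Fi)), with 14 factors Fi in 0..5."""
    assert len(value_adjustment_factors) == 14
    count_total = sum(counts[k] * w for k, w in AVERAGE_WEIGHTS.items())
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Hypothetical project: domain counts, with all 14 Fi rated 3 ("average").
counts = {"external_inputs": 32, "external_outputs": 60,
          "external_inquiries": 24, "internal_files": 8,
          "external_interfaces": 2}
fp = function_points(counts, [3] * 14)
print(f"FP = {fp:.2f}")  # 661.26
```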
EXTENDED FUNCTION POINT METRICS
 A key step in computing 3D function points is determining the
complexity of each transformation.
METRICS FOR SOFTWARE QUALITY
 The overriding goal of software engineering is to produce a
high-quality system, application, or product within a timeframe that
satisfies a market need.
 To achieve this goal, software engineers must apply effective
methods coupled with modern tools within the context of a mature
software process.
Measuring Quality
 The measures of software quality are correctness, maintainability,
integrity, and usability.
 These measures will provide useful indicators for the project team.
Correctness:
 Correctness is the degree to which the software performs its required function.

 The most common measure for correctness is defects per KLOC, where a defect
is defined as a verified lack of conformance to requirements.
Maintainability:
 Maintainability is the ease with which a program can be corrected if an error is
encountered, adapted if its environment changes, or enhanced if the customer
desires a change in requirements.
 A simple time-oriented metric is mean-time-to-change (MTTC), the time it takes
to analyze the change request, design an appropriate modification, implement
the change, test it, and distribute the change to all users.
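A minimal MTTC computation is just an average of elapsed times across change requests; the durations below are hypothetical:

```python
# Mean-time-to-change: average elapsed time per change request, from
# analysis of the request through distribution of the change.
# Durations (in hours) are hypothetical.
change_durations_hours = [40, 28, 52, 35, 45]

mttc = sum(change_durations_hours) / len(change_durations_hours)
print(f"MTTC = {mttc:.1f} hours")  # 40.0
```

A falling MTTC across releases is one indicator that maintainability is improving.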
Integrity:
 Integrity measures a system's ability to withstand attacks (both accidental and
intentional) on its security.
 Attacks can be made on all three components of software: programs, data, and
documents.
Usability
 Usability is an attempt to quantify user-friendliness and can be
measured in terms of four characteristics:
 (1) the physical and or intellectual skill required to learn the
system,
 (2) the time required to become moderately efficient in the use of
the system,
 (3) the net increase in productivity (over the approach that the
system replaces) measured when the system is used by someone
who is moderately efficient, and
 (4) a subjective assessment (sometimes obtained through a
questionnaire) of users' attitudes toward the system.
DEFECT REMOVAL EFFICIENCY
 A quality metric that provides benefit at both the project and process
level is defect removal efficiency (DRE).
 DRE is a measure of the filtering ability of quality assurance and
control activities as they are applied throughout all process
framework activities.
 When considered for a project as a whole, DRE is defined in the
following manner:
DRE = E/(E + D)
E is the number of errors found before delivery of the software to the
end-user and
D is the number of defects found after delivery.
The ideal value for DRE is 1; that is, no defects are found in the
software after delivery.
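The DRE formula applies directly to the size-oriented data shown earlier. A small sketch using the project alpha figures:

```python
def defect_removal_efficiency(errors_before: int, defects_after: int) -> float:
    """DRE = E / (E + D): the fraction of problems filtered out before delivery."""
    return errors_before / (errors_before + defects_after)

# Project alpha from the size-oriented table: 134 errors found before
# release, 29 defects reported in the first year after release.
dre = defect_removal_efficiency(134, 29)
print(f"DRE = {dre:.3f}")  # 0.822
```

A DRE near 1 indicates the review and test filters are catching almost everything before the customer does.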
REACTIVE VS. PROACTIVE RISK STRATEGIES
 At best, a reactive strategy monitors the project for likely risks.
 The software team does nothing about risks until something goes
wrong. Then, the team flies into action in an attempt to correct the
problem rapidly. This is often called a fire fighting mode.
 Project team reacts to risks when they occur
 Mitigation—plan for additional resources in anticipation of fire fighting
 Fix on failure—resources are found and applied when the risk strikes
 Crisis management—failure does not respond to applied resources and project is in jeopardy
 A proactive strategy begins long before technical work is
initiated.
 Potential risks are identified, their probability and impact are
assessed, and they are ranked by importance.
Then, the software team establishes a plan for managing risk.
 A formal risk analysis is performed, and the organization corrects
the root causes of risk.
 Examining risk sources that lie beyond the bounds of the
software.
 Developing the skill to manage change.
 Reactive risk strategies
 "Don't worry, I'll think of something"
 The majority of software teams and managers rely on this
approach
 Nothing is done about risks until something goes wrong
 The team then flies into action in an attempt to correct the problem
rapidly (fire fighting)
 Crisis management is the choice of management techniques
 Proactive risk strategies
 Steps for risk management are followed
 Primary objective is to avoid risk and to have a contingency
plan in place to handle unavoidable risks in a controlled and
effective manner

RISK MANAGEMENT: DEFINITION OF RISK
 A risk is a potential problem – it might happen and it
might not
 Conceptual definition of risk
 Risk concerns future happenings
 Risk involves change in mind, opinion, actions, places, etc.
 Risk involves choice and the uncertainty that choice entails
 Two characteristics of risk
 Uncertainty – the risk may or may not happen, that is, there
are no 100% risks (those, instead, are called constraints)
 Loss – the risk becomes a reality and unwanted consequences
or losses occur

RISK CATEGORIZATION – APPROACH #1
 Project risks
 They threaten the project plan
 If they become real, it is likely that the project schedule will
slip and that costs will increase
 Technical risks
 They threaten the quality and timeliness of the software to be
produced
 If they become real, implementation may become difficult or
impossible
 Business risks
 They threaten the viability of the software to be built
 If they become real, they jeopardize the project or the product

 Sub-categories of Business risks
Market risk – building an excellent product or system
that no one really wants
Strategic risk – building a product that no longer fits
into the overall business strategy for the company
Sales risk – building a product that the sales force
doesn't understand how to sell
Management risk – losing the support of senior
management due to a change in focus or a change in
people
Budget risk – losing budgetary or personnel
commitment

RISK CATEGORIZATION – APPROACH #2
 Known risks
 Those risks that can be uncovered after careful evaluation of
the project plan, the business and technical environment in
which the project is being developed, and other reliable
information sources (e.g., unrealistic delivery date)
 Predictable risks
 Those risks that are extrapolated from past project
experience (e.g., past turnover)
 Unpredictable risks
 Those risks that can and do occur, but are extremely
difficult to identify in advance

RISK IDENTIFICATION
 Risk identification is a systematic attempt to specify threats to the
project plan
 By identifying known and predictable risks, the project manager takes a
first step toward avoiding them when possible and controlling them
when necessary
 Generic risks
 Risks that are a potential threat to every software project
 Product-specific risks
 Risks that can be identified only by those with a clear understanding
of the technology, the people, and the environment that is specific to
the software that is to be built
 This requires examination of the project plan and the statement of
scope
 "What special characteristics of this product may threaten our project
plan?"

RISK ITEM CHECKLIST
 Used as one way to identify risks
 Focuses on known and predictable risks in specific
subcategories (see next slide)
 Can be organized in several ways
 A list of characteristics relevant to each risk subcategory
 Questionnaire that leads to an estimate on the impact of each
risk
 A list containing a set of risk component and drivers and
their probability of occurrence

KNOWN AND PREDICTABLE RISK CATEGORIES
 Product size – risks associated with overall size of the software to be built
 Business impact – risks associated with constraints imposed by management or
the marketplace
 Customer characteristics – risks associated with the sophistication of the
customer and the developer's ability to communicate with the customer in a
timely manner
 Process definition – risks associated with the degree to which the software
process has been defined and is followed
 Development environment – risks associated with availability and quality of the
tools to be used to build the project
 Technology to be built – risks associated with complexity of the system to be
built and the "newness" of the technology in the system
 Staff size and experience – risks associated with overall technical and project
experience of the software engineers who will do the work

QUESTIONNAIRE ON PROJECT RISK
1) Have top software and customer managers formally
committed to support the project?
2) Are end-users enthusiastically committed to the project and
the system/product to be built?
3) Are requirements fully understood by the software
engineering team and its customers?
4) Have customers been involved fully in the definition of
requirements?
5) Do end-users have realistic expectations?
6) Is the project scope stable?
7) Does the software engineering team have the right mix of
skills?
8) Are project requirements stable?
9) Does the project team have experience with the technology to
be implemented?
10) Is the number of people on the project team adequate to do
the job?
11) Do all customer/user constituencies agree on the importance
of the project and on the requirements for the system/product
to be built?

RISK COMPONENTS AND DRIVERS
 The project manager identifies the risk drivers that affect the
following risk components
 Performance risk - the degree of uncertainty that the product will meet its
requirements and be fit for its intended use
 Cost risk - the degree of uncertainty that the project budget will be
maintained
 Support risk - the degree of uncertainty that the resultant software will be
easy to correct, adapt, and enhance
 Schedule risk - the degree of uncertainty that the project schedule will be
maintained and that the product will be delivered on time
 The impact of each risk driver on the risk component is divided into
one of four impact levels
 Negligible, marginal, critical, and catastrophic
 Risk drivers can be assessed as impossible, improbable, probable,
and frequent

RISK PROJECTION
 Risk projection (or estimation) attempts to rate each risk in two
ways
 The probability that the risk is real
 The consequence of the problems associated with the risk, should it occur
 The project planner, managers, and technical staff perform four risk
projection steps.
 The intent of these steps is to consider risks in a manner that leads
to prioritization
 By prioritizing risks, the software team can allocate limited
resources where they will have the most impact

RISK PROJECTION/ESTIMATION STEPS
1) Establish a scale that reflects the perceived likelihood of a
risk (e.g., 1-low, 10-high)
2) Delineate the consequences of the risk
3) Estimate the impact of the risk on the project and product
4) Note the overall accuracy of the risk projection so that
there will be no misunderstandings

CONTENTS OF A RISK TABLE
 A risk table provides a project manager with a simple technique for risk
projection
 It consists of five columns
 Risk Summary – short description of the risk
 Risk Category – one of the seven risk categories listed earlier
 Probability – estimation of risk occurrence based on group input
 Impact – (1) catastrophic (2) critical (3) marginal (4) negligible
 RMMM – Pointer to a paragraph in the Risk Mitigation, Monitoring, and
Management Plan

Risk Summary | Risk Category | Probability | Impact (1-4) | RMMM

DEVELOPING A RISK TABLE
 List all risks in the first column (by way of the help of the risk item
checklists)
 Mark the category of each risk
 Estimate the probability of each risk occurring
 Assess the impact of each risk based on an averaging of the four risk
components to determine an overall impact value (See next slide)
 Sort the rows by probability and impact in descending order
 Draw a horizontal cutoff line in the table that indicates the risks that will be
given further attention
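The steps above can be sketched as follows. The risks, categories, and probabilities are hypothetical; impact uses the 1-4 scale from the risk table (1 = catastrophic ... 4 = negligible), so sorting by descending probability and ascending impact puts the worst risks at the top:

```python
# Sketch of building and sorting a risk table. All entries hypothetical.
risks = [
    # (summary, category, probability, impact)
    ("Size estimate may be significantly low", "Product size",    0.60, 2),
    ("Larger number of users than planned",    "Product size",    0.30, 3),
    ("End users resist system",                "Business impact", 0.40, 3),
    ("Staff inexperienced with technology",    "Staff",           0.80, 2),
    ("Funding will be lost",                   "Business impact", 0.40, 1),
]

# High probability first; among equal probabilities, severest impact first.
risks.sort(key=lambda r: (-r[2], r[3]))

cutoff = 0.50  # management draws the cutoff line for further attention
for summary, category, p, impact in risks:
    marker = "*" if p >= cutoff else " "
    print(f"{marker} P={p:.2f} impact={impact} {summary} [{category}]")
```

Risks above the cutoff line (marked `*`) are the ones carried forward into the RMMM plan.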

ASSESSING RISK IMPACT
 Three factors affect the consequences that are likely if a risk does
occur
 Its nature – This indicates the problems that are likely if the risk occurs
 Its scope – This combines the severity of the risk (how serious was it) with
its overall distribution (how much was affected)
 Its timing – This considers when and for how long the impact will be felt
 The overall risk exposure formula is RE = P x C
 P = the probability of occurrence for a risk
 C = the cost to the project should the risk actually occur

Example
 P = 80% probability that 18 of 60 software components will have to be
developed
 C = Total cost of developing 18 components is $25,000
 RE = .80 x $25,000 = $20,000
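The risk exposure computation above can be expressed directly:

```python
def risk_exposure(probability: float, cost: float) -> float:
    """RE = P x C: probability of occurrence times the cost if the risk occurs."""
    return probability * cost

# The example from the text: 80% probability that 18 of 60 software
# components will have to be developed, at a total cost of $25,000.
exposure = risk_exposure(0.80, 25_000)
print(f"RE = ${exposure:,.0f}")  # RE = $20,000
```

Summing RE over all risks above the cutoff line gives a rough budget reserve for the project's risk contingency.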

RISK AND MANAGEMENT CONCERN

RISK MITIGATION, MONITORING, AND MANAGEMENT
 An effective strategy for dealing with risk must consider three
issues
(Note: these are not mutually exclusive)
 Risk mitigation (i.e., avoidance)
 Risk monitoring
 Risk management and contingency planning

 Risk mitigation (avoidance) is the primary strategy and is achieved through a plan
 Example: Risk of high staff turnover

Strategy for Reducing Staff Turnover
 Meet with current staff to determine causes for turnover (e.g., poor
working conditions, low pay, competitive job market)
 Mitigate those causes that are under our control before the project
starts
 Once the project commences, assume turnover will occur and
develop techniques to ensure continuity when people leave
 Organize project teams so that information about each
development activity is widely dispersed
 Define documentation standards and establish mechanisms to
ensure that documents are developed in a timely manner
 Conduct peer reviews of all work (so that more than one person is
"up to speed")
 Assign a backup staff member for every critical technologist

 During risk monitoring, the project manager monitors factors that
may provide an indication of whether a risk is becoming more or
less likely
 Risk management and contingency planning assume that mitigation
efforts have failed and that the risk has become a reality
 RMMM steps incur additional project cost
 Large projects may have identified 30 – 40 risks
 Risk is not limited to the software project itself
 Risks can occur after the software has been delivered to the user

 Software safety and hazard analysis
 These are software quality assurance activities that focus on the
identification and assessment of potential hazards that may
affect software negatively and cause an entire system to fail
 If hazards can be identified early in the software process,
software design features can be specified that will either
eliminate or control potential hazards

THE RMMM PLAN
 The RMMM plan may be a part of the software development
plan or may be a separate document
 Once RMMM has been documented and the project has begun,
the risk mitigation, and monitoring steps begin
 Risk mitigation is a problem avoidance activity
 Risk monitoring is a project tracking activity
 Risk monitoring has three objectives
 To assess whether predicted risks do, in fact, occur
 To ensure that risk aversion steps defined for the risk are being properly
applied
 To collect information that can be used for future risk analysis
 The findings from risk monitoring may allow the project
manager to ascertain what risks caused which problems
throughout the project

RISK INFORMATION SHEET

SEVEN PRINCIPLES OF RISK MANAGEMENT
 Maintain a global perspective
 View software risks within the context of a system and the business problem that it is
intended to solve
 Take a forward-looking view
 Think about risks that may arise in the future; establish contingency plans
 Encourage open communication
 Encourage all stakeholders and users to point out risks at any time
 Integrate risk management
 Integrate the consideration of risk into the software process
 Emphasize a continuous process of risk management
 Modify identified risks as more becomes known and add new risks as better insight is
achieved
 Develop a shared product vision
 A shared vision by all stakeholders facilitates better risk identification and assessment
 Encourage teamwork when managing risk
 Pool the skills and experience of all stakeholders when conducting risk management
activities

WHAT IS QUALITY MANAGEMENT
 Also called software quality assurance (SQA)
 Serves as an umbrella activity that is applied throughout the
software process
 Involves doing the software development correctly versus doing it
over again
 Reduces the amount of rework, which results in lower costs and
improved time to market
 Encompasses
 A software quality assurance process
 Specific quality assurance and quality control tasks (including formal
technical reviews and a multi-tiered testing strategy)
 Effective software engineering practices (methods and tools)
 Control of all software work products and the changes made to them
 A procedure to ensure compliance with software development standards
 Measurement and reporting mechanisms

QUALITY DEFINED
 Defined as a characteristic or attribute of something
 Refers to measurable characteristics that we can compare to known
standards
 In software it involves such measures as cyclomatic complexity,
cohesion, coupling, function points, and source lines of code
 Includes variation control
 A software development organization should strive to minimize
the variation between the predicted and the actual values for cost,
schedule, and resources
 They should make sure their testing program covers a known
percentage of the software from one release to another
 One goal is to ensure that the variance in the number of bugs is
also minimized from one release to another

QUALITY DEFINED (CONTINUED)
 Two kinds of quality are sought out
 Quality of design
 The characteristic that designers specify for an item
 This encompasses requirements, specifications, and the design of the system
 Quality of conformance (i.e., implementation)
 The degree to which the design specifications are followed during manufacturing
 This focuses on how well the implementation follows the design and how well the resulting system meets its requirements


 Quality also can be looked at in terms of user satisfaction:
User satisfaction = compliant product + good quality + delivery within budget and schedule

QUALITY CONTROL
 Involves a series of inspections, reviews, and tests used throughout
the software process
 Ensures that each work product meets the requirements placed on it
 Includes a feedback loop to the process that created the work
product
 This is essential in minimizing the errors produced
 Combines measurement and feedback in order to adjust the process
when product specifications are not met
 Requires all work products to have defined, measurable
specifications to which practitioners may compare the output of
each process

QUALITY ASSURANCE FUNCTIONS
 Consists of a set of auditing and reporting functions that assess
the effectiveness and completeness of quality control activities
 Provides management personnel with data that provides insight
into the quality of the products
 Alerts management personnel to quality problems so that they
can apply the necessary resources to resolve quality issues

THE COST OF QUALITY
 Includes all costs incurred in the pursuit of quality or in
performing quality-related activities
 Is studied to
 Provide a baseline for the current cost of quality
 Identify opportunities for reducing the cost of quality
 Provide a normalized basis of comparison (which is usually dollars)
 Involves various kinds of quality costs (See next slide)
 Increases dramatically as the activities progress from
 Prevention → Detection → Internal failure → External failure

"It takes less time to do a thing right than to explain why you did it wrong." Longfellow

KINDS OF QUALITY COSTS
 Prevention costs
 Quality planning, formal technical reviews, test equipment, training
 Appraisal costs
 Inspections, equipment calibration and maintenance, testing
 Failure costs – subdivided into internal failure costs and external
failure costs
 Internal failure costs
 Incurred when an error is detected in a product prior to shipment
 Include rework, repair, and failure mode analysis

 External failure costs
 Involve defects found after the product has been shipped
 Include complaint resolution, product return and replacement, help line support, and warranty work

SOFTWARE QUALITY DEFINED

 Definition: "Conformance to explicitly stated functional and
performance requirements, explicitly documented development
standards, and implicit characteristics that are expected of all
professionally developed software"

 This definition emphasizes three points
 Software requirements are the foundation from which quality is
measured; lack of conformance to requirements is lack of quality
 Specified standards define a set of development criteria that
guide the manner in which software is engineered; if the criteria
are not followed, lack of quality will almost surely result
 A set of implicit requirements often goes unmentioned; if
software fails to meet implicit requirements, software quality is
suspect
 Software quality is no longer the sole responsibility of the
programmer
 It extends to software engineers, project managers, customers,
salespeople, and the SQA group
 Software engineers apply solid technical methods and measures,
conduct formal technical reviews, and perform well-planned
software testing
THE SQA GROUP
 Serves as the customer's in-house representative
 Assists the software team in achieving a high-quality
product
 Views the software from the customer's point of view
 Does the software adequately meet quality factors?
 Has software development been conducted according to
pre-established standards?
 Have technical disciplines properly performed their roles as
part of the SQA activity?
 Performs a set of activities that address quality
assurance planning, oversight, record keeping,
analysis, and reporting (See next slide)

SQA ACTIVITIES
 Prepares an SQA plan for a project
 Participates in the development of the project's software process
description
 Reviews software engineering activities to verify compliance with
the defined software process
 Audits designated software work products to verify compliance
with those defined as part of the software process
 Ensures that deviations in software work and work products are
documented and handled according to a documented procedure
 Records any noncompliance and reports to senior management
 Coordinates the control and management of change
 Helps to collect and analyze software metrics

PURPOSE OF REVIEWS
 Serve as a filter for the software process
 Are applied at various points during the software process
 Uncover errors that can then be removed
 Purify the software analysis, design, coding, and testing
activities
 Catch large classes of errors that escape the originator but are
caught by other practitioners
 Include the formal technical review (also called a walkthrough
or inspection)
 Acts as the most effective SQA filter
 Conducted by software engineers for software engineers
 Effectively uncovers errors and improves software quality
 Has been shown to be up to 75% effective in uncovering design flaws
(which constitute 50-65% of all errors in software)
 Require the software engineers to expend time and effort, and
the organization to cover the costs
SOFTWARE REVIEWS
 A formal presentation of software design to an audience of customers,
management, and technical staff is also a form of review.
 We focus on the formal technical review, sometimes called a walkthrough
or an inspection.
 A formal technical review is the most effective filter from a quality
assurance standpoint. Conducted by software engineers (and others) for
software engineers, the FTR is an effective means for improving software
quality.
 Cost Impact of Software Defects
 The IEEE Standard Dictionary of Electrical and Electronics Terms (IEEE
Standard 100-1992) defines a defect as “a product anomaly.”
 The definition for fault in the hardware context can be found in IEEE
Standard 610.12-1990:
 (a) A defect in a hardware device or component; for example, a short circuit
or broken wire.
 (b) An incorrect step, process, or data definition in a computer program.
DEFECT AMPLIFICATION AND REMOVAL

DEFECT AMPLIFICATION, NO REVIEWS

DEFECT AMPLIFICATION, REVIEWS CONDUCTED
FORMAL TECHNICAL REVIEW (FTR)
 Objectives
 To uncover errors in function, logic, or implementation for any representation
of the software
 To verify that the software under review meets its requirements
 To ensure that the software has been represented according to predefined
standards
 To achieve software that is developed in a uniform manner
 To make projects more manageable
 Serves as a training ground for junior software engineers to observe
different approaches to software analysis, design, and construction
 Promotes backup and continuity because a number of people become
familiar with other parts of the software
 May sometimes be a sample-driven review
 Project managers must quantify those work products that are the primary
targets for formal technical reviews
 The sample of products that are reviewed must be representative of the
products as a whole
THE FTR MEETING
 Has the following constraints
 Three to five people should be involved
 Advance preparation (i.e., reading) should occur for each participant but
should require no more than two hours apiece and involve only a small
subset of components
 The duration of the meeting should be less than two hours
 Focuses on a specific work product (a software requirements
specification, a detailed design, a source code listing)
 Activities before the meeting
 The producer informs the project manager that a work product is
complete and ready for review
 The project manager contacts a review leader, who evaluates the product
for readiness, generates copies of product materials, and distributes them
to the reviewers for advance preparation
 Each reviewer spends one to two hours reviewing the product and making
notes before the actual review meeting
 The review leader establishes an agenda for the review meeting and
schedules the time and location
 Activities during the meeting
 The meeting is attended by the review leader, all reviewers, and the producer
 One of the reviewers also serves as the recorder for all issues and decisions
concerning the product
 After a brief introduction by the review leader, the producer proceeds to "walk
through" the work product while reviewers ask questions and raise issues
 The recorder notes any valid problems or errors that are discovered; no time
or effort is spent in this meeting to solve any of these problems or errors
 Activities at the conclusion of the meeting
 All attendees must decide whether to
 Accept the product without further modification
 Reject the product due to severe errors (after these errors are corrected, another review will then occur)
 Accept the product provisionally (minor errors need to be corrected but no additional review is required)
 All attendees then complete a sign-off in which they indicate that they took
part in the review and that they concur with the findings
 Activities following the meeting
 The recorder produces a list of review issues that
 Identifies problem areas within the product
 Serves as an action item checklist to guide the producer in making

corrections
 The recorder includes the list in an FTR summary report
 This one to two-page report describes what was reviewed, who reviewed
it, and what were the findings and conclusions
 The review leader follows up on the findings to ensure that the
producer makes the requested corrections

FTR GUIDELINES
1) Review the product, not the producer
2) Set an agenda and maintain it
3) Limit debate and rebuttal; conduct in-depth discussions off-line
4) Enunciate problem areas, but don't attempt to solve the problem
noted
5) Take written notes; utilize a wall board to capture comments
6) Limit the number of participants and insist upon advance
preparation
7) Develop a checklist for each product in order to structure and
focus the review
8) Allocate resources and schedule time for FTRs
9) Conduct meaningful training for all reviewers
10) Review your earlier reviews to improve the overall review
process
STATISTICAL SOFTWARE QUALITY ASSURANCE
- PROCESS STEPS
1) Collect and categorize information (i.e., causes) about
software defects that occur
2) Attempt to trace each defect to its underlying cause (e.g.,
nonconformance to specifications, design error, violation of
standards, poor communication with the customer)
3) Using the Pareto principle (80% of defects can be traced to
20% of all causes), isolate the 20%
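Step 3's Pareto isolation can be sketched as a small script: given defect counts per cause (the cause names and counts below are made up for illustration), it returns the smallest set of causes that together account for roughly 80% of all recorded defects.

```python
def vital_few(defect_counts, threshold=0.80):
    """Return the 'vital few' causes that together account for
    at least `threshold` of all recorded defects."""
    total = sum(defect_counts.values())
    vital, cumulative = [], 0
    # Pareto ordering: most frequent causes first
    for cause, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
        vital.append(cause)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return vital

# Hypothetical defect data collected in steps 1 and 2
counts = {
    "incomplete or erroneous specifications": 45,
    "errors in design logic": 30,
    "violation of programming standards": 10,
    "errors in data representation": 8,
    "inconsistent component interface": 7,
}
# The ~20% of causes behind ~80% of the defects
print(vital_few(counts))
```

With these illustrative numbers, the first three causes already cover 85% of the defects, so quality-improvement effort would be focused there first.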
A SAMPLE OF POSSIBLE CAUSES FOR DEFECTS
 Incomplete or erroneous specifications
 Misinterpretation of customer communication
 Intentional deviation from specifications
 Violation of programming standards
 Errors in data representation
 Inconsistent component interface
 Errors in design logic
 Incomplete or erroneous testing
 Inaccurate or incomplete documentation
 Errors in programming language translation of design
 Ambiguous or inconsistent human/computer interface
SIX SIGMA
 Popularized by Motorola in the 1980s
 Is the most widely used strategy for statistical quality assurance
 Uses data and statistical analysis to measure and improve a
company's operational performance
 Identifies and eliminates defects in manufacturing and service-
related processes
 The name "Six Sigma" refers to six standard deviations (3.4 defects
per million occurrences)
SIX SIGMA
 Three core steps
 Define customer requirements, deliverables, and project goals via well-
defined methods of customer communication
 Measure the existing process and its output to determine current quality
performance (collect defect metrics)
 Analyze defect metrics and determine the vital few causes (the 20%)
 Two additional steps are added for existing processes (and can be
done in parallel)
 Improve the process by eliminating the root causes of defects
 Control the process to ensure that future work does not reintroduce the
causes of defects
SIX SIGMA (CONTINUED)
 All of these steps need to be performed so that you can manage the process to
accomplish something
 You cannot effectively manage and improve a process until you first perform
these steps, in this order:
1) Define the work process
2) Measure the work process
3) Analyze the work process
4) Control the work process
5) Manage and improve the work process
 Each step builds on the one below it, starting from the work to be done itself
SOFTWARE RELIABILITY,
AVAILABILITY, AND SAFETY
RELIABILITY AND AVAILABILITY
 Software failure
 Defined: Nonconformance to software requirements
 Given a set of valid requirements, all software failures can be traced to design or
implementation problems (i.e., nothing wears out like it does in hardware)
 Software reliability
 Defined: The probability of failure-free operation of a software application in a
specified environment for a specified time
 Estimated using historical and development data
 A simple measure is MTBF = MTTF + MTTR (i.e., uptime + downtime), where MTTF is mean time to failure and MTTR is mean time to repair
 Example:
 MTBF = 68 days + 3 days = 71 days
 Failures per 100 days = (1/71) * 100 = 1.4
 Software availability
 Defined: The probability that a software application is operating according to
requirements at a given point in time
 Availability = [MTTF/ (MTTF + MTTR)] * 100%
 Example:
 Avail. = [68 days / (68 days + 3 days)] * 100 % = 96%
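The reliability and availability formulas above can be checked with a short script; the 68-day MTTF and 3-day MTTR figures come from the slide's own example.

```python
def mtbf(mttf, mttr):
    """Mean time between failures = mean uptime + mean downtime."""
    return mttf + mttr

def failures_per(period, mttf, mttr):
    """Expected number of failures over `period` time units."""
    return period / mtbf(mttf, mttr)

def availability(mttf, mttr):
    """Percentage of time the software is operating per requirements."""
    return mttf / (mttf + mttr) * 100

# Example from the slide: MTTF = 68 days, MTTR = 3 days
print(mtbf(68, 3))                          # 71 days
print(round(failures_per(100, 68, 3), 1))   # 1.4 failures per 100 days
print(round(availability(68, 3)))           # 96 (%)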
SOFTWARE SAFETY
 Focuses on identification and assessment of potential hazards to
software operation
 It differs from software reliability
 Software reliability uses statistical analysis to determine the likelihood
that a software failure will occur; however, the failure may not
necessarily result in a hazard or mishap
 Software safety examines the ways in which failures result in conditions
that can lead to a hazard or mishap; it identifies faults that may lead to
failures
 Software failures are evaluated in the context of an entire
computer-based system and its environment through the process
of fault tree analysis or hazard analysis
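As a rough illustration of fault tree analysis, probabilities of basic faults can be combined through AND/OR gates to estimate the probability of a top-level hazard. This is a minimal sketch: the event names and probabilities are hypothetical, and basic events are assumed independent.

```python
def or_gate(*probs):
    """Output event occurs if ANY input fault occurs (independent events)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1 - q)
    return 1 - p_none

def and_gate(*probs):
    """Output event occurs only if ALL input faults occur (independent events)."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical tree: the hazard requires a sensor fault AND
# (a software failure OR an operator error)
p_sensor, p_software, p_operator = 0.01, 0.002, 0.005
p_hazard = and_gate(p_sensor, or_gate(p_software, p_operator))
print(f"{p_hazard:.2e}")  # ≈ 6.99e-05
```

Walking the tree from the basic faults up to the top event is what lets safety analysis identify which individual faults contribute most to a hazard, rather than just whether a failure occurs.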
ISO 9000 QUALITY STANDARDS
 A quality assurance system may be defined as the organizational
structure, responsibilities, procedures, processes, and resources for
implementing quality management.
 The ISO 9000 standards have been adopted by many countries
including all members of the European Community, Canada,
Mexico, the United States, Australia, New Zealand, and the Pacific
Rim. Countries in Latin and South America have also shown interest
in the standards.
THE ISO APPROACH TO QUALITY ASSURANCE
SYSTEMS
 The ISO 9000 quality assurance models treat an enterprise as a
network of interconnected processes.
 For a quality system to be ISO compliant, these processes must
address the areas identified in the standard and must be documented
and practiced as described.
 ISO 9000 describes the elements of a quality assurance system in
general terms.
 These elements include the organizational structure, procedures,
processes, and resources needed to implement quality planning,
quality control, quality assurance, and quality improvement.
THE ISO 9001 STANDARD
 ISO 9001 is the quality assurance standard that applies to
software engineering.
 The standard contains 20 requirements that must be present for an
effective quality assurance system.
 The requirements delineated by ISO 9001 address topics such as
management responsibility, quality system, contract review,
design control, document and data control, product identification
and traceability, process control, inspection and testing, corrective
and preventive action, control of quality records, internal quality
audits, training, servicing, and statistical techniques.