Lesson 9

This lesson covers software quality assurance: ensuring that software meets its specifications, standards, and implicit expectations through organization-wide policies, project-specific quality plans, and quality control activities such as testing and reviews. Quality is difficult to achieve because specifications are often incomplete, some quality attributes conflict, and some requirements remain implicit. Effective quality assurance reduces costs and defects through prevention and early detection at all stages of development.
Software Quality Assurance

What is Software Quality?


Simplistically, quality is an attribute of software that implies the software meets its specification.
This definition is too simple for ensuring quality in software systems:
- Software specifications are often incomplete or ambiguous
- Some quality attributes are difficult to specify
- Tension exists between some quality attributes, e.g. efficiency vs. reliability
Software Quality Attributes
Safety, Security, Reliability, Resilience, Robustness, Understandability, Testability, Adaptability, Modularity, Complexity, Portability, Usability, Reusability, Efficiency, Learnability
Software Quality
Conformance to explicitly stated functional and performance requirements,
explicitly documented development standards, and implicit characteristics that
are expected of all professionally developed software
- Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
- Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not met, lack of quality will almost surely result.
- There is a set of implicit requirements that often goes unmentioned. If software conforms to its explicit requirements but fails to meet its implicit requirements, software quality is suspect.

[Adapted from Pressman 4th Ed]


Software Quality Assurance
To ensure quality in a software product, an organization must take a three-pronged approach to quality management:
- Organization-wide policies, procedures, and standards must be established.
- Project-specific policies, procedures, and standards must be tailored from the organization-wide templates.
- Quality must be controlled; that is, the organization must ensure that the appropriate procedures are followed for each project.
Standards exist to help an organization draft an appropriate software quality assurance plan:
- ISO 9000-3
- ANSI/IEEE standards
External entities can be contracted to verify that an organization is standard-compliant.
A Software Quality Plan
An ISO 9000 model informs an organization-wide quality plan, which is in turn tailored into separate quality plans for Project A, Project B, and Project C.

[Adapted from Sommerville 5th Ed]


SQA Activities
Applying technical methods
- To help the analyst achieve a high-quality specification and a high-quality design
Conducting formal technical reviews
- A stylized meeting conducted by technical staff with the sole purpose of uncovering quality problems
Testing software
- A series of test case design methods that help ensure effective error detection
Enforcing standards
Controlling change
- Applied during software development and maintenance
Measurement
- Track software quality and assess the ability of methodological and procedural changes to improve software quality
Record keeping and reporting
- Provide procedures for the collection and dissemination of SQA information


Advantages of SQA
Software will have fewer latent defects, resulting in
reduced effort and time spent during testing and
maintenance
Higher reliability will result in greater customer
satisfaction
Maintenance costs can be reduced
Overall life cycle cost of software is reduced
Disadvantages of SQA
It is difficult to institute in small organizations, which often
lack the resources to perform the necessary activities
It represents cultural change - and change is never
easy
It requires the expenditure of dollars that would not
otherwise be explicitly budgeted to software
engineering or QA
Quality Reviews
The fundamental method of validating the quality of a product or a process.
Applied during and/or at the end of each life cycle phase

- Point out needed improvements in the product of a single person or team
- Confirm those parts of a product in which improvement is either not desired or not needed
- Achieve technical work of more uniform, or at least more predictable, quality than what can be achieved without reviews, in order to make technical work more manageable
Quality reviews can have different intents:
- Review for defect removal
- Review for progress assessment
- Review for consistency and conformance


Quality Reviews
The relative cost of correcting a defect rises sharply the later in the life cycle it is found:
- Requirements analysis / specification review: 1x
- Design / design review: 3-6x
- Code / code review: 10x
- Testing / test review: 15-70x
- Maintenance / customer feedback: 40-1000x

[Adapted from Pressman 4th Ed]
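To see what these multipliers imply, here is a small illustrative calculation. The defect counts are hypothetical, and single cost figures are picked from within the ranges above (design 3-6x taken as 4.5, testing 15-70x as 40, maintenance 40-1000x as 250):

```python
# Relative repair cost per defect by the phase in which it is found.
# Single values chosen from the slide's ranges; counts are hypothetical.
COST = {"spec": 1, "design": 4.5, "code": 10, "test": 40, "maint": 250}

def total_cost(defects_found):
    """Total relative repair cost for a given distribution of defects."""
    return sum(n * COST[phase] for phase, n in defects_found.items())

# Two ways of finding the same 100 defects:
early = {"spec": 50, "design": 30, "code": 15, "test": 4, "maint": 1}
late  = {"spec": 5, "design": 10, "code": 20, "test": 40, "maint": 25}

print(total_cost(early))  # 745.0  - most defects caught in reviews
print(total_cost(late))   # 8100.0 - same 100 defects, caught late
```

Even with these rough numbers, catching defects in reviews rather than in testing and maintenance cuts the total repair cost by roughly a factor of ten.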
Cost Impact of Software Defects
Each development step receives errors from previous steps. Some pass through unchanged, some are amplified (1:X), and new errors are generated; error detection at the step's percent efficiency removes a fraction before the remainder pass to the next step.

[Adapted from Pressman 4th Ed]


Defect Amplification and Removal
Worked example (no design or code reviews; defects are detected only during testing):
- Preliminary design: 0 errors in; 10 newly generated; 0% detection; 10 passed on
- Detailed design: 10 in (6 passed through, 4 amplified x1.5); 25 newly generated; 0% detection; 37 passed on
- Code/unit testing: 37 in (10 passed through, 27 amplified x3); 25 newly generated; 116 total; 20% detection; 94 passed to integration testing
- Integration testing: 94 in; 0 new; 50% detection; 47 passed on
- Validation testing: 47 in; 0 new; 50% detection; 24 passed on
- System testing: 24 in; 0 new; 50% detection; 12 latent errors remain
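The amplification model can be sketched in a few lines of Python using the numbers from the slide. (The slide's intermediate figures are rounded; carrying full precision through the testing steps gives about 93, 46, and 23 instead of 94, 47, and 24, and roughly 12 latent errors either way.)

```python
# Pressman's defect amplification model: at each step some incoming
# errors pass through, some are amplified by a factor, new errors are
# generated, and a detection activity removes a fraction of the total.
def step(errors_in, passed, amplified, factor, new, efficiency):
    """Errors leaving one development step."""
    assert abs(errors_in - (passed + amplified)) < 1e-9  # split must add up
    total = passed + amplified * factor + new
    return total * (1 - efficiency)

# "No reviews" scenario: detection happens only during testing.
e = step(0, 0, 0, 1, 10, 0.0)      # preliminary design -> 10
e = step(e, 6, 4, 1.5, 25, 0.0)    # detailed design    -> 37
e = step(e, 10, 27, 3, 25, 0.20)   # code/unit testing  -> ~93
e = step(e, e, 0, 1, 0, 0.50)      # integration testing
e = step(e, e, 0, 1, 0, 0.50)      # validation testing
e = step(e, e, 0, 1, 0, 0.50)      # system testing
print(round(e))                    # 12 latent errors
```

Adding reviews raises the detection efficiency of the early steps, which shrinks both the amplification input to later steps and the final latent-error count.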
Review Checklist for Systems
Engineering
Are major functions defined in a bounded and unambiguous fashion?
Are interfaces between system elements defined?
Are performance bounds established for the system as a whole and for each
element?
Are design constraints established for each element?
Has the best alternative been selected?
Is the solution technologically feasible?
Has a mechanism for system validation and verification been established?
Is there consistency among all system elements?

[Adapted from Behforooz and Hudson]


Review Checklist for Software
Project Planning
Is the software scope unambiguously defined and bounded?
Is terminology clear?
Are resources adequate for the scope?
Are resources readily available?
Are tasks properly defined and sequenced?
Is the basis for cost estimation reasonable? Has it been developed using two different
sources?
Have historical productivity and quality data been used?
Have differences in estimates been reconciled?
Are pre-established budgets and deadlines realistic?
Is the schedule consistent?
Review Checklist for Software
Requirements Analysis
Is the information domain analysis complete, consistent, and accurate?
Is problem partitioning complete?
Are external and internal interfaces properly defined?
Are all requirements traceable to the system level?
Is prototyping conducted for the customer?
Is performance achievable with constraints imposed by other system
elements?
Are requirements consistent with schedule, resources, and budget?
Are validation criteria complete?
Review Checklist for Software
Design
(Preliminary Design Review)
Are software requirements reflected in the software
architecture?
Is effective modularity achieved? Are modules functionally
independent?
Is program architecture factored?
Are interfaces defined for modules and external system
elements?
Is data structure consistent with software requirements?
Has maintainability been considered?
Review Checklist for Software
Design
(Design Walkthrough)
Does the algorithm accomplish the desired function?
Is the algorithm logically correct?
Is the interface consistent with architectural design?
Is logical complexity reasonable?
Have error handling and “antibugging” been specified?
Is local data structure properly defined?
Are structured programming constructs used throughout?
Is design detail amenable to the implementation language?
Are operating-system- or language-dependent features used?
Is compound or inverse logic used?
Has maintainability been considered?
Review Checklist for Coding
Is the design properly translated into code? (The results of the procedural
design should be available at this review)
Are there misspellings or typos?
Has proper use of language conventions been made?
Is there compliance with coding standards for language style, comments,
module prologue?
Are incorrect or ambiguous comments present?
Are typing and data declaration proper?
Are physical constraints correct?
Have all items on the design walkthrough checklist been reapplied (as
required)?
Review Checklist for Software
Testing (Test Plan)
Have major test phases been properly identified and sequenced?
Has traceability to validation criteria/requirements been established as part of
software requirements analysis?
Are major functions demonstrated early?
Is the test plan consistent with the overall project plan?
Has a test schedule been explicitly defined?
Are test resources and tools identified and available?
Has a test recordkeeping mechanism been established?
Have test drivers and stubs been identified, and has work to develop them been
scheduled?
Has stress testing for software been specified?
Review Checklist for Software
Testing
(Test Procedure)
Have both white and black box tests been specified?
Have all independent logic paths been tested?
Have test cases been identified and listed with expected
results?
Is error handling to be tested?
Are boundary values to be tested?
Are timing and performance to be tested?
Has acceptable variation from expected results been specified?
Review Checklist for
Maintenance
Have side effects associated with change been considered?
Has the request for change been documented, evaluated, and
approved?
Has the change, once made, been documented and reported to
interested parties?
Have appropriate FTRs been conducted?
Has a final acceptance review been conducted to assure that all
software has been properly updated, tested, and replaced?
Formal Technical Review
(FTR)
A software quality assurance activity performed by software engineering practitioners in order to:
- Uncover errors in function, logic, or implementation for any representation of the software
- Verify that the software under review meets its requirements
- Assure that the software has been represented according to predefined standards
- Achieve software that is developed in a uniform manner
- Make projects more manageable

FTR is actually a class of reviews:
- Walkthroughs
- Inspections
- Round-robin reviews
- Other small-group technical assessments of the software


The Review Meeting
Constraints
- Between 3 and 5 people (typically) are involved
- Advance preparation should occur, but should involve no more than 2 hours of work for each person
- Duration should be less than two hours

Components
- Product: a component of software to be reviewed
- Producer: the individual who developed the product
- Review leader: appointed by the project leader; evaluates the product for readiness, generates copies of product materials, and distributes them to 2 or 3 reviewers
- Reviewers: spend between 1 and 2 hours reviewing the product, making notes, and otherwise becoming familiar with the work
- Recorder: the individual who records (in writing) all important issues raised during the review
Review Reporting and
Recordkeeping
Review Summary Report
- What was reviewed?
- Who reviewed it?
- What were the findings and conclusions?

Review Issues List
- Identify the problem areas within the product
- Serve as an action item checklist that guides the producer as corrections are made
Guidelines for FTR
Review the product, not the producer
Set an agenda and maintain it
Limit debate and rebuttal
Enunciate the problem areas, but don’t attempt to solve every problem that is noted
Take written notes
Limit the number of participants and insist upon advance preparation
Develop a checklist for each product that is likely to be reviewed
Allocate resources and time schedules for FTRs
Conduct meaningful training for all reviewers
Review your earlier reviews (if any)
Reviewer’s Preparation
Be sure that you understand the context of the material
Skim all product material to understand the location and the
format of information
Read the product material and annotate a hardcopy
Pose your written comments as questions
Avoid issues of style
Inform the review leader if you cannot prepare
Results of the Review Meeting
All attendees of the FTR must make a decision:
- Accept the product without further modification
- Reject the product due to severe errors (and perform another review after corrections have been made)
- Accept the product provisionally (minor corrections are needed, but no further reviews are required)
A sign-off is completed, indicating participation and concurrence with the review team's findings.
Software Reliability
Probability of failure-free operation for a specified time in a specified
environment.
This could mean very different things for different systems and
different users.
Informally, reliability is a measure of the users’ perception of how
well the software provides the services they need.

- Not an objective measure
- Must be based on an operational profile
- Must consider that there are widely varying consequences for different errors
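As a concrete (if simplified) illustration not drawn from the slides: many reliability models assume a constant failure rate lambda, under which the probability of failure-free operation for time t is R(t) = exp(-lambda * t):

```python
import math

# Simplified illustration (constant-failure-rate assumption, not from
# the slides): with lam failures per hour, the probability of
# failure-free operation over t hours is R(t) = exp(-lam * t).
def reliability(lam: float, t: float) -> float:
    return math.exp(-lam * t)

# One failure per 1000 hours on average, over a 100-hour mission:
print(round(reliability(0.001, 100), 3))  # 0.905
```

Note how this matches the definition above: the figure is only meaningful for a specified time (100 hours) and a failure rate measured in a specified environment.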
IO Mapping
The software maps an input set to an output set; a subset of the inputs causes erroneous outputs.

[Adapted from Sommerville 5th Ed]


Software Faults and Failures
A failure corresponds to erroneous/unexpected runtime behavior observed by a user.
A fault is a static software characteristic that can cause a failure to occur.
The presence of a fault doesn’t necessarily imply the occurrence of a failure.

Within the input set, the erroneous inputs overlap differently with the inputs exercised by users A, B, and C, so different users experience different failures.

[Adapted from Sommerville 5th Ed]


Reliability Improvements
Software reliability improves when faults which are present in
the most frequently used portions of the software are removed.
A removal of X% of faults doesn’t necessarily mean an X%
improvement in reliability.
In a study by Mills et al. in 1987 removing 60% of faults
resulted in a 3% improvement in reliability.
Removing faults with the most serious consequences is the
primary objective.
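The Mills et al. result can be mimicked with a toy model (all numbers hypothetical): reliability under an operational profile is dominated by the faults that are hit most often, so removing many rarely executed faults barely helps:

```python
# Toy model (hypothetical numbers): per-run probability that each fault
# is triggered under the operational profile. Two faults sit in heavily
# used code; the other 58 are almost never executed.
hit_prob = [0.02, 0.001] + [0.00001] * 58

def reliability(faults):
    """Probability that a run triggers none of the remaining faults."""
    p_ok = 1.0
    for p in faults:
        p_ok *= 1 - p
    return p_ok

before = reliability(hit_prob)
# Remove 60% of the faults (36 of them) - but only rarely executed ones:
after_rare = reliability(hit_prob[:24])
# Remove just the single most frequently hit fault instead:
after_freq = reliability(hit_prob[1:])
print(before, after_rare, after_freq)
```

Removing 60% of the faults improves reliability by well under 0.1% here, while removing the one high-frequency fault improves it by about 2% on its own, which is why fixing the faults in the most-used paths (and those with the worst consequences) comes first.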
