
Module: 4- Process Framework, Planning and Monitoring the Process, Documenting Analysis and Test

4.1 Process Framework

4.1.1 Basic Principles

Analysis and testing (A&T) has been common practice since the earliest software projects.
A&T activities were for a long time based on common sense and individual skills. It has
emerged as a distinct discipline only in the last three decades.
General engineering principles:

– Partition: divide and conquer

– Visibility: making information accessible

– Feedback: tuning the development process

Specific A&T principles:

– Sensitivity: better to fail every time than sometimes

– Redundancy: making intentions explicit

– Restriction: making the problem easier

• Sensitivity:
Human developers make errors, producing faults in software. Faults may lead to failures, but
faulty software may not fail on every execution. The sensitivity principle states that it is better
to fail every time than sometimes.
Consider the cost of detecting and repairing a software fault. If it is detected immediately
(e.g., by an on-the-fly syntactic check in a design editor), then the cost of correction is very
small, and in fact the line between fault prevention and fault detection is blurred. If a fault is
detected in inspection or unit testing, the cost is still relatively small. If a fault survives initial
detection efforts at the unit level, but triggers a failure detected in integration testing, the cost
of correction is much greater. If the first failure is detected in system or acceptance testing,
the cost is very high indeed, and the most costly faults are those detected by customers in the
field.

Examples (see also the code sketch below):

1. A test selection criterion works better if every selected test provides the same result, i.e., if the program fails with one of the selected tests, it fails with all of them (reliable criteria).


2. Run-time deadlock analysis works better if it is machine independent, i.e., if the program deadlocks when analyzed on one machine, it deadlocks on every machine.

• Redundancy
Redundancy is the opposite of independence. If one part of a software artifact (program, design
document, etc.) constrains the content of another, then they are not entirely independent, and it
is possible to check them for consistency.
The concept and definition of redundancy are taken from information theory. In communication, redundancy can be introduced into messages in the form of error-detecting and error-correcting codes to guard against transmission errors. In software test and analysis,
we wish to detect faults that could lead to differences between intended behavior and actual
behavior, so the most valuable form of redundancy is in the form of an explicit, redundant
statement of intent.
Redundant checks can increase the capabilities of catching specific faults early or more
efficiently.

– Static type checking is redundant with respect to dynamic type checking, but it can reveal many type mismatches earlier and more efficiently (illustrated below).

– Validation of requirement specifications is redundant with respect to validation of the final software, but can reveal errors earlier and more efficiently.

– Testing and proof of properties are redundant, but are often used together to increase confidence.
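A minimal Java sketch of the first bullet above (names are illustrative): the static type of the list restates the programmer's intent and is checked at compile time, while the equivalent cast is checked only on executions that actually reach it.

```java
import java.util.ArrayList;
import java.util.List;

/** Redundancy: static types restate intent otherwise checked only at run time. */
class TypeRedundancy {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("chipmunk");
        // names.add(42);      // static check: rejected at compile time

        Object o = names.get(0);
        String s = (String) o; // dynamic check: verified only when this line runs
        System.out.println(s);
    }
}
```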

• Restriction
When there are no acceptably cheap and effective ways to check a property, sometimes one
can change the problem by checking a different, more restrictive property or by limiting the
check to a smaller, more restrictive class of programs.
Consider the problem of ensuring that each variable is initialized before it is used, on every
execution. Simple as the property is, it is not possible for a compiler or analysis tool to
precisely determine whether it holds. See the program in Figure 3.2 for an illustration. Can the variable k ever be uninitialized the first time i is added to it? If someCondition(0) always returns true, then k will be initialized to zero on the first time through the loop, before k is incremented, so perhaps there is no potential for a run-time error; but method someCondition could be arbitrarily complex and might even depend on some condition in the environment.
Java's solution to this problem is to enforce a stricter, simpler condition: A program is not
permitted to have any syntactic control paths on which an uninitialized reference could occur,
regardless of whether those paths could actually be executed.
The program has such a path, so the Java compiler rejects it.
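The Figure 3.2 program is not reproduced in these notes; the following is a hypothetical reconstruction in its spirit. The Java compiler rejects the method ("variable k might not have been initialized") because a syntactic path reaches a use of k without assigning it, whatever someCondition actually returns at run time.

```java
/** Deliberately does NOT compile: Java's definite-assignment rule is a
 *  stricter, checkable restriction of "initialized before use on every
 *  execution". */
class Restriction {
    static boolean someCondition(int i) { return i % 2 == 0; } // stand-in logic

    static int questionable() {
        int k;
        for (int i = 0; i < 10; ++i) {
            if (someCondition(i))
                k = 0;
            else
                k += i;   // rejected: k might not have been initialized
        }
        return k;         // rejected for the same reason
    }
}
```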

• Suitable restrictions can reduce hard (unsolvable) problems to simpler (solvable) problems.

– A weaker spec may be easier to check: it is impossible (in general) to show that
pointers are used correctly, but the simple Java requirement that pointers are
initialized before use is simple to enforce.

– A stronger spec may be easier to check: it is impossible (in general) to show that
type errors do not occur at run-time in a dynamically typed language, but
statically typed languages impose stronger restrictions that are easily checkable.


• Partition

Partition, often also known as "divide and conquer," is a general engineering principle.
Dividing a complex problem into sub problems to be attacked and solved independently is
probably the most common human problem-solving strategy. Software engineering in
particular applies this principle in many different forms and at almost all development levels,
from early requirements specifications to code and maintenance. Analysis and testing are no
exception: the partition principle is widely used and exploited.

Partitioning can be applied both at process and technique levels. At the process level, we
divide complex activities into sets of simple activities that can be attacked independently. For
example, testing is usually divided into unit, integration, subsystem, and system testing. In
this way, we can focus on different sources of faults at different steps, and at each step, we
can take advantage of the results of the former steps. For instance, we can use units that have
been tested as stubs for integration testing. Some static analysis techniques likewise follow
the modular structure of the software system to divide an analysis problem into smaller steps.

• Hard testing and verification problems can be handled by suitably partitioning the input space:

– both structural and functional test selection criteria identify suitable partitions of code or specifications (partitions drive the sampling of the input space)

– verification techniques fold the input space according to specific characteristics, grouping homogeneous data together and determining partitions (see the sketch below)
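A small Java sketch of partition-based test selection (the fee rule and its thresholds are made up for illustration): the input space is folded into homogeneous classes, and the tests sample one representative per class plus the boundaries between classes.

```java
/** Partition principle in functional test selection (hypothetical rule). */
class ShippingFee {
    static int fee(int grams) {
        if (grams <= 0) throw new IllegalArgumentException("weight must be positive");
        if (grams <= 500)  return 3;   // partition: small parcel
        if (grams <= 2000) return 7;   // partition: medium parcel
        return 15;                     // partition: large parcel
    }

    public static void main(String[] args) {
        // One representative per partition, plus the boundaries between them.
        int[] samples = {1, 500, 501, 2000, 2001};
        for (int w : samples)
            System.out.println(w + "g -> " + fee(w));
    }
}
```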

• Visibility

Visibility means the ability to measure progress or status against goals. In software
engineering, one encounters the visibility principle mainly in the form of process visibility,
and then mainly in the form of schedule visibility: ability to judge the state of development
against a project schedule. Quality process visibility also applies to measuring achieved (or
predicted) quality against quality goals. The principle of visibility involves setting goals that
can be assessed as well as devising methods to assess their realization.
Visibility is closely related to observability, the ability to extract useful information from a software artifact. The architectural design and build plan of a system determines what will be observable at each stage of development, which in turn largely determines the visibility of progress against goals at that stage.
• The ability to measure progress or status against goals

• X visibility = ability to judge how we are doing on X, e.g., schedule visibility = “Are we ahead or behind schedule?”, quality visibility = “Does quality meet our objectives?”

– Involves setting goals that can be assessed at each stage of development

• The biggest challenge is early assessment, e.g., assessing specifications and design with respect to product quality

• Related to observability

– Example: choosing a simple or standard internal data format to facilitate unit testing (see the sketch below)
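A minimal sketch of the example in the last bullet (hypothetical class): a simple, canonical representation of internal state makes the object observable, so unit tests can assert on it cheaply and unambiguously.

```java
import java.util.ArrayList;
import java.util.List;

/** Observability: a standard internal format makes state easy to inspect. */
class Order {
    private final List<String> items = new ArrayList<>();

    void add(String sku) { items.add(sku); }

    /** Canonical dump of internal state for tests and debugging. */
    String stateDump() { return String.join(",", items); }

    public static void main(String[] args) {
        Order o = new Order();
        o.add("A1");
        o.add("B2");
        System.out.println(o.stateDump().equals("A1,B2")); // cheap, unambiguous check
    }
}
```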

• Feedback

Feedback is another classic engineering principle that applies to analysis and testing.
Feedback applies both to the process itself (process improvement) and to individual
techniques (e.g., using test histories to prioritize regression testing).
Systematic inspection and walkthrough derive part of their success from feedback.
Participants in inspection are guided by checklists, and checklists are revised and refined
based on experience. New checklist items may be derived from root cause analysis, analyzing previously observed failures to identify the initial errors that led to them.

• Learning from experience: Each project provides information to improve the next

• Examples

– Checklists are built on the basis of errors revealed in the past

– Error taxonomies can help in building better test selection criteria

– Design guidelines can avoid common pitfalls


4.1.2 The Quality Process


• Quality process: set of activities and responsibilities

– focused primarily on ensuring adequate dependability

– concerned with project schedule or with product usability

• The quality process provides a framework for

– selecting and arranging activities

– considering interactions and trade-offs with other important goals.

Example: high dependability vs. time to market

• Mass-market products:

– better to achieve a reasonably high degree of dependability on a tight schedule than to achieve ultra-high dependability on a much longer schedule

• Critical medical devices:

– better to achieve ultra-high dependability on a much longer schedule than a reasonably high degree of dependability on a tight schedule

Properties of the Quality Process:


• Completeness: Appropriate activities are planned to detect each important class of faults.

• Timeliness: Faults are detected at a point of high leverage (as early as possible).

• Cost-effectiveness: Activities are chosen depending on cost and effectiveness; cost must be considered over the whole development cycle and product life. The dominant factor is usually the cost of repeating an activity through many change cycles.


4.1.3 Planning and Monitoring


• The quality process
– Balances several activities across the whole development process
– Selects and arranges them to be as cost-effective as possible
– Improves early visibility
• Quality goals can be achieved only through careful planning
• Planning is integral to the quality process
• A process is visible to the extent that one can answer the question
– How does our progress compare to our plan?
– Example: Are we on schedule? How far ahead or behind?
• The quality process has not achieved adequate visibility if one cannot gain strong
confidence in the quality of the software system before it reaches final testing
– quality activities are usually placed as early as possible
◦ design test cases at the earliest opportunity (not "just in time")
◦ use analysis techniques on software artifacts produced before actual code
– motivates the use of “proxy” measures
◦ Example: the number of faults in design or code is not a true measure of reliability, but we may count faults discovered in design inspections as an early indicator of potential quality problems

4.1.4 Quality Goals

• Process visibility requires a clear specification of goals, and in the case of quality process visibility
this includes a careful distinction among dependability qualities.
• A team that does not have a clear idea of the difference between reliability and robustness, for
example, or of their relative importance in a project, has little chance of attaining either. Goals must
be further refined into a clear and reasonable set of objectives.
• If an organization claims that nothing less than 100% reliability will suffice, it is not setting an
ambitious objective. Rather, it is setting no objective at all, and choosing not to make reasoned
trade-off decisions or to balance limited resources across various activities.
• It is, in effect, abrogating responsibility for effective quality planning, and leaving trade-offs among cost, schedule, and quality to an arbitrary, ad hoc decision based on deadline and budget alone.
• The relative importance of qualities and their relation to other project objectives varies. Time-to-market may be the most important property for a mass-market product, usability may be more prominent for a Web-based application, and safety may be the overriding requirement for a life-critical system.
The external properties of software can ultimately be divided into dependability (does the software do what it is intended to do?) and usefulness. There is no precise way to distinguish these, but a rule of thumb is that when software is not dependable, we say it has a fault, or a defect, or (most often) a bug, resulting in an undesirable behavior or failure. It is quite possible to build systems that are very dependable, relatively free from hazards, and completely useless. They may be unbearably slow, or have terrible user interfaces and unfathomable documentation, or they may be missing several crucial features. How should these properties be considered in software quality? One answer is that they are not part of quality at all unless they have been explicitly specified, since quality is the presence of specified properties. However, a company whose products are rejected by its customers will take little comfort in knowing that, by some definitions, they were high-quality products.
Interface standards augment, rather than replace, usability requirements because conformance to the
standards is not sufficient assurance that the requirement is met. This is the same relation that other
specifications have to the user requirements they are intended to fulfill. In general, verifying
conformance to specifications does not replace validating satisfaction of requirements.

4.1.5 Dependability Properties

The simplest of the dependability properties is correctness: A program or system is correct if it is

consistent with its specification. By definition, a specification divides all possible system behaviors into two classes, successes (or correct executions) and failures. All of the possible behaviors of a correct system are successes.
• Correctness:

A program cannot be mostly correct or somewhat correct or 30% correct. It is absolutely correct on all
possible behaviors, or else it is not correct. It is very easy to achieve correctness, since every program
is correct with respect to some (very bad) specification. Achieving correctness with respect to a useful
specification, on the other hand, is seldom practical for nontrivial systems. Therefore, while
correctness may be a noble goal, we are often interested in assessing some more achievable level of
dependability.
• Reliability


Reliability is a statistical approximation to correctness, in the sense that 100% reliability is


indistinguishable from correctness. Roughly speaking, reliability is a measure of the likelihood of
correct function for some "unit" of behavior, which could be a single use or program execution or a
period of time. Like correctness, reliability is relative to a specification (which determines
whether a unit of behavior is counted as a success or failure). Unlike correctness, reliability is
also relative to a particular usage profile. The same program can be more or less reliable depending on
how it is used.
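A toy Java sketch of this idea (the program and the usage profile are stand-ins): reliability is estimated as the fraction of successful runs over inputs drawn from a profile, so the same program scores differently under different profiles.

```java
import java.util.Random;

/** Reliability as a statistical measure relative to a usage profile. */
class ReliabilityEstimate {
    // Stand-in "program": fails only on input 7.
    static boolean runOnce(int input) { return input != 7; }

    public static void main(String[] args) {
        Random profile = new Random(42);   // stand-in profile: uniform over 0..99
        int runs = 10_000, successes = 0;
        for (int i = 0; i < runs; i++)
            if (runOnce(profile.nextInt(100))) successes++;
        // A profile that exercised input 7 more often would report a lower
        // reliability for the very same program.
        System.out.println("estimated reliability = " + (double) successes / runs);
    }
}
```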
• Availability

Availability is an appropriate measure when a failure has some duration in time. For example, a failure
of a network router may make it impossible to use some functions of a local area network until the
service is restored; between initial failure and restoration we say the router is "down" or "unavailable."
The availability of the router is the time in which the system is "up" (providing normal service) as a
fraction of total time. Thus, a network router that averages 1 hour of down time in each 24-hour period would have an availability of 23/24, or 95.8%.
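The router example as arithmetic, in a small Java sketch:

```java
/** Availability = up time as a fraction of total time (router example above). */
class Availability {
    public static void main(String[] args) {
        double downHours = 1.0, totalHours = 24.0;   // 1 hour down per day
        double availability = (totalHours - downHours) / totalHours;
        System.out.printf("availability = 23/24 = %.1f%%%n", availability * 100.0);
    }
}
```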
• Safety: preventing hazards

Software safety is an extension of the well-established field of system safety into software. Safety is
concerned with preventing certain undesirable behaviors, called hazards. It is quite explicitly not
concerned with achieving any useful behavior apart from whatever functionality is needed to prevent
hazards. Software safety is typically a concern in "critical" systems such as avionics and medical
systems, but the basic principles apply to any system in which particularly undesirable behaviors can
be distinguished from run-of-the-mill failure. For example, while it is annoying when a word
processor crashes, it is much more annoying if it irrecoverably corrupts document files. The
developers of a word processor might consider safety with respect to the hazard of file corruption
separately from reliability with respect to the complete functional requirements for the word
processor.
• Robustness: acceptable behavior under extreme conditions

Software safety is a kind of robustness, but robustness is a more general notion that concerns not only
avoidance of hazards (e.g., data corruption) but also partial functionality under unusual
situations. Robustness, like safety, begins with explicit consideration of unusual and undesirable
situations, and should include augmenting software specifications with appropriate responses to undesirable events.
[Figure: Example of dependability qualities]


[Figure: Relation among dependability properties]

4.1.6 Analysis and Testing


Analysis techniques that do not involve actual execution of program source code play a prominent role
in overall software quality processes. Manual inspection techniques and automated analyses can be
applied at any development stage. They are particularly well suited at the early stages of specifications
and design, where the lack of executability of many intermediate artifacts reduces the efficacy of testing.
Excerpt of Web Presence Feasibility Study
Purpose of this document
This document was prepared for the Chipmunk IT management team. It describes the results of a
feasibility study undertaken to advise Chipmunk corporate management whether to embark on a
substantial redevelopment effort to add online shopping functionality to the Chipmunk Computers'
Web presence.

Goals
The primary goal of a Web presence redevelopment is to add online shopping facilities. Marketing
estimates an increase of 15% over current direct sales within 24 months, and an additional 8% savings
in direct sales support costs from shifting telephone price inquiries to online price inquiries.

Architectural Requirements
The logical architecture will be divided into three distinct subsystems: human interface, business
logic, and supporting infrastructure. Each major subsystem must be structured for phased
development, with initial features delivered 6 months from inception, full features at 12 months, and a
planned revision at 18 months from project inception.

Quality Requirements
Dependability
With the introduction of direct sales and customer relationship management functions, dependability

of Chipmunk's Web services becomes business-critical. A critical core of functionality will be


identified, isolated from less critical functionality in design and implementation, and subjected to the
highest level of scrutiny. We estimate that this will be approximately 20% of new development and
revisions, and that the V&V costs for those portions will be approximately triple the cost of V&V for
noncritical development.
Usability The new Web presence will be, to a much greater extent than before, the public face of Chipmunk Computers.
Security Introduction of online direct ordering and billing raises a number of security issues. Some of
these can be avoided initially by contracting with one of several service companies that provide secure
credit card transaction services. Nonetheless, order tracking, customer relationship
management, returns, and a number of other functions that cannot be effectively outsourced raise
significant security and privacy issues. Identifying and isolating security concerns will add a significant but manageable cost to design validation.

4.1.7 Improving the Process


While the assembly-line, mass production industrial model is inappropriate for software, which is at
least partly custom-built, there is almost always some commonality among projects undertaken by an
organization over time. Confronted by similar problems, developers tend to make the same kinds of
errors over and over, and consequently the same kinds of software faults are often encountered project
after project. The quality process, as well as the software development process as a whole, can be
improved by gathering, analyzing, and acting on data regarding faults and failures.
The first part of a process improvement feedback loop, and often the most difficult to implement, is
gathering sufficiently complete and accurate raw data about faults and failures. A main obstacle is that
data gathered in one project goes mainly to benefit other projects in the future and may seem to have
little direct benefit for the current project, much less to the persons asked to provide the raw data. It is
therefore helpful to integrate data collection as well as possible with other, normal development
activities, such as version and configuration control, project management, and bug tracking. It is also
essential to minimize extra effort. For example, if revision logs in the revision control database can be
associated with bug tracking records, then the time between checking out a module and checking it
back in might be taken as a rough guide to cost of repair. Raw data on faults and failures must be
aggregated into categories and prioritized. Faults may be categorized along several dimensions, none
of them perfect. Fortunately, a flawless categorization is not necessary; all that is needed is some
categorization scheme that is sufficiently fine-grained and tends to aggregate faults with similar causes
and possible remedies, and that can be associated with at least rough estimates of relative
frequency and cost. A small number of categories - maybe just one or two - are chosen for further
study.
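A hypothetical sketch of the repair-cost proxy suggested above, assuming revision records have already been linked to bug-tracking identifiers (the record and its fields are illustrative, not a real tool's API):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.toMap;

/** One revision-control entry linked to a bug-tracking record (hypothetical). */
record Revision(String bugId, Instant checkout, Instant checkin) {}

class RepairCostProxy {
    /** Checkout-to-checkin time as a rough guide to cost of repair per bug. */
    static Map<String, Duration> roughRepairTime(List<Revision> log) {
        return log.stream()
                  .filter(r -> r.bugId() != null)    // bug-fix revisions only
                  .collect(toMap(Revision::bugId,
                                 r -> Duration.between(r.checkout(), r.checkin()),
                                 Duration::plus));   // sum multiple check-ins per bug
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2024-01-01T09:00:00Z");
        var log = List.of(new Revision("BUG-17", t0, t0.plusSeconds(3 * 3600)));
        System.out.println(roughRepairTime(log)); // {BUG-17=PT3H}
    }
}
```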

4.1.8 Organizational Factors


The quality process includes a wide variety of activities that require specific skills and attitudes and
may be performed by quality specialists or by software developers. Planning the quality process
involves not only resource management but also identification and allocation of responsibilities. A
poor allocation of responsibilities can lead to major problems in which pursuit of individual goals
conflicts with overall project success. For example, splitting responsibilities of development and
quality-control between a development and a quality team, and rewarding high productivity in terms
of lines of code per person-month during development may produce undesired results. The
development team, not rewarded for producing high-quality software, may attempt to maximize productivity to the detriment of quality. The resources initially planned for quality assurance may not suffice if the initial quality of code from the "very productive" development team is low. On the
other hand, combining development and quality control responsibilities in one undifferentiated team,
while avoiding the perverse incentive of divided responsibilities, can also have unintended effects: As
deadlines near, resources may be shifted from quality assurance to coding, at the expense of product
quality.

Conflicting considerations support both the separation of roles (e.g., recruiting quality specialists), and the mobility of people and roles (e.g., rotating engineers between development and testing tasks).
At Chipmunk, responsibility for delivery of the new Web presence is distributed among a
development team and a quality assurance team. Both teams are further articulated into groups. The
quality assurance team is divided into the analysis and testing group, responsible for the dependability
of the system, and the usability testing group, responsible for usability.

2. PLANNING AND MONITORING THE PROCESS


Overview
Planning involves scheduling activities, allocating resources, and devising observable, unambiguous
milestones against which progress and performance can be monitored. Monitoring means answering
the question, "How are we doing?" Quality planning is one aspect of project planning, and quality
processes must be closely coordinated with other development processes. Coordination among quality
and development tasks may constrain ordering (e.g., unit tests are executed after creation of program
units). It may shape tasks to facilitate coordination; for example, delivery may be broken into smaller
increments to allow early testing. Some aspects of the project plan, such as feedback and design for
testability, may belong equally to the quality plan and other aspects of the project plan.

2.1 Quality and Process


A software plan involves many intertwined concerns, from schedule to cost to usability and
dependability. Despite the intertwining, it is useful to distinguish individual concerns and objectives to
lessen the likelihood that they will be neglected, to allocate responsibilities, and to make the overall
planning process more manageable.
A typical spiral process model lies somewhere between a strictly sequential and a highly iterative process, with distinct planning, design, and implementation steps in several increments coupled with a similar unfolding of analysis and test activities. A general principle, across all software processes, is that the cost of detecting and repairing a fault increases as a function of the time between committing an error and detecting the resultant faults.
Thus, whatever the intermediate work products in a software plan, an efficient quality plan will
include a matched set of intermediate validation and verification activities that detect most faults
within a short period of their introduction. Any step in a software process that is not paired with a
validation or verification step is an opportunity for defects to fester, and any milestone in a project
plan that does not include a quality check is an opportunity for a misleading assessment of progress.

The particular verification or validation step at each stage depends on the nature of the intermediate
work product and on the anticipated defects. For example, anticipated defects in a requirements
statement might include incompleteness, ambiguity, inconsistency, and overambition relative to
project goals and resources. A review step might address some of these, and automated analyses might
help with completeness and consistency checking.
Internal consistency check Check the artifact for compliance with structuring rules that define "well-
formed" artifacts of that type. An important point of leverage is defining the syntactic and semantic
rules thoroughly and precisely enough that many common errors result in detectable violations. This is
analogous to syntax and strong-typing rules in programming languages, which are not enough to
guarantee program correctness but effectively guard against many simple errors.

External consistency check Check the artifact for consistency with related artifacts. Often this means
checking for conformance to a "prior" or "higher-level" specification, but consistency checking does
not depend on sequential, top-down development - all that is required is that the related information
from two or more artifacts be defined precisely enough to support detection of discrepancies.
Consistency usually proceeds from broad, syntactic checks to more detailed and expensive semantic
checks, and a variety of automated and manual verification techniques may be applied.
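A minimal sketch of an external consistency check (all names hypothetical): given the operations promised by a higher-level specification and those actually present in a related artifact, report the discrepancies.

```java
import java.util.HashSet;
import java.util.Set;

/** External consistency: compare related information from two artifacts. */
class ConsistencyCheck {
    /** Operations promised by the spec but absent from the related artifact. */
    static Set<String> missing(Set<String> specifiedOps, Set<String> implementedOps) {
        Set<String> gap = new HashSet<>(specifiedOps);
        gap.removeAll(implementedOps);   // detectable discrepancies
        return gap;
    }

    public static void main(String[] args) {
        System.out.println(missing(Set.of("login", "logout", "checkout"),
                                   Set.of("login", "logout"))); // [checkout]
    }
}
```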

Generation of correctness conjectures Correctness conjectures, which can be test outcomes or other
objective criteria, lay the groundwork for external consistency checks of other work products,
particularly those that are yet to be developed or revised. Generating correctness conjectures for other
work products will frequently motivate refinement of the current product. For example, an interface
definition may be elaborated and made more precise so that implementations can be effectively tested.


2.2 Test and Analysis Strategies and Plans


Lessons of past experience are an important asset of organizations that rely heavily on technical skills.
A body of explicit knowledge, shared and refined by the group, is more valuable than islands of
individual competence. Organizational knowledge in a shared and systematic form is more amenable
to improvement and less vulnerable to organizational change, including the loss of key individuals.
Capturing the lessons of experience in a consistent and repeatable form is essential for avoiding errors,
maintaining consistency of the process, and increasing development efficiency.

Cleanroom
The Cleanroom process model, introduced by IBM in the late 1980s, pairs development with V&V
activities and stresses analysis over testing in the early phases. Testing is left for system certification.
The Cleanroom process involves two cooperating teams, the development and the quality teams, and
five major activities: specification, planning, design and verification, quality certification, and
feedback.


In the specification activity, the development team defines the required behavior of the system, while
the quality team defines usage scenarios that are later used for deriving system test suites. The
planning activity identifies incremental development and certification phases.
After planning, all activities are iterated to produce incremental releases of the system. Each system
increment is fully deployed and certified before the following step. Design and code undergo formal
inspection ("Correctness verification") before release. One of the key premises underpinning the
Cleanroom process model is that rigorous design and formal inspection produce "nearly fault-free
software."
The quality strategy is an intellectual asset of an individual organization prescribing a set of solutions
to problems specific to that organization. Among the factors that particularize the strategy are:
Structure and size Large organizations typically have sharper distinctions between development and
quality groups, even if testing personnel are assigned to development teams. In smaller organizations,
it is more common for a single person to serve multiple roles.
Overall process We have already noted the intertwining of quality process with other aspects of an
overall software process, and this is of course reflected in the quality strategy. For example, if an
organization follows the Cleanroom methodology, then inspections will be required but unit testing
forbidden. An organization that adopts the XP methodology is likely to follow the "test first" and
pair programming elements of that approach, and in fact would find a more document-heavy approach
a difficult fit.
Application domain The domain may impose both particular quality objectives (e.g., privacy and security in medical records processing), and in some cases particular steps and documentation
required to obtain certification from an external authority. For example, the RTCA/DO-178B standard
for avionics software requires testing to the modified condition/decision coverage (MC/DC) criterion.

SRET
The software reliability engineered testing (SRET) approach, developed at AT&T in the early 1990s,

assumes a spiral development process and augments each coil of the spiral with rigorous testing
activities. SRET identifies two main types of testing: development testing, used to find and remove
faults in software at least partially developed in-house, and certification testing, used to either accept
or reject outsourced software. The SRET approach includes seven main steps. Two initial, quick
decision-making steps determine which systems require separate testing and which type of testing is
needed for each system to be tested. The five core steps are executed in parallel with each coil of a
spiral development process.

The five core steps of SRET are:


Define "Necessary" Reliability Determine operational models, that is, distinct patterns of system
usage that require separate testing, classify failures according to their severity, and engineer the
reliability strategy with fault prevention, fault removal, and fault tolerance activities.

Develop Operational Profiles


Develop both overall profiles that span operational models and operational profiles within single
operational models.

Prepare for Testing Specify test cases and procedures.

Execute Tests
Interpret Failure Data Interpretation of failure data depends on the type of testing. In development
testing, the goal is to track progress and compare present failure intensities with objectives. In
certification testing, the goal is to determine if a software component or system should be accepted or
rejected.

Extreme Programming (XP)


The extreme programming methodology (XP) emphasizes simplicity over generality, global vision
and communication over structured organization, frequent changes over big releases, continuous
testing and analysis over separation of roles and responsibilities, and continuous feedback over
traditional planning.
Customer involvement in an XP project includes requirements analysis (development, refinement, and
prioritization of user stories) and acceptance testing of very frequent iterative releases. Planning is
based on prioritization of user stories, which are implemented in short iterations. Test cases
corresponding to scenarios in user stories serve as partial specifications.


Test and Analysis Plans

An analysis and test plan details the steps to be taken in a particular project. A plan should answer the
following questions:

● What quality activities will be carried out?

● What are the dependencies among the quality activities and between quality and development activities?

● What resources are needed and how will they be allocated?

● How will both the process and the evolving product be monitored to maintain an adequate assessment of quality and early warning of quality and schedule problems?
Each of these issues is addressed to some extent in the quality strategy, but must be elaborated and
particularized. This is typically the responsibility of a quality manager, who should participate in the
initial feasibility study to identify quality goals and estimate the contribution of test and analysis tasks
on project cost and schedule.
The primary tactic available for reducing the schedule risk of a critical dependence is to decompose a
task on the critical path, factoring out subtasks that can be performed earlier. For example, an
acceptance test phase late in a project is likely to have a critical dependence on development and
system integration. One cannot entirely remove this dependence, but its potential to delay project
completion is reduced by factoring test design from test execution.
Figure 4.2 shows alternative schedules for a simple project that starts at the beginning of January
and must be completed by the end of May. In the top schedule, indicated as CRITICAL SCHEDULE,
the tasks Analysis and design, Code and Integration, Design and execute subsystem tests, and Design
and execute system tests form a critical path that spans the duration of the entire project. A delay in
any of the activities will result in late delivery. In this schedule, only the Produce user documentation
task does not belong to the critical path, and thus only delays of this task can be tolerated.


In the middle schedule, marked as UNLIMITED RESOURCES, the test design and execution activities
are separated into distinct tasks. Test design tasks are scheduled early, right after analysis and design,
and only test execution is scheduled after Code and integration. In this way the tasks Design
subsystem tests and Design system tests are removed from the critical path, which now spans 16 weeks
with a tolerance of 5 weeks with respect to the expected termination of the project. This schedule
assumes enough resources for running Code and integration, Production of user documentation,
Design of subsystem tests, and Design of system tests.
The completed plan must include frequent milestones for assessing progress. A rule of thumb is that,
for projects of a year or more, milestones for assessing progress should occur at least every three
months. For shorter projects, a reasonable maximum interval for assessment is one quarter of project
duration. Figure 4.3 shows a possible schedule for the initial analysis and test plan for the business logic of the Chipmunk Web presence in the form of a Gantt diagram. In the initial plan, the
manager has allocated time and effort to inspections of all major artifacts, as well as test design as
early as practical and ongoing test execution during development. Division of the project into major
parts is reflected in the plan, but further elaboration of tasks associated with units and smaller
subsystems must await corresponding elaboration of the architectural design. Thus, for example,
inspection of the shopping facilities code and the unit test suites is shown as a single aggregate task.
Even this initial plan does reflect the usual Chipmunk development strategy of regular "synch and stabilize" periods punctuating development, and the initial quality plan reflects the Chipmunk strategy
of assigning responsibility for producing unit test suites to developers, with review by a member of the
quality team.


Figure 4.3: Initial schedule for quality activities in development of the business logic subsystem of the Chipmunk Web presence, presented as a Gantt diagram.

2.3 Risk Planning
Risk is an inevitable part of every project, and so risk planning must be a part of every plan. Risks
cannot be eliminated, but they can be assessed, controlled, and monitored.
The duration of integration, system, and acceptance test execution depends to a large extent on the
quality of software under test. Software that is sloppily constructed or that undergoes inadequate
analysis and test before commitment to the code base will slow testing progress. Even if responsibility
for diagnosing test failures lies with developers and not with the testing group, a test execution session
that results in many failures and generates many failure reports is inherently more time consuming
than executing a suite of tests with few or no failures. This schedule vulnerability is yet another reason
to emphasize earlier activities, in particular those that provide early indications of quality problems.
Inspection of design and code (with quality team participation) can help control this risk, and also
serves to communicate quality standards and best practices among the team. If unit testing is the
responsibility of developers, test suites are part of the unit deliverable and should undergo
inspection for correctness, thoroughness, and automation. While functional and structural coverage criteria are no panacea for measuring test thoroughness, it is reasonable to require that deviations from basic coverage criteria be justified on a case-by-case basis. A substantial deviation from the structural
coverage observed in similar products may be due to many causes, including inadequate testing,
incomplete specifications, unusual design, or implementation decisions. The modules that present unusually low structural coverage should be inspected to identify the cause.
Risks cannot be eliminated, but they can be assessed, controlled, and monitored
• Generic management risk
– personnel
– technology
– schedule
• Quality risk
– development
– execution
– requirements

Risk Management in the Quality Plan: Risks Generic to Process Management


Risk Management in the Quality Plan: Risks Specific to Quality Management


Here we provide a brief overview of some risks specific to the quality process.


2.4 Improving the Process


Many classes of faults that occur frequently are rooted in process and development flaws. For example, a shallow architectural design that does not take into account resource allocation can lead to resource allocation faults. Lack of experience with the development environment, which leads to
misunderstandings between analysts and programmers on rare and exceptional cases, can result in
faults in exception handling. A performance assessment system that rewards faster coding without
regard to quality is likely to promote low-quality code.
The occurrence of many such faults can be reduced by modifying the process and environment. For
example, resource allocation faults resulting from shallow architectural design can be reduced by
introducing specific inspection tasks. Faults attributable to inexperience with the development
environment can be reduced with focused training sessions. Persistently poor programming practices
may require modification of the reward system.
What are the faults? The goal of this first step is to identify a class of important faults. Faults are categorized by severity and kind. The severity of faults characterizes the impact of the fault on the product, as summarized below:
Critical: The product is unusable. Example: the fault causes the program to crash.

Severe: Some product features cannot be used, and there is no workaround. Example: the fault inhibits importing files saved with a previous version of the program, and there is no workaround.

Moderate: Some product features require workarounds to use, and reduce efficiency, reliability, or convenience and usability. Example: the fault inhibits exporting in Postscript format; Postscript can be produced using the printing facility, but with loss of usability and efficiency.

Cosmetic: Minor inconvenience. Example: the fault limits the choice of colors for customizing the graphical interface, violating the specification but causing only minor inconvenience.

Process improvement is done by monitoring and improvement within a project or across multiple projects, using two main techniques:
1. Orthogonal Defect Classification (ODC)


2. Root Cause Analysis (RCA)

For process improvement we need to break the faults down further, classifying them so that we can
find ways to either prevent or detect them more efficiently. ODC is one way to do this, especially for
large projects with a large amount of raw data (e.g., a large number of bug reports in a bug tracking
database).
ODC Classification of Triggers Listed by Activity

● Design Review and Code Inspection

Design Conformance A discrepancy between the reviewed artifact and a prior-stage artifact that
serves as its specification.
Logic/Flow An algorithmic or logic flaw.
Backward Compatibility A difference between the current and earlier versions of an artifact that
could be perceived by the customer as failure.
Internal Document An internal inconsistency in the artifact (e.g., inconsistency between code and
comments).
Lateral Compatibility An incompatibility between the artifact and some other system or module with
which it should interoperate.
Concurrency A fault in interaction of concurrent processes or threads.
Language Dependency A violation of language-specific rules, standards, or best practices.
Side Effects A potential undesired interaction between the reviewed artifact and some other part of
the system
Rare Situation An inappropriate response to a situation that is not anticipated in the artifact. (Error handling as specified in a prior artifact is design conformance, not rare situation.)

● Structural (White-Box) Test

Simple Path The fault is detected by a test case derived to cover a single program element.


Complex Path The fault is detected by a test case derived to cover a combination of program
elements.

● Functional (Black-Box) Test


Coverage The fault is detected by a test case derived for testing a single procedure (e.g., C function or
Java method), without considering combination of values for possible parameters.
Variation The fault is detected by a test case derived to exercise a particular combination of
parameters for a single procedure.
Sequencing The fault is detected by a test case derived for testing a sequence of procedure calls.
Interaction The fault is detected by a test case derived for testing procedure interactions.

● System Test

Workload/Stress The fault is detected during workload or stress testing.


Recovery/Exception The fault is detected while testing exceptions and recovery procedures.
Startup/Restart The fault is detected while testing initialization conditions during start up or after
possibly faulty shutdowns.
Hardware Configuration The fault is detected while testing specific hardware configurations.
Software Configuration The fault is detected while testing specific software configurations.
Blocked Test Failure occurred in setting up the test scenario.

ODC Classification of Customer Impact

Installability Ability of the customer to place the software into actual use. (Usability of the
installed software is not included.)
Integrity/Security Protection of programs and data from either accidental or malicious destruction or
alteration, and from unauthorized disclosure.
Performance The perceived and actual impact of the software on the time required for the customer and customer end users to complete their tasks.

Maintenance The ability to correct, adapt, or enhance the software system quickly and at minimal
cost.


Serviceability Timely detection and diagnosis of failures, with minimal customer impact.
Migration Ease of upgrading to a new system release with minimal disruption to existing customer
data and operations.
Documentation Degree to which provided documents (in all forms, including electronic) completely
and correctly describe the structure and intended uses of the software.
Usability The degree to which the software and accompanying documents can be understood and
effectively employed by the end user.
Standards The degree to which the software complies with applicable standards.
Reliability The ability of the software to perform its intended function without unplanned interruption
or failure.
Accessibility The degree to which persons with disabilities can obtain the full benefit of the software system.
Capability The degree to which the software performs its intended functions consistently with documented system requirements.
Requirements The degree to which the system, in complying with documented requirements, actually meets customer expectations.

ODC Classification of Defect Types for Targets Design and Code

Assignment/Initialization A variable was not assigned the correct initial value or was not assigned
any initial value.
Checking Procedure parameters or variables were not properly validated before use.
Algorithm/Method A correctness or efficiency problem that can be fixed by reimplementing a single
procedure or local data structure, without a design change.
Function/Class/Object A change to the documented design is required to conform to product requirements or interface specifications.
Timing/Synchronization The implementation omits necessary synchronization of shared resources, or violates the prescribed synchronization protocol.
Interface/Object-Oriented Messages Module interfaces are incompatible; this can include syntactically compatible interfaces that differ in semantic interpretation of communicated data.


Relationship Potentially problematic interactions among procedures, possibly involving different assumptions but not involving interface incompatibility.

ODC Fault Analysis

When we first apply the ODC method, we can perform some preliminary analysis using only part
of the collected information:

Distribution of fault types versus activities Different quality activities target different classes of
faults. For example, algorithmic (that is, local) faults are targeted primarily by unit testing, and we
expect a high proportion of faults detected by unit testing to be in this class. If the proportion of
algorithmic faults found during unit testing is unusually small, or a larger than normal proportion of
algorithmic faults are found during integration testing, then one may reasonably suspect that unit tests
have not been well designed. If the mix of faults found during integration testing contains an
unusually high proportion of algorithmic faults, it is also possible that integration testing has not
focused strongly enough on interface faults.

Distribution of triggers over time during field test Faults corresponding to simple usage should
arise early during field test, while faults corresponding to complex usage should arise late. In both
cases, the rate of disclosure of new faults should asymptotically decrease. Unexpected distributions of
triggers over time may indicate poor system or acceptance test. If triggers that correspond to simple
usage reveal many faults late in acceptance testing, we may have chosen a sample that is not
representative of the user population. If faults continue growing during acceptance test, system testing
may have failed, and we may decide to resume it before continuing with acceptance testing.

Age distribution over target code Most faults should be located in new and rewritten code, while
few faults should be found in base or re-fixed code, since base and re-fixed code has already been tested and corrected. Moreover, the proportion of faults in new and rewritten code with respect to base
and re-fixed code should gradually increase. Different patterns may indicate holes in the fault tracking
and removal process or may be a symptom of inadequate test and analysis that failed in revealing
faults early (in previous tests of base or re-fixed code). For example, an increase of faults located in
base code after porting to a new platform may indicate inadequate tests for portability.

Distribution of fault classes over time The proportion of missing code faults should gradually
decrease, while the percentage of extraneous faults may slowly increase, because missing
functionality should be revealed with use and repaired, while extraneous code or documentation
may be produced by updates.
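A first pass over the raw data for analyses like these can be a simple cross-tabulation. A hypothetical sketch (the record fields and category strings are illustrative):

```java
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;

/** One fault report, already ODC-classified (hypothetical shape). */
record FaultRecord(String activity, String defectType) {}

class OdcCrossTab {
    /** Counts of faults per detection activity and defect type. */
    static Map<String, Map<String, Long>> byActivityAndType(List<FaultRecord> faults) {
        return faults.stream().collect(
            groupingBy(FaultRecord::activity,
                       groupingBy(FaultRecord::defectType, counting())));
    }

    public static void main(String[] args) {
        var faults = List.of(
            new FaultRecord("unit test", "Algorithm/Method"),
            new FaultRecord("integration test", "Interface"),
            new FaultRecord("integration test", "Algorithm/Method"));
        // An unusually high share of Algorithm/Method faults surfacing only in
        // integration testing would suggest weak unit tests (see above).
        System.out.println(byActivityAndType(faults));
    }
}
```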

Root cause analysis (RCA)


A good RCA classification should follow the uneven distribution of faults across categories. If, for
example, the current process and the programming style and environment result in many interface faults,
we may adopt a finer classification for interface faults and a coarse-grain classification of other kinds of
faults. We may alter the classification scheme in future projects as a result of having identified and
removed the causes of many interface faults.

Classification of faults should be sufficiently precise to allow identifying one or two most significant
classes of faults considering severity, frequency, and cost of repair. It is important to keep in mind that
severity and repair cost are not directly related. We may have cosmetic faults that are very expensive to
repair, and critical faults that can be easily repaired. When selecting the target class of faults, we need to
consider all the factors. We might, for example, decide to focus on a class of moderately severe faults
that occur very frequently and are very expensive to remove, investing fewer resources in preventing a
more severe class of faults that occur rarely and are easily repaired.

When did faults occur, and when were they found?

It is typical of mature software processes to collect fault data sufficient to determine when each
fault was detected (e.g., in integration test or in a design inspection). In addition, for the class of
faults identified in the first step, we attempt to determine when those faults were introduced (e.g.,
was a particular fault introduced in coding, or did it result from an error in architectural design?).
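
A minimal sketch of the kind of record this requires, together with a summary of where faults were
introduced versus where they were detected. The phase names and data below are hypothetical:

    from collections import Counter

    # Hypothetical fault records: (phase_introduced, phase_detected).
    faults = [
        ("architectural design", "design inspection"),
        ("coding", "unit test"),
        ("coding", "integration test"),
        ("architectural design", "system test"),
        ("coding", "unit test"),
    ]

    # Summarize fault containment: long-lived faults (introduced early,
    # detected late) are the expensive ones the sensitivity principle warns about.
    for (introduced, detected), n in sorted(Counter(faults).items()):
        print(f"{n} fault(s) introduced in {introduced}, detected in {detected}")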

Why did faults occur?


In this core RCA step, we attempt to trace representative faults back to causes, with the objective
of identifying a "root" cause associated with many faults in the class. Analysis proceeds
iteratively by attempting to explain the error that led to the fault, then the cause of that error, the
cause of that cause, and so on. For example (purely illustrative), a buffer overflow might be traced
to a missing length check on external input, and that in turn to a coding standard that does not
require validation of externally supplied data; the gap in the coding standard, not the individual
missing check, is the root cause to address. The rule of thumb "ask why six times" does not provide
a precise stopping rule for the analysis, but suggests that several steps may be needed to find a
cause in common among a large fraction of the fault class under consideration.

How could faults be prevented?

The final step of RCA is improving the process by removing root causes or making early
detection likely. The measures taken may have a minor impact on the development process (e.g.,
adding consideration of exceptional conditions to a design inspection checklist), or may involve
a substantial modification of the process (e.g., making explicit consideration of exceptional
conditions a part of all requirements analysis and design steps). As in tracing causes, prescribing
preventative or detection measures requires judgment, keeping in mind that the goal is not
perfection but cost-effective improvement.

ODC and RCA are two examples of feedback and improvement, which are an important
dimension of most good software processes. Explicit process improvement steps are, for
example, featured in both SRET and Cleanroom.

The 80/20 or Pareto Rule


Fault classification in root cause analysis is justified by the so-called 80/20 or Pareto rule. The
Pareto rule is named for the Italian economist Vilfredo Pareto, who in the late nineteenth century
proposed a mathematical power law formula to describe the unequal distribution of wealth in his
country, observing that 20% of the people owned 80% of the wealth. Pareto observed that in
many populations, a few (20%) are vital and many (80%) are trivial. In fault analysis, the Pareto
rule postulates that 20% of the code is responsible for 80% of the faults. Although proportions
may vary, the rule captures two important facts:

1. Faults tend to accumulate in a few modules, so identifying potentially faulty modules can
improve the cost effectiveness of fault detection.
2. Some classes of faults predominate, so removing the causes of a predominant class of faults
can have a major impact on the quality of the process and of the resulting product. The
predominance of a few classes of faults justifies focusing on one class at a time.
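
The first point suggests a simple, mechanical analysis: rank modules by fault count and find the
smallest set of modules accounting for 80% of the faults. A sketch, using invented per-module counts:

    # Hypothetical fault counts per module.
    module_faults = {"parser": 42, "ui": 7, "net": 31, "db": 6, "auth": 5, "log": 2}

    total = sum(module_faults.values())
    cumulative, vital = 0, []
    # Take modules in decreasing order of fault count until 80% is covered.
    for module, count in sorted(module_faults.items(), key=lambda kv: -kv[1]):
        vital.append(module)
        cumulative += count
        if cumulative / total >= 0.8:
            break

    print(f"{len(vital)}/{len(module_faults)} modules account for "
          f"{100 * cumulative / total:.0f}% of faults: {vital}")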

2.5 The Quality Team


The quality plan must assign roles and responsibilities to people. As with other aspects of planning,
assignment of responsibility occurs at a strategic level and a tactical level. The tactical level,
represented directly in the project plan, assigns responsibility to individuals in accordance with the
general strategy. It involves balancing level of effort across time and carefully managing personal
interactions. The strategic level of organization is represented not only in the quality strategy
document, but in the structure of the organization itself.
The strategy for assigning responsibility may be partly driven by external requirements. For example,
independent quality teams may be required by certification agencies or by a client organization.
Additional objectives include ensuring sufficient accountability that quality tasks are not easily
overlooked; encouraging objective judgment of quality and preventing it from being subverted by
schedule pressure; fostering shared commitment to quality among all team members; and developing
and communicating shared knowledge and values regarding quality.
When quality tasks are distributed among groups or organizations, the plan should include specific
checks to ensure successful completion of quality activities. For example, when module testing is
performed by developers and integration and system testing is performed by an independent quality
team, the quality team should check the completeness of module tests performed by developers, for
example, by requiring satisfaction of coverage criteria or by inspecting module test suites. If
testing is performed by an independent organization under contract, the contract should
carefully describe the testing process, its results, and the required documentation, and the client
organization should verify satisfactory completion of the contracted tasks.
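
Such a completeness check can be partly automated. A sketch, assuming the quality team receives
per-module statement-coverage figures from the developers' test runs; the module names, numbers,
and threshold are invented for illustration:

    # Hypothetical statement-coverage results reported by developers' module tests.
    coverage = {"parser": 0.93, "net": 0.71, "db": 0.88}
    REQUIRED = 0.85  # illustrative threshold set in the quality plan

    # The quality team flags modules whose test suites miss the criterion.
    for module, cov in coverage.items():
        status = "ok" if cov >= REQUIRED else "REJECT: incomplete module tests"
        print(f"{module}: {cov:.0%} {status}")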

The plan must clearly define milestones and deliverables for outsourced activities, as well as
checks on the quality of delivery in both directions: Test organizations usually perform quick
checks to verify the consistency of the software to be tested with respect to some minimal
"testability" requirements; clients usually check the completeness and consistency of test
results. For example, test organizations may ask for the results of inspections of the delivered
artifact before they start testing, and may include some quick tests to verify the installability
and testability of the artifact. Clients may check that tests satisfy specified functional and
structural coverage criteria, and may inspect the test documentation to check its quality.
Although the contract should detail the relation between the development and the testing
groups, ultimately, outsourcing relies on mutual trust between organizations.

3. Documenting Analysis and Test


Mature software processes include documentation standards for all the activities of the
software process, including test and analysis activities. Documentation can be inspected to
verify progress against schedule and quality goals and to identify problems, supporting
process visibility, monitoring, and replicability.

Overview
Documentation is an important element of the software development process, including the
quality process. Complete and well-structured documents increase the reusability of test suites
within and across projects. Documents are essential for maintaining a body of knowledge that
can be reused across projects. Consistent documents provide a basis for monitoring and
assessing the process, both internally and for external authorities where certification is
desired. Finally, documentation includes summarizing and presenting data that forms the
basis for process improvement. Test and analysis documentation includes summary
documents designed primarily for human comprehension and details accessible to the
human reviewer but designed primarily for automated analysis.

Organizing Documents
In a small project with a sufficiently small set of documents, the arrangement of other project
artifacts (e.g., requirements and design documents) together with standard content (e.g.,
mapping of subsystem test suites to the build schedule) provides sufficient organization to
navigate through the collection of test and analysis documentation. In larger projects, it is
common practice to produce and regularly update a global guide for navigating among
individual documents.
Naming conventions help in quickly identifying documents. A typical standard for document
names would include keywords indicating the general scope of the document, its nature, the
specific document, and its version.
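
A convention of this kind can be enforced mechanically. The sketch below assumes a made-up name
format (scope, two-letter nature code, descriptive name, version); it is not the IEEE layout of
Figure 3.1, only an illustration of the idea:

    import re

    # Hypothetical convention: <scope>-<nature>-<name>-v<major>.<minor>
    # e.g. "WB12-TS-subsystem-build-plan-v2.1" for a test strategy document.
    PATTERN = re.compile(r"^(?P<scope>[A-Z0-9]+)-(?P<nature>[A-Z]{2})-"
                         r"(?P<name>[a-z][a-z-]*)-v(?P<version>\d+\.\d+)$")

    def check_name(filename):
        """Return the parsed fields, or None if the name violates the convention."""
        match = PATTERN.match(filename)
        return match.groupdict() if match else None

    print(check_name("WB12-TS-subsystem-build-plan-v2.1"))  # parsed fields
    print(check_name("notes_final_v2"))                     # None: violates convention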

Figure 3.1: Sample document naming conventions, compliant with IEEE standards.

Test Strategy Document

While the format of an analysis and test strategy varies from company to company, the structure of
an analysis and test plan is more standardized. A typical structure of a test and analysis plan
includes information about items to be verified, features to be tested, the testing approach, pass
and fail criteria, test deliverables, tasks, responsibilities and resources, and environment
constraints.
The overall quality plan usually comprises several individual plans of limited scope. Each test
and analysis plan should indicate the items to be verified through analysis or testing.
They may include specifications or documents to be inspected, code to be analyzed or tested, and
interface specifications to undergo consistency analysis. They may refer to the whole system or
to a part of it, such as a subsystem or a set of units. Where the project plan includes planned
development increments, the analysis and test plan indicates the applicable versions of items to
be verified. For each item, the plan should indicate any special hardware or external software
required for testing. For example, the plan might indicate that one suite of subsystem tests for a
security package can be executed with a software simulation of a smart card reader, while
another suite requires access to the physical device. Finally, for each item, the plan should
reference related documentation, such as requirements and design specifications, and user,
installation, and operations guides.
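
The structure above maps naturally onto a machine-checkable record, which also helps the global
guide mentioned earlier stay consistent. A sketch with illustrative field names (not a standard):

    from dataclasses import dataclass, field

    @dataclass
    class TestAndAnalysisPlan:
        """Skeleton of the typical plan structure described above (illustrative)."""
        items_to_verify: list = field(default_factory=list)   # specs, code, interfaces
        features_to_test: list = field(default_factory=list)
        approach: str = ""
        pass_fail_criteria: str = ""
        deliverables: list = field(default_factory=list)
        responsibilities: dict = field(default_factory=dict)  # task -> owner
        environment: list = field(default_factory=list)       # e.g. special hardware

    plan = TestAndAnalysisPlan(
        items_to_verify=["security subsystem v1.2"],
        environment=["smart card reader simulator"],
    )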

Test Design Specification Documents


Design documentation for test suites and test cases serves essentially the same purpose as other
software design documentation: guiding further development and preparing for maintenance.
Test suite design must include all the information needed for initial selection of test cases and
maintenance of the test suite over time, including rationale and anticipated evolution.
Specification of individual test cases includes purpose, usage, and anticipated changes.
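
As a sketch, the per-test-case information might be captured in a record like the following; the
field names are illustrative, not taken from any standard:

    from dataclasses import dataclass

    @dataclass
    class TestCaseSpec:
        """Minimal record for an individual test case (illustrative fields)."""
        identifier: str
        purpose: str              # what the test case is meant to reveal
        usage: str                # inputs, environment, expected outcome
        anticipated_changes: str  # how the case is expected to evolve

    tc = TestCaseSpec(
        identifier="TC-cfg-001",
        purpose="reject malformed configuration files",
        usage="run check-configuration on a file missing a mandatory field",
        anticipated_changes="extend when new mandatory fields are added",
    )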

Test and Analysis Reports


Reports of test and analysis results serve both developers and test designers. They identify open faults
for developers and aid in scheduling fixes and revisions. They help test designers assess and refine
their approach, for example, noting when some class of faults is escaping early test and analysis and
showing up only in subsystem and system testing.
Functional Test Design Specification of check configuration

Test Case Specification for check Configuration
