
Ideal Institute of Technology, Ghaziabad

Department of CSE/IT
Model Solution - Sessional Test 2
Course: B.Tech                          Semester: VII
Session: 2016-17                        Section: CSE+IT+IMT
Subject: Software Testing & Audit       Sub. Code: NEOE-073
Max Marks: 50                           Time: 1.5 hours

Note: All questions are compulsory. All questions carry equal marks.

SECTION-A
1. Attempt all the parts of the following.                               (2x5=10)

(a) What is testing? Differentiate between effective and exhaustive software testing.
Sol. Software testing is a process carried out with the intent of finding errors. It is also an
investigation conducted to provide stakeholders with information about the quality of the
product or service under test. Software testing can also provide an objective, independent
view of the software to allow the business to appreciate and understand the risks of
software implementation. Test techniques include the process of executing a program or
application with the intent of finding software bugs (errors or other defects).
Effective software testing: The effectiveness of testing can be measured if the goal and
purpose of the testing effort are clearly defined. Some of the typical testing goals are:

Testing in each phase of the development cycle to ensure that bugs (defects) are
eliminated at the earliest
Testing to ensure no bugs creep through into the final product
Testing to ensure the reliability of the software
Above all, testing to ensure that user expectations are met

Unreliable software can severely hurt businesses and endanger lives, depending on the
criticality of the application. Even the simplest application, if poorly written, can deteriorate
the performance of your environment (the servers, the network), thereby causing an
unwanted mess.
Exhaustive software testing: Exhaustive testing is a test approach in which all possible
data combinations are used for testing. This includes the implicit data combinations
present in the state of the software/data at the start of testing. In this type of testing we try
to check the output given by the software for every possible input; in fact, all the
permutations and combinations of the inputs are used.

Complete or exhaustive testing is nearly impossible because of the following reasons (a
rough illustration follows the list):

The domain of possible inputs of a program is too large to be completely used in testing a
system. There are both valid inputs and invalid inputs. The program may have a large
number of states. There may be timing constraints on the inputs, that is, an input may be
valid at a certain time and invalid at other times. An input value which is valid but is not
properly timed is called an inopportune input. The input domain of a system can be too
large to be completely used in testing a program.
The design issues may be too complex to test completely. The design may have included
implicit design decisions and assumptions. For example, a programmer may use a global
variable or a static variable to control program execution.
It may not be possible to create all possible execution environments of the system. This
becomes more significant when the behaviour of the software system depends on the real,
outside world, such as weather, temperature, altitude, pressure, and so on.
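To get a feel for why exhaustive testing is impractical, the following rough calculation (an
illustrative Python sketch added here, not part of the original solution) counts the input
combinations for a function that merely takes two 32-bit integers:

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def exhaustive_test_years(bits_per_input, inputs, tests_per_second):
    """Years needed to run one test per possible input combination."""
    combinations = (2 ** bits_per_input) ** inputs
    return combinations / tests_per_second / SECONDS_PER_YEAR

# Two 32-bit integer parameters, an optimistic one million tests per second.
print(f"{exhaustive_test_years(32, 2, 1e6):.2e} years")   # about 5.8e+05 years

Even under these generous assumptions, running every combination would take hundreds of
thousands of years, which is why practical testing relies on a selected, effective set of test
cases instead.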

(b) Define the following terms:
Errors, faults and failures.
Sol. People make errors; a good synonym is mistake. An error may be a syntax error or a
misunderstanding of the specifications; sometimes there are logical errors. When developers
make mistakes while coding, we call these mistakes bugs.
A fault is the representation of an error, where representation is the mode of expression,
such as narrative text, data flow diagrams, ER diagrams, source code, etc. Defect is a good
synonym for fault.
A failure occurs when a fault executes. A particular fault may cause different failures,
depending on how it has been exercised. Humans are prone to mistakes: when people are
involved in decision making or some thought process, errors may happen in reading,
writing, speech, understanding, etc.
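The three terms can be tied together with a small, hypothetical code example (added for
illustration; the function and values are not from the original solution): the programmer's
error (writing range(1, n) instead of range(1, n + 1)) becomes a fault in the source code,
and a failure is observed only when an input exercises that fault.

def sum_up_to(n):
    """Intended to return 1 + 2 + ... + n, but contains a fault."""
    total = 0
    for i in range(1, n):        # fault: should be range(1, n + 1)
        total += i
    return total

print(sum_up_to(0))    # 0  -- the fault is not exercised, so no failure is observed
print(sum_up_to(5))    # 10 -- a failure: the specification requires 15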
(c) Differentiate between testing and debugging.
Sol.: Difference between Testing and Debugging
1. Testing always starts with known conditions, uses predefined methods, and has
predictable outcomes. Debugging starts from possibly unknown initial conditions, and its
end cannot be predicted except statistically.
2. Testing can and should definitely be planned, designed, and scheduled. The procedures
for, and duration of, debugging cannot be so constrained.
3. Testing is a demonstration of the programmer's failure. Debugging is the programmer's
vindication.
4. Testing is a demonstration of error or apparent correctness. Debugging is always treated
as a deductive process.
5. Testing, as executed, should strive to be predictable, dull, constrained, rigid, and
inhuman. Debugging demands intuitive leaps, conjectures, experimentation, and some
freedom.
6. Much of the testing can be done without design knowledge. Debugging is impossible
without detailed design knowledge.
7. Testing can often be done by an outsider. Debugging must be done by an insider.
8. Much of test execution and design can be automated. Automated debugging is still a
dream for programmers.
9. The purpose of testing is to find bugs. The purpose of debugging is to find the cause of a
bug.

(d) Why do bugs occur? How many states does a bug have?
Sol. Various reasons for the occurrence of bugs in software are as follows:
1. Human factor: Software is developed by human beings, and human beings are prone to
make mistakes. It would therefore be foolish to expect the software to be perfect and free
of defects.
2. Communication failure: Another common reason for software defects can be
miscommunication, lack of communication or erroneous communication during
software development.
3. Unrealistic development timeframe: Let's face it, more often than not software is
developed under tight release schedules, with limited or insufficient resources and with
unrealistic project deadlines.
4. Poor design logic: In this era of complex software systems development, sometimes
the software is so complicated that it requires some level of R&D and brainstorming to
reach a reliable solution. Lack of patience and an urge to complete it as quickly as
possible may lead to errors.
5. Poor coding practices: Sometimes errors are slipped into the code due to simply bad
coding. Bad coding practices such as inefficient or missing error/exception handling,
lack of proper validations (datatypes, field ranges, boundary conditions, memory
overflows etc.) may lead to introduction of errors in the code.
6. Buggy third-party tools: Quite often during software development we require many
third-party tools, which in turn are software and may contain some bugs in them.
These tools could be tools that aid in the programming (e.g. class libraries, shared
DLLs, compilers, HTML editors, debuggers etc.)
7. Lack of skilled testing: No tester would want to accept it, but let's face it: poor testing
does take place across organizations, and there can be shortcomings in the testing process
that is followed.
BUG LIFE CYCLE: A bug typically passes through the following states during its life cycle:
New, Assigned/Open, Fixed, Retest, Verified, and Closed; a bug may also be Reopened,
Deferred, Rejected (Not a Bug), or marked as a Duplicate.

(e) Write a note on test metrics and measurements.


Sol. A software metric is a quantitative measure of a degree to which a software system
or process possesses some property. Since quantitative measurements are essential in all
sciences, there is a continuous effort by computer science practitioners and theoreticians to
bring similar approaches to software development. The goal is obtaining objective,
reproducible and quantifiable measurements, which may have numerous valuable
applications in schedule and budget planning, cost estimation, quality assurance testing,
software debugging, software performance optimization, and optimal personnel task
assignments.
COMMON SOFTWARE MEASUREMENTS:

Code coverage
Cohesion
Coupling
Cyclomatic Complexity
Number of lines of code
Program execution time
Program load time
Program size.
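As a small illustration (a hedged sketch, not part of the original answer), two of the
measurements listed above, lines of code and program execution time, can be collected for
a simple function as follows:

import inspect
import timeit

def linear_search(items, target):
    """Toy function whose size and speed we measure."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

lines_of_code = len(inspect.getsource(linear_search).splitlines())
execution_time = timeit.timeit(lambda: linear_search(list(range(1000)), 999), number=1000)
print("Lines of code:", lines_of_code)
print(f"Execution time for 1000 runs: {execution_time:.4f} s")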
SECTION-B

1. Answer any four parts of the following.                               (7x4=28)

(a) What is software testing life cycle?

Sol. Software Testing Life Cycle (STLC) is the testing process that is executed in a
systematic and planned manner. In the STLC process, different activities are carried out to
improve the quality of the product. The following steps are involved in a typical STLC;
each step has its own entry criteria and deliverables.
Requirement Analysis: Requirement analysis is the very first step in the STLC. In this
step the Quality Assurance (QA) team understands the requirements in terms of what will
be tested and identifies the testable requirements. If any requirement is conflicting,
missing, or not understood, the QA team follows up with the various stakeholders, such as
the Business Analyst or System Architect.
Test Planning: Test planning is the most important phase of the STLC, where the entire
testing strategy is defined. This phase is also called the test strategy phase. Typically the
Test Manager (or Test Lead, depending on the company) is involved in determining the
effort and cost estimates for the entire project.
Test Case Development: The test case development activity starts once test planning is
finished. This is the phase of the STLC in which the testing team writes the detailed test
cases (a sample test case is sketched after this list). Along with the test cases, the testing
team also prepares any test data required for testing. Once the test cases are ready, they are
reviewed by peer members or the QA lead.
Environment Setup: Setting up the test environment is a vital part of the STLC. The test
environment decides the conditions under which the software is tested. This is an
independent activity and can be started in parallel with test case development.
Test Execution: Once test case development and test environment setup are completed,
the test execution phase can be kicked off. In this phase the testing team starts executing
test cases based on the test plan and the test cases prepared in the prior step.
Test Cycle Closure: Call a meeting of the testing team members and evaluate the cycle
completion criteria based on test coverage, quality, cost, time, critical business objectives,
and the software itself. Discuss what went well and which areas need to be improved, and
take the lessons from the current STLC as input to upcoming test cycles; this helps remove
bottlenecks in the STLC process.
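For illustration, a detailed test case produced during the Test Case Development phase
might look like the following sketch (the field names and values are assumed for the
example, not prescribed by the STLC):

test_case = {
    "id": "TC_LOGIN_001",
    "requirement": "REQ-4.2 User login",
    "preconditions": ["User 'demo' exists", "Application is reachable"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "test_data": {"username": "demo", "password": "demo@123"},
    "expected_result": "User is redirected to the dashboard",
    "status": "Not Executed",   # updated later, during the Test Execution phase
}
print(test_case["id"], "-", test_case["expected_result"])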

(b)Explain white-box testing. What are the types of errors detected by black-box
testing?

Sol. White-box testing (also known as clear box testing, glass box testing, transparent box
testing, and structural testing) is a method of testing software that tests internal structures
or workings of an application, as opposed to its functionality (i.e. black-box testing). In
white-box testing an internal perspective of the system, as well as programming skills, are
used to design test cases. The tester chooses inputs to exercise paths through the code and
determine the appropriate outputs.
White-box test design techniques include the following code coverage criteria:
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
Modified condition/decision coverage
Prime path testing
Path testing.
Advantages: White-box testing is one of the two biggest testing methodologies used today. It
has several major advantages:
1. Having knowledge of the source code is beneficial to thorough testing.
2. Optimization of code by revealing hidden errors and being able to remove these
possible defects.
3. Gives the programmer introspection because developers carefully describe any new
implementation.
4. Provides traceability of tests from the source, allowing future changes to the software
to be easily captured in changes to the tests.
5. White box tests are easy to automate.
6. White-box testing gives clear, engineering-based rules for when to stop testing.
Disadvantages: Although white-box testing has great advantages, it is not perfect and has some
disadvantages:
1. White-box testing brings complexity to testing because the tester must have
knowledge of the program, including being a programmer. White-box testing requires
a programmer with a high-level of knowledge due to the complexity of the level of
testing that needs to be done.
2. On some occasions, it is not realistic to be able to test every single existing condition
of the application and some conditions will be untested.
3. The tests focus on the software as it exists, and missing functionality may not be
discovered.
Black Box Testing, also known as Behavioral Testing, is a software testing method in
which the internal structure/ design/ implementation of the item being tested is not known
to the tester. These tests can be functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of the tester, is like a
black box, inside which one cannot see. This method attempts to find errors in the
following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors
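A minimal black-box sketch (an assumed example, not part of the model solution) shows how
such errors are hunted purely from the specification. Here the tests are derived only from
the stated rule "grade(marks) returns 'PASS' for 40-100, 'FAIL' for 0-39, and raises
ValueError otherwise"; the implementation below merely stands in for the system under test
and is not inspected while designing the tests.

import unittest

def grade(marks):
    """Stand-in for the system under test; treated as a black box."""
    if not 0 <= marks <= 100:
        raise ValueError("marks must be between 0 and 100")
    return "PASS" if marks >= 40 else "FAIL"

class GradeBlackBoxTests(unittest.TestCase):
    def test_boundary_values(self):
        self.assertEqual(grade(39), "FAIL")    # just below the pass boundary
        self.assertEqual(grade(40), "PASS")    # on the boundary
        self.assertEqual(grade(100), "PASS")   # upper limit of the valid range

    def test_invalid_input(self):
        with self.assertRaises(ValueError):    # interface / termination error category
            grade(-1)

if __name__ == "__main__":
    unittest.main()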
(c)What is the objective of white-box testing? What is the significance of cyclomatic
complexity?
Sol. The Major objective of white-box testing is to focus on internal program structure,
and discover all internal program errors.
The major testing focuses:
- Program structures
- Program statements and branches
- Various kinds of program paths
- Program internal logic and data structures
- Program internal behaviors and states.
The goal is to:
- Guarantee that all independent paths within a module have been exercised at least
once.
- Exercise all logical decisions on their true and false sides.
- Execute all loops at their boundaries and within their operational bounds.
- Exercise internal data structures to assure their validity.
- Exercise all data define and use paths.
Significance of Cyclomatic complexity:
Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical complexity of a program.
When this metric is used in the context of basis path testing, the value computed for
cyclomatic complexity defines the number of independent paths in the basis set of a
program.
There are three ways to compute cyclomatic complexity:
- The number of regions of the flow graph corresponds to the cyclomatic complexity.
- Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E
is the number of flow graph edges and N is the number of flow graph nodes.
- Cyclomatic complexity, V(G) = P + 1, where P is the number of predicate nodes contained
in the flow graph G.

McCabe's cyclomatic complexity is a software quality metric that quantifies the complexity
of a software program. Complexity is inferred by measuring the number of linearly
independent paths through the program; the higher the number, the more complex the code.
The Significance of the McCabe Number
Measurement of McCabe's cyclomatic complexity metric ensures that developers are
sensitive to the fact that programs with high McCabe numbers (e.g. > 10) are likely to be
difficult to understand and therefore have a higher probability of containing defects. The
cyclomatic complexity number also indicates the number of test cases that would have to
be written to execute all paths in a program.
Calculating the McCabe Number
Cyclomatic complexity is derived from the control flow graph of a program as follows:
Cyclomatic complexity (CC) = E - N + 2P
Where:
P = number of disconnected parts of the flow graph (e.g. a calling program and a
subroutine)
E = number of edges (transfers of control)
N = number of nodes (sequential group of statements containing only one transfer of
control)
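As a worked illustration (added here as a sketch, not part of the original solution), consider
a small Python function with two predicate nodes, the loop condition and the if; its
cyclomatic complexity is therefore P + 1 = 3, and one drawing of its flow graph with 8 edges
and 7 nodes gives the same value from E - N + 2P.

def classify_sum(values):
    total = 0
    for v in values:          # predicate node 1 (loop condition)
        total += v
    if total > 100:           # predicate node 2
        return "large"
    return "small"

# Counts taken from one hand-drawn control flow graph of classify_sum:
E, N, P_parts = 8, 7, 1
print(E - N + 2 * P_parts)    # V(G) = 3, so three basis-path test cases are needed

This also matches the region-counting rule: that flow graph encloses three regions (the loop
region, the if region, and the surrounding region).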
(d) Write short notes on the following:
(i) Model based testing
(ii) Structural testing
Sol. (i) Model based testing: Model-based testing is an application of model-based design
for designing, and optionally also executing, artifacts to perform software testing or system
testing. Models can be used to represent the desired behavior of a System Under Test
(SUT), or to represent testing strategies and a test environment.

A model describing a SUT is usually an abstract, partial presentation of the SUT's desired
behavior. Test cases derived from such a model are functional tests on the same level of
abstraction as the model. These test cases are collectively known as an abstract test suite.
An abstract test suite cannot be directly executed against an SUT because the suite is on
the wrong level of abstraction. An executable test suite needs to be derived from a
corresponding abstract test suite.

Deploying model-based testing:
There are various known ways to deploy model-based testing, which include online testing,
offline generation of executable tests, and offline generation of manually deployable tests.
Online testing means that a model-based testing tool connects directly to an SUT and tests
it dynamically.
Offline generation of executable tests means that a model-based testing tool generates test
cases as computer-readable assets that can be later run automatically; for example, a
collection of Python classes that embodies the generated testing logic.
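As a small, hypothetical sketch of the offline-generation idea (the model and action names
are invented for illustration), a tiny finite-state model of a login screen can be walked to
produce abstract test cases, i.e. state/action sequences that an adapter layer would later
turn into executable tests:

# Transition model: (current state, action) -> next state
MODEL = {
    ("LoggedOut", "enter_valid_credentials"): "LoggedIn",
    ("LoggedOut", "enter_invalid_credentials"): "LoggedOut",
    ("LoggedIn", "logout"): "LoggedOut",
}

def generate_abstract_tests(start="LoggedOut", depth=2):
    """Enumerate all action sequences of the given length from the model."""
    tests = []
    def walk(state, path):
        if len(path) == depth:
            tests.append(path)
            return
        for (src, action), dst in MODEL.items():
            if src == state:
                walk(dst, path + [action])
    walk(start, [])
    return tests

for case in generate_abstract_tests():
    print(" -> ".join(case))    # each printed line is one abstract test case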
(ii) Structural testing:
The structural testing is the testing of the structure of the system or component.
Structural testing is often referred to as white box or glass box or clear-box testing
because in structural testing we are interested in what is happening inside the
system/application.
In structural testing the testers are required to have the knowledge of the internal
implementations of the code. Here the testers require knowledge of how the software is
implemented, how it works.
Structural testing can be used at all levels of testing. Developers use structural testing in
component testing and component integration testing, especially where there is good tool
support for code coverage. Structural testing is also used in system and acceptance
testing, but the structures are different. For example, the coverage of menu options or
major business transactions could be the structural element in system or acceptance
testing.
White-box or Structural test design techniques include the following code
coverage criteria:
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
Modified condition/decision coverage
Prime path testing
Path testing.
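A short sketch (an assumed example, not from the original answer) makes the difference
between the statement coverage and decision/branch coverage criteria listed above concrete:
a single test input can execute every statement of discount() yet still leave the False branch
of the decision unexercised.

def discount(amount, is_member):
    price = amount
    if is_member:                 # the decision under test
        price = amount * 0.9
    return price

# Statement coverage: this one call executes every statement ...
assert discount(100, True) == 90.0
# ... but branch/decision coverage additionally needs the False outcome:
assert discount(100, False) == 100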
(e) What is the difference between system testing and acceptance testing?
Sol.: Difference between System Testing and Acceptance Testing
1. System testing is not known by any other name. Acceptance testing includes alpha
testing, so it is also sometimes known as alpha testing.
2. The user is not involved in system testing. The user is completely involved in acceptance
testing.
3. System testing is performed before acceptance testing. Acceptance testing is performed
after system testing.
4. System testing is not the final stage of validation. Acceptance testing is the final stage of
validation.
5. System testing of software or hardware is testing conducted on a whole, integrated
system to evaluate the system's compliance with its specified set of requirements.
Acceptance testing of software or hardware is testing conducted to evaluate the system's
compliance with its specified set of user requirements.
6. The only things a tester should be testing at the system test stage are things that he or she
could not test before. The only things the developer and user should be testing at the
acceptance test stage are things that the developer and tester could not test before.
7. System testing is done to check how the system as a whole is functioning; here the
functionality and performance of the system are validated. Acceptance testing is done by
the developer before releasing the product, to check whether it meets the user's
requirements, and by the user, to decide whether to accept the product or not.
8. System testing is used to check whether the system meets the defined specifications.
Acceptance testing is used to check whether the system meets the defined user
requirements.
9. System testing refers to the testing of the complete system as a whole; it is carried out by
testers, and sometimes by developers, to check whether the system meets the system
specifications. Acceptance testing is carried out by the users to determine whether they
accept delivery of the system; it is normally performed by users, and sometimes developers
are also involved.
10. System testing determines the developer's and tester's satisfaction that the system meets
its specifications. Acceptance testing determines the customer's satisfaction with the
software product.
11. System testing involves both functional and non-functional testing. Acceptance testing
involves only functional testing, based on the requirements given by the client/user.

SECTION-C
2. Answer any one part of the following.                               (12x1=12)

(a) (i) How does regression testing help in producing quality software?
Sol: Regression testing means retesting the unchanged parts of the application. Test cases
are re-executed in order to check whether the previous functionality of the application is
working fine and the new changes have not introduced any new bugs. This test can be
performed on a new build when there is a significant change in the original functionality, or
even after a single bug fix.

Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features,
including old bugs that have come back. Such regressions occur whenever software
functionality that was previously working correctly stops working as intended. Typically,
regressions occur as an unintended consequence of program changes, when the newly
developed part of the software collides with the previously existing code. Common
methods of regression testing include re-running previous sets of test cases and checking
whether previously fixed faults have re-emerged. The depth of testing depends on the phase
in the release process and the risk of the added features. Regression tests can either be
complete, for changes added late in the release or deemed to be risky, or very shallow,
consisting of positive tests on each feature, if the changes are early in the release or deemed
to be of low risk. Regression testing is typically the largest test effort in commercial
software development, due to checking numerous details in prior software features, and
even new software can be developed while using some old test cases to test parts of the
new design to ensure prior functionality is still supported.
Regression Testing Services
Do you face challenges in achieving effective regression cycles without an escalation in
costs?
Are there proven methods which help reduce maintenance costs and regression cycle time,
while improving the quality of testing?
How do you determine when to stop testing, and whether all risks have been addressed?
How can you accelerate the releases of systems under test?

Benefits of Regression Testing
Achieve risk-free regression testing
Provide precision in regression testing to facilitate maximum coverage through a minimal
number of test cases
Increase the productivity and efficiency of quality assurance applications
Reduce time to market significantly
(ii) Explain the following regression test selection techniques:
(i) Minimization
Sol: Minimization Techniques - Minimization-based regression test selection techniques
(e.g., Fischer et al. [1981] and Hartmann and Robson [1990]), hereafter referred to as
minimization techniques, attempt to select minimal sets of test cases from T that yield
coverage of modified or affected portions of P.
For example, the technique of Fischer et al. [1981] uses systems of linear equations to
express relationships between test cases and basic blocks (single-entry, single-exit
sequences of statements in a procedure). The technique uses a 0-1 integer programming
algorithm to identify a subset T' of T that ensures that every segment that is statically
reachable from a modified segment is exercised by at least one test case in T' that also
exercises the modified segment.
(ii) Dataflow
Sol: Dataflow Techniques - Dataflow-coverage-based regression test selection techniques
(e.g., Harrold and Soffa [1988], Ostrand and Weyuker [1988], and Taha et al. [1989]),
hereafter referred to as dataflow techniques, select test cases that exercise data interactions
that have been affected by modifications.
For example, the technique of Harrold and Soffa [1988] requires that every definition-use
pair that is deleted from P, new in P', or modified for P' be tested. The technique selects
every test case in T that, when executed on P, exercised deleted or modified definition-use
pairs, or executed a statement containing a modified predicate.
(iii) Ad hoc
Sol: Ad Hoc/Random Techniques - When time constraints prohibit the use of a retest-all
approach, but no test selection tool is available, developers often select test cases based on
hunches, or loose associations of test cases with functionality. Another simple approach is
to randomly select a predetermined number of test cases from T.
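A minimal sketch of the random variant just described (the test-case names and the budget
are assumed for illustration): when no selection tool is available, a predetermined number of
test cases is simply sampled from the full suite T.

import random

T = [f"TC_{i:03d}" for i in range(1, 201)]     # the full regression suite T

def random_selection(test_suite, budget, seed=42):
    """Ad hoc/random technique: pick 'budget' test cases from the suite."""
    rng = random.Random(seed)                   # fixed seed keeps the run reproducible
    return rng.sample(test_suite, k=budget)

selected = random_selection(T, budget=20)
print(len(selected), "of", len(T), "test cases selected, e.g.", selected[:5])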

(b) Write short notes on the following:


(i) Iterative testing
Sol: Iterative design is a design methodology based on a cyclic process of
prototyping, testing, analyzing, and refining a product or process. Based on the results
of testing the most recent iteration of a design, changes and refinements are made.
This process is intended to ultimately improve the quality and functionality of a
design. In iterative design, interaction with the designed system is used as a form of
research for informing and evolving a project, as successive versions, or iterations of
a design are implemented.
Iterative design is commonly used in the development of human-computer interfaces. This
allows designers to identify any usability issues that may arise in the user interface before it
is put into wide use. Even the best usability experts cannot design perfect user interfaces in
a single attempt, so a usability engineering lifecycle should be built around the concept of
iteration.
The typical steps of iterative design in user interfaces are as follows:
1. Complete an initial interface design
2. Present the design to several test users
3. Note any problems had by the test users
4. Refine the interface to account for/fix the problems
5. Repeat steps 2-4 until the user interface problems are resolved
(ii) Defect seeding

Sol: Defect seeding is a practice in which defects are intentionally inserted into a
program by one group for detection by another group. The ratio of the number of
seeded defects detected to the total number of defects seeded provides a rough idea of
the total number of unseeded defects that have been detected.
Suppose on GigaTron 3.0 that you intentionally seeded the program with 50 errors. For
best effect, the seeded errors should cover the full breadth of the product's functionality
and the full range of severities, ranging from crashing errors to cosmetic errors.
Suppose that at a point in the project when you believe testing to be almost complete
you look at the seeded defect report. You find that 31 seeded defects and 600
indigenous defects have been reported. You can estimate the total number of defects
with the formula:
IndigenousDefectsTotal = ( SeededDefectsPlanted / SeededDefectsFound ) * IndigenousDefectsFound

This technique suggests that GigaTron 3.0 has approximately 50 / 31 * 600 = 967 total
defects.
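The estimate can be captured in a couple of lines of code (a minimal sketch of the formula
above, using the same GigaTron 3.0 numbers):

def estimate_indigenous_defects(seeded_planted, seeded_found, indigenous_found):
    """IndigenousDefectsTotal = (SeededDefectsPlanted / SeededDefectsFound) * IndigenousDefectsFound"""
    return seeded_planted / seeded_found * indigenous_found

total = estimate_indigenous_defects(seeded_planted=50, seeded_found=31, indigenous_found=600)
print(int(total))            # 967, matching the worked figure above
print(int(total) - 600)      # roughly 367 indigenous defects are estimated to remain undetected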
To use defect seeding, you must seed the defects prior to the beginning of the tests
whose effectiveness you want to ascertain. If your testing uses manual methods and
has no systematic way of covering the same testing ground twice, you should seed
defects before that testing begins. If your testing uses fully automated regression tests,
you can seed defects virtually any time to ascertain the effectiveness of the automated
tests.
A common problem with defect seeding programs is forgetting to remove the seeded
defects. Another common problem is that removing the seeded defects introduces new
errors. To prevent these problems, be sure to remove all seeded defects prior to final
system testing and product release. A useful implementation standard for seeded errors
is to require them to be implemented only by adding one or two lines of code that
create the error; this standard assures that you can remove the seeded errors safely by
simply removing the erroneous lines of code.
(iii) Test Planning
Sol: The IEEE Standard for Software Test Documentation defines a test plan as "a document
describing the scope, approach, resources and schedule of intended testing activities".
A Software Test Plan is a document describing the testing scope and activities. It is the
basis for formally testing any software/product in a project.
ISTQB Definition
test plan: A document describing the scope, approach, resources and schedule of intended
test activities. It identifies, amongst others, test items, the features to be tested, the testing
tasks, who will do each task, the degree of tester independence, the test environment, the
test design techniques and entry and exit criteria to be used, the rationale for their choice,
and any risks requiring contingency planning. It is a record of the test planning process.
master test plan: A test plan that typically addresses multiple test levels.
phase test plan: A test plan that typically addresses one test phase.

TEST PLAN TYPES
One can have the following types of test plans:
Master Test Plan: A single high-level test plan for a project/product that unifies all other
test plans.
Testing Level Specific Test Plans: Plans for each level of testing.
Unit Test Plan
Integration Test Plan
System Test Plan
Acceptance Test Plan
Testing Type Specific Test Plans: Plans for major types of testing like Performance Test
Plan and Security Test Plan.
TEST PLAN TEMPLATE
The format and content of a software test plan vary depending on the processes,
standards, and test management tools being implemented. Nevertheless, the following
format, which is based on IEEE standard for software test documentation, provides a
summary of what a test plan can/should contain.
Test Plan Identifier:
Provide a unique identifier for the document. (Adhere to the Configuration
Management System if you have one.)
Introduction:
Provide an overview of the test plan.
Specify the goals/objectives.
Specify any constraints.
References:
List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan
Test Items:
List the test items (software/products) and their versions.
Features to be Tested:
List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the
features to be tested
Features Not to Be Tested:
List the features of the software/product which will not be tested.
Specify the reasons these features won't be tested.
Approach:
Mention the overall approach to testing.
Specify the testing levels [if it's a Master Test Plan], the testing types, and the
testing methods [Manual/Automated; White Box/Black Box/Gray Box]
Item Pass/Fail Criteria:
Specify the criteria that will be used to determine whether each test item
(software/product) has passed or failed testing.
Suspension Criteria and Resumption Requirements:
Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.
Test Deliverables:
List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports

Test Environment:
Specify the properties of test environment: hardware, software, network etc.
List any testing or related tools.
Estimate:
Provide a summary of test estimates (cost or effort) and/or provide a link to the
detailed estimation.
Schedule:
Provide a summary of the schedule, specifying key test milestones, and/or provide a
link to the detailed schedule.
Staffing and Training Needs:
Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
List the responsibilities of each team/role/individual.
Risks:
List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.
Assumptions and Dependencies:
List the assumptions that have been made during the preparation of this plan.
List the dependencies.
Approvals:
Specify the names and roles of all persons who must approve the plan.

Provide space for signatures and dates. (If the document is to be printed.)
TEST PLAN GUIDELINES
Make the plan concise. Avoid redundancy and superfluousness. If you think you do
not need a section that has been mentioned in the template above, go ahead and delete
that section in your test plan.
Be specific. For example, when you specify an operating system as a property of a
test environment, mention the OS Edition/Version as well, not just the OS Name.
Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
Have the test plan reviewed a number of times prior to baselining it or sending it
for approval. The quality of your test plan speaks volumes about the quality of the
testing you or your team is going to perform.
Update the plan as and when necessary. An out-dated and unused document stinks
and is worse than not having the document in the first place.

(iv)Software Quality
Sol: DEFINITION-Software quality is the degree of conformance to explicit or
implicit requirements and expectations.
Explanation:
Explicit: clearly defined and documented
Implicit: not clearly defined and documented but indirectly suggested
Requirements: business/product/software requirements
Expectations: mainly end-user expectations
Note: Some people tend to accept quality as compliance to only explicit requirements
and not implicit requirements. We tend to think of such people as lazy.
Definition by IEEE
- The degree to which a system, component, or process meets specified requirements.
- The degree to which a system, component, or process meets customer or user needs or
expectations.
Definition by ISTQB
- quality: The degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations.
- software quality: The totality of functionality and features of a software product that bear
on its ability to satisfy stated or implied needs.
As with any definition, the definition of software quality is also varied and debatable.
Some even say that quality cannot be defined and some say that it can be defined but
only in a particular context. Some even state confidently that quality is lack of bugs.
Whatever the definition, it is true that quality is something we all aspire to.
Software quality has many dimensions. See Dimensions of Quality.
In order to ensure software quality, we undertake Software Quality Assurance and
Software Quality Control.
(v)Test Management System
Sol: Test management most commonly refers to the activity of managing the computer
software testing process. A test management tool is software used to manage tests
(automated or manual) that have been previously specified by a test procedure. It is

often associated with automation software. Test management tools often include
requirement and/or specification management modules that allow automatic
generation of the requirement test matrix (RTM), which is one of the main metrics to
indicate functional coverage of a system under test (SUT).
Test definition includes: test plan, association with product requirements and
specifications. Eventually, some relationship can be set between tests so that
precedences can be established. E.g. if test A is parent of test B and if test A is failing,
then it may be useless to perform test B. Tests should also be associated with priorities.
Every change on a test must be versioned so that the QA team has a comprehensive
view of the history of the test.
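A hypothetical sketch of what such a requirement test matrix (RTM) boils down to (the
requirement and test-case identifiers are invented for the example): each requirement is
mapped to the test cases covering it, so uncovered requirements stand out immediately.

requirements = ["REQ-01 Login", "REQ-02 Logout", "REQ-03 Password reset"]
test_cases = {
    "TC-101": ["REQ-01 Login"],
    "TC-102": ["REQ-01 Login", "REQ-02 Logout"],
}

# Build the matrix: requirement -> list of test cases that cover it
rtm = {req: [tc for tc, covered in test_cases.items() if req in covered]
       for req in requirements}

for req, tcs in rtm.items():
    print(f"{req:<25} -> {', '.join(tcs) if tcs else 'NOT COVERED'}")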
(vi) Test Automation
Sol: Developing software to test software is called test automation.
What to automate:
Certain types of testing are difficult to perform without automation (stress, reliability,
scalability, performance)
Certain types of testing are repeated in nature (regression)
Testing for standards (508, server-client, protocols, state transitions)
A good amount of testing is repetitive in the product development scenario (A good
product has a lifetime of 10 years with regular / periodic enhancements)
Automate only if test cases need to be executed at least 10 times in the near future and if
the effort for automation doesn't exceed 15 times the effort of executing those test cases (a
conservative estimate)
Automate those areas where requirements go through the least amount of changes
Normally, change in requirements causes scenarios & implementation to be impacted,
not the basic functionality

Automate areas that motivate use of standard methodology, test tools (not
implementation specifics)
Automate areas for which management commitment exists
User interfaces go through a good amount of change; automate non-GUI (e.g. command-line)
portions first and integrate them with GUI modules as pluggable modules, with an option to
execute the non-GUI portions independently
Automate areas of critical functions & where increasing coverage is needed

Classify the test cases into HIGH, MED, LOW based on customer expectation and
automate HIGH criticality test cases first
Automate areas where permutations & combinations are high - this gives a good return on
investment with less code
Automate areas for which static decision tables exist - dynamic results are difficult to
automate and take more effort
Automate dependent & core test cases this helps in getting the critical defects early

Automate the areas that can benefit more than one product and for the framework
requirements
Automate those test cases that are easy and take less effort - this satisfies those people who
look for immediate returns

Automate the areas that will be required by the industry - 40% of the test tools available in
the market were internal tools before
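As an illustration of automating a repetitive check (a hedged sketch assuming the pytest
library is available; the function and data are invented for the example), parametrization lets
the same test logic run over many data combinations, which is exactly the kind of repeated
regression execution that justifies automation:

import pytest

def apply_gst(amount, rate=0.18):
    """Illustrative function under test."""
    return round(amount * (1 + rate), 2)

@pytest.mark.parametrize("amount, expected", [
    (100.0, 118.0),
    (0.0, 0.0),
    (999.99, 1179.99),
])
def test_apply_gst(amount, expected):
    assert apply_gst(amount) == expected

# Run with: pytest test_gst.py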
(vii)Software Maintenance

Sol: Software maintenance in software engineering is the modification of a software
product after delivery to correct faults or to improve performance or other attributes. A
common perception of maintenance is that it merely involves fixing defects. However, one
study indicated that over 80% of maintenance effort is used for non-corrective actions. This
perception is perpetuated by users submitting problem reports that in reality are
functionality enhancements to the system. More recent studies put the bug-fixing proportion
closer to 21%.
The key software maintenance issues are both managerial and technical. Key
management issues are: alignment with customer priorities, staffing, which
organization does maintenance, estimating costs. Key technical issues are: limited
understanding, impact analysis, testing, maintainability measurement.
Software maintenance is a very broad activity that includes error correction,
enhancements of capabilities, deletion of obsolete capabilities, and optimization.
Because change is inevitable, mechanisms must be developed for evaluation,
controlling and making modifications.
So any work done to change the software after it is in operation is considered to be
maintenance work. The purpose is to preserve the value of software over the time. The
value can be enhanced by expanding the customer base, meeting additional
requirements, becoming easier to use, more efficient and employing newer technology.
Maintenance may span 20 years, whereas development may take only 1-2 years.
(viii)Defect Management
Sol: Software defects are expensive. Moreover, the cost of finding and correcting
defects represents one of the most expensive software development activities. For the
foreseeable future, it will not be possible to eliminate defects. While defects may be
inevitable, we can minimize their number and impact on our projects. To do this
development teams need to implement a defect management process that focuses on
preventing defects, catching defects as early in the process as possible, and minimizing
the impact of defects. A little investment in this process can yield significant returns.
Mosaic, Inc. served as the project manager for a Quality Assurance Institute (QAI)
research project on defect management. The purpose of the research project was to
develop guidance for software managers in the area of defect management. The
results of the project were published in the QAI research report number 8, Establishing
A Software Defect Management Process. While working on the project, Mosaic, Inc.
developed a framework for the defect management process in the form of a defect
management model.
This defect management model is not intended to be a standard, but rather a starting
point for the development of a customized defect management process within an
organization. Companies using the model can reduce defects and their impacts during
their software development projects. This note summarizes the results of the research
project and introduces the defect management model.
The defect management process is based on the following general principles:

The primary goal is to prevent defects. Where this is not possible or practical, the
goals are to both find the defect as quickly as possible and minimize the impact of the
defect.
The defect management process should be risk driven -- i.e., strategies, priorities,
and resources should be based on the extent to which risk can be reduced.
Defect measurement should be integrated into the software development process
and be used by the project team to improve the process. In other words, the project
staff, by doing their job, should capture information on defects at the source. It should
not be done after-the-fact by people unrelated to the project or system
As much as possible, the capture and analysis of the information should be
automated.
Defect information should be used to improve the process. This, in fact, is the
primary reason for gathering defect information.
Most defects are caused by imperfect or flawed processes. Thus to prevent
defects, the process must be altered.
