
Manual Testing Documentation

By: Manjunath Rao

Table of Contents
● Software Development lifecycle
● Types of software Development Life Cycle
● What is testing
● Seven Principles of testing
● Types of Testing
● Types and Levels of Black Box Testing
● What is a Test case and how to write it?
○ Test case header
○ Test case body
○ Test case footer
○ Basic example of Test case
● Testing Design Techniques
○ Equivalence Partition(EP)
○ Boundary Value Analysis(BVA)
○ Error Guessing
○ Decision Table testing
○ State Transition testing
● Bug life Cycle
○ Defect Tracking Tool
○ Severity
○ Priority
● Regression Testing
● Globalization and Usability Testing
● RTM
○ How we use it in our Organization (U.C Tracker)
Software Development Life Cycle (SDLC):

It is a step-by-step procedure to develop software. It should result in a high-quality system that meets customer expectations, reaches completion within time and cost estimates, works effectively and efficiently, and is inexpensive to maintain and cost-effective to enhance.
SDLC consists of various phases
● Requirements
● Feasibility Study
● Design
● Coding
● Testing
● Deployment/Rollout
● Maintenance

Requirements:
In this phase we collect the complete business needs from the customers in the form of requirements (in our organization we call it the PRD).

Feasibility Study:
In this phase a set of people (Dev and QA managers) analyzes whether the project is feasible based on the requirements (here they mainly discuss cost, time, technical aspects, etc.).

Design:
In this phase we create a blueprint of the application in the form of a High Level Document (HLD), which is then converted into a Low Level Document (LLD). (In our organization we call it the Design Document.)

Coding:
In this phase, once the design is ready, a set of developers starts writing the code. That is, the developers develop the application according to the requirements.

Testing:
In this phase, once the developers finish developing the application or a part of it, the test engineers start testing the application.
Deployment/Rollout:
In this phase, once the testing is done, all the bugs are fixed and the application is stable, we install the application at the customer's place. This is called deployment or rollout.

Maintenance:
In this phase, once the customer starts using the application, some issues may arise or some change requests may come in the form of complaints. Those issues should be solved and the fixes delivered back to the customers.

Types of software Development Life Cycle:


● Waterfall model
● Spiral Model
● V – Model / V & V Model (Verification and Validation Model )
● Prototype Development Model
● Hybrid Model

1.Waterfall Model:
It is a traditional model and a sequential design process, often used in SDLC, in which progress is seen as flowing steadily downwards (like a waterfall) through the different phases, as shown in the figure.
Advantages of waterfall model: Requirements do not change, nor do design and code, so we get a stable product.

Drawbacks of Waterfall Model :

In the waterfall model, backtracking is not possible, i.e., we cannot go back and change requirements once the design stage is reached. Thus the requirements are frozen once the design of the software product is started. A change in requirements leads to a change in design, so bugs enter the design, which leads to a change in code, which results in more bugs.
Applications of waterfall model :
Used in developing a simple application for short-term projects whenever we are sure that the requirements will not change.
EX: the waterfall model can be used in developing a simple calculator, as the functions of addition, subtraction, etc. and the numbers will not change for a long time.

2.Spiral Model:

In the Spiral model, the software product is developed in small modules. Let us consider the figure shown below in developing a s/w product X. X is built by integrating A, B, C and D (A: proof of concept, B: first build, C: second build, D: final build).
Module A – the requirements of the module are collected first and then the module is designed. The coding of module A is done, after which it is tested for defects and bugs.
Module B – once module A has been built, we start the same process for module B. But while testing module B, we test for 3 conditions: a) test module B, b) test the integration of module B with A, c) retest module A.
Module C – after building modules A and B, we start the same process for module C. Here we test for the following conditions: 1) test modules C, B and A, 2) test the integration of C and B, C and A, and A and B.
And thus the cycle continues for the different modules. In the above example, module B can be built only after module A has been built correctly, and similarly for module C.
Advantages of Spiral Model :
1.Requirement changes are allowed.
2. After we develop one feature / module of the product, then and only then we can go
on to develop the next module of the product.

Drawbacks of Spiral Model :

It is a traditional model, and thus the developers did the testing job as well.

Applications of Spiral Model:


1. Whenever there is dependency in building the different modules of the software, we use the Spiral model.
2. Whenever the customer gives the requirements in stages, we develop the product in stages.
V – MODEL / V & V MODEL (Verification and Validation Model ):

This model came up in order to overcome the drawback of the waterfall model – here testing starts from the requirement stage itself.
The V & V model is shown in the figure on the next page.
1. In the first stage, the client sends the CRS (Customer Requirement Specification) to both developers and testers. The developers translate the CRS into the SRS (Software Requirement Specification).
The testers do the following tests on the CRS:
1. Review the CRS for
a. conflicts in the requirements
b. missing requirements
c. wrong requirements
2. Write the Acceptance Test plan
3. Write the Acceptance Test cases
The testing team reviews the CRS, identifies mistakes and defects, and sends them to the development team for correction. The development team updates the CRS and continues developing the SRS simultaneously.

2. In the next stage, the SRS is sent to the testing team for review and the developers start building the HLD of the product. The testers do the following tests on the SRS:
1. Review the SRS against the CRS
a. whether every CRS requirement has been converted into the SRS
b. whether any CRS requirement has not been converted properly into the SRS
2. Write the System Test plan
3. Write the System Test cases
The testing team reviews every detail of the SRS to check whether the CRS has been converted properly into the SRS.

3. In the next stage, the developers start building the LLD of the product. The testers do
the following tests on HLD,
1. Review HLD
2. Write Integration test plan
3. Write Integration test case

4. In the next stage, the developers start with the coding of the product. The testing
team carries out the following tasks,
1. Review LLD
2. Write Functional test plan
3. Write Functional Test case
After coding, the developers themselves carry out unit testing, also known as white box testing. Here the developers check each and every line of code and verify whether the code is correct. After white-box testing, the s/w product is sent to the testing team, which carries out functional testing, integration testing, system testing and acceptance testing, and finally the product is delivered to the client.
Advantages of V&V model :
1) Testing starts in very early stages of product development which avoids downward
flow of defects which in turn reduces lot of rework
2) Testing is involved in every stage of product development
3) Deliverables are parallel/simultaneous – as developers are building SRS, testers are
testing CRS and also writing ATP and ATC and so on. Thus as the developers give the
finished product to testing team, the testing team is ready with all the test plans and test
cases and thus the project is completed fast.
4) Total investment is less – as there is no downward flow of defects, there is little or no re-work

Drawbacks of V&V model :


1) Initial investment is more – because right from the beginning testing team is needed
2) More documentation work – because of the test plans and test cases and all other
documents

Applications of V&V model :


We go for V&V model in the following cases,
1) for long term projects
2) for complex applications
3) when customer is expecting a very high quality product within stipulated time frame
because every stage is tested and developers & testing team are working in parallel

Prototype Development Model :

The requirements are collected from the client in a textual format. Then the prototype of the s/w product is developed. The prototype is just an image/picture of the required s/w product. The customer can look at the prototype, and if he is not satisfied, he can request more changes in the requirements.
Prototype testing means the developers/testers check whether all the components mentioned exist.
The difference b/w prototype testing and actual testing – in prototype testing, we check whether all the components exist, whereas in actual testing, we check whether all the components work.
From “REQUIREMENT COLLECTION” to “CUSTOMER REVIEW”, the textual format has been converted to an image format; it is simply an extended requirement collection stage. The actual design starts from the “DESIGN” stage.
Prototype development was earlier done by developers, but now it is done by web designers/content developers. They develop the prototype of the product using simple ready-made tools. The prototype is simply an image of the actual product to be developed.
Advantages of Prototype model :
1) In the beginning itself, we set the expectation of the client.
2) There is clear communication b/w development team and client as to the
requirements and the final outcome of the project
3) Major advantage is – customer gets the opportunity in the beginning itself to ask for
changes in requirements as it is easy to do requirement changes in prototype rather
than real applications. Thus costs are less and expectations are met.

Drawbacks of Prototype model :


1) There is delay in starting the real project
2) To improve the communication, there is an investment needed in building the
prototype.

Applications :
We use this model when,
1) Customer is new to the s/w
2) When developers are new to the domain
3) When customer is not clear about his own requirement

What is testing…? :
Software testing is the process of finding or identifying defects in an application or software. It verifies the functionality (behavior) of the application (s/w) against the requirements specification.
or
It is the execution of the s/w with the intention of finding defects. It checks whether the application (s/w) works according to the requirements.

Seven principles of testing :

The seven principles are derived from the observation of testing of different applications over the last 40 years.
Not all of these principles are noticed or used in every kind of application or project.

1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence-of-errors fallacy

Principle 1 – Testing shows presence of defects:


Testing can show that defects are present, but cannot prove that there are no
defects. Testing reduces the probability of undiscovered defects remaining in the
software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible:


Testing everything (all combinations of inputs and preconditions) is not feasible
except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should
be used to focus testing efforts.

Principle 3 – Early testing:


To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering:


Testing effort shall be focused proportionally to the expected and later observed
defect density of modules. A small number of modules usually contains most of the
defects discovered during pre­release testing, or is responsible for most of the
operational failures.

Principle 5 – Pesticide paradox:


If the same tests are repeated over and over again, eventually the same set of
test cases will no longer find any new defects. To overcome this “pesticide paradox”,
test cases need to be regularly reviewed and revised, and new and different tests need
to be written to exercise different parts of the software or system to find potentially more
defects.

Principle 6 – Testing is context dependent:


Testing is done differently in different contexts. For example, safety­critical
software is tested differently from an e­commerce site.

Principle 7 – Absence-of-errors fallacy:


Finding and fixing defects does not help if the system built is unusable and
doesn’t fulfill the user's needs and expectations.

Types of Testing :

There are 3 types of s/w testing, namely


1. White box testing/unit testing/structural testing/glass box testing/transparent testing/open-box testing
2. Grey box testing
3. Black box testing

White Box Testing (WBT) :


The entire WBT is done by developers. It is the testing of each and every line of code in the program. The developers do WBT and send the s/w to the testing team. The testing team does black box testing, checks the s/w against the requirements, finds any defects and sends it back to the developers. The developers fix the defects, do WBT again and send it back to the testing team. Fixing a defect means the defect is removed and the feature is working fine.

Grey box testing :


It is a mixture of both white box as well as black box testing and it is generally
done by the test engineer who has knowledge of both coding and testing.

Black box testing :


It is a type of testing done by the test engineers, where they check whether the application (s/w) is working according to the requirement specification.
Now we mainly focus on and look into the details of black box testing, and within that, the types and levels of black box testing.

Types and Levels of Black Box testing :

Mainly testing is divided into two types:

1. Functional testing
2. Non-functional testing

Functional testing :
In functional testing we have many levels of testing, depending on the type of function and the flows to be tested.

1. Functional testing:
Also called component testing. Testing each and every component thoroughly (rigorously) against the requirement specifications is known as functional testing.

Example: the login page of Gmail – here we check each and every label, text field and checkbox.

2.Integration testing :
Testing the data flow or interface between two features or modules is known as integration testing.

Example: Compose a mail as user A and send it to user B; check the Sent box. Log in as user B and check whether the mail is received or not.
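The compose-mail example above can be sketched as a tiny integration test. The MailServer class here is a hypothetical in-memory stand-in, not a real mail API; it only exists so the data flow between the "compose" and "inbox" features can be checked:

```python
# Hypothetical in-memory mail system, used only to illustrate an
# integration test between two features (compose and inbox).
class MailServer:
    def __init__(self):
        self.inboxes = {}   # recipient -> list of mails
        self.sent = {}      # sender -> list of mails

    def send(self, sender, recipient, subject):
        mail = {"from": sender, "to": recipient, "subject": subject}
        self.sent.setdefault(sender, []).append(mail)
        self.inboxes.setdefault(recipient, []).append(mail)

def test_compose_and_receive():
    server = MailServer()
    server.send("userA", "userB", "hello")
    # Check 1: the mail appears in user A's Sent box.
    assert any(m["subject"] == "hello" for m in server.sent["userA"])
    # Check 2: the same mail flows through to user B's inbox.
    assert any(m["from"] == "userA" for m in server.inboxes["userB"])

test_compose_and_receive()
```

The point of the test is the data flow between the two features, not either feature in isolation.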

There are two types of integration testing



1. Incremental Integration Testing
● Top-down Integration Testing
● Bottom-up Integration Testing

2. Non-Incremental Integration Testing

3.System testing :
It is end-to-end testing wherein the testing environment is similar to the production environment.

Here, we navigate through all the features of the software and test whether the end business/end feature works. We just test the end feature and don't check the data flow or do functional testing there.

4.Smoke testing/Sanity testing/Dry run/Skim testing/Build Verification Testing:

Testing the basic or critical features of an application before doing thorough or rigorous testing is called smoke testing.
It is also called Build Verification Testing because we check whether the build is broken or not.
Why do we do smoke testing?
Whenever a new build comes in, we always start with smoke testing, because every new build might contain changes which have broken a major feature.
In smoke testing, we do only positive testing – i.e., we enter only valid data and not invalid data.
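A smoke suite of the kind described above can be sketched in a few lines. The App class and its three checks are invented for illustration; the idea is only that a handful of critical, positive-path checks gate the build before thorough testing starts:

```python
# Illustrative smoke-test sketch: a short list of critical,
# positive-path checks run against a hypothetical App object.
class App:
    def __init__(self):
        self.up = True
    def login(self, user, password):
        return bool(user and password)   # positive path only
    def home_page(self):
        return "200 OK"

SMOKE_CHECKS = [
    ("app is up",       lambda app: app.up),
    ("login works",     lambda app: app.login("demo", "demo123")),
    ("home page loads", lambda app: app.home_page() == "200 OK"),
]

def run_smoke(app):
    # Any failure here means the build is "broken": reject it
    # instead of starting thorough testing.
    return [name for name, check in SMOKE_CHECKS if not check(app)]

assert run_smoke(App()) == []   # empty list: build accepted
```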

5.User Acceptance Testing :

It is done by the end user. Here they use the s/w for business for a particular period of time and check whether the s/w can handle all kinds of real-time business scenarios/situations.

6.Ad-Hoc Testing :
Testing the application randomly is called Ad-hoc testing/Monkey testing/Gorilla testing.
Why do we do Ad-hoc testing?
● End users use the application randomly and may see a defect, but a professional TE/QA uses the application systematically, so he may not find the same defect. In order to avoid this scenario, the TE/QA should also test the application randomly (i.e., behave like an end user and test).
● Ad-hoc is testing where we don't follow the requirements (we just randomly check the application). Since we don't follow the requirements, we don't write test cases.

Non Functional testing:

Testing of a software application or system for its non-functional requirements: the way a system operates, rather than the specific behaviours of that system. This is in contrast to functional testing, which tests against the functional requirements that describe the functions of a system and its components. The names of many non-functional tests are often used interchangeably because of the overlap in scope between various non-functional requirements.
For example, software performance is a broad term that includes many specific requirements like reliability and scalability.

Performance testing :
This topic is covered in Ch:8 sub topic:8.2
What is a Test case :
A test case is a document which covers all possible scenarios to test all the features; it is a set of input parameters for which the s/w will be tested. The SRS requirements are numbered so that the developers and the testing team will not miss out on any feature.

Why do we write Test Cases?

● To have better test coverage – cover all possible scenarios and document them, so that we need not remember all the scenarios
● To have consistency in test case execution – seeing the test case and testing the product
● To avoid training every new engineer on the product – when an engineer leaves, he leaves with a lot of knowledge and scenarios. Those scenarios should be documented, so that a new engineer can test with the given scenarios and also write new scenarios.
● To depend on the process rather than on a person

How to Write Test Cases?

Header of Test Case :


We always fill in the body of the test case first, before filling up the header of the test case. The header contains the following data:
1. Test case name :
2. Requirement number :
3. Module name :
4. Pre­condition :
5. Test data :
6. Severity :
7. Test case type :
8. Brief description :

Body of Test Case:


Footer of a Test Case :
1. Author :
2. Reviewed by :
3. Approved by :
4. Approval date :

Basic example of Test case:


This is how the T.C looks:
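Since the example figure is not reproduced here, a minimal sketch of the layout can stand in for it. The field names follow the header/body/footer sections above; all the values are invented for a login example:

```python
# Illustrative test-case layout (header, body, footer); the values
# are invented, only the field names come from the sections above.
test_case = {
    "header": {
        "test_case_name": "TC_Login_001",
        "requirement_number": "REQ-12",
        "module_name": "Login",
        "pre_condition": "User is registered",
        "test_data": {"username": "demo@gmail.com", "password": "demo123"},
        "severity": "Critical",
        "test_case_type": "Functional",
        "brief_description": "Verify login with valid credentials",
    },
    "body": [  # (step number, action, expected result)
        (1, "Open the login page", "Login page is displayed"),
        (2, "Enter valid username and password", "Fields accept input"),
        (3, "Click Sign in", "User lands on the inbox page"),
    ],
    "footer": {
        "author": "QA engineer",
        "reviewed_by": "QA lead",
        "approved_by": "QA manager",
        "approval_date": "2016-01-01",
    },
}

# Every step in the body carries an expected result to compare against.
assert all(len(step) == 3 for step in test_case["body"])
```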

Testing Design Techniques:


To design the test cases we need to follow the below conventions so that we may have
better coverage

1. Equivalence Partition(EP)
2. Boundary Value Analysis(BVA)
3. Error Guessing
4. Decision Table testing
5. State Transition testing
Equivalence Partition(EP) :
In this we have two types
1. Pressman Method
2. Practice Method

Pressman Method:
If the input is a range of values, then design the test cases for 1 valid and 2 invalid values.
Ex: the Amount text field accepts a range of values:
500 – valid, 90 – invalid, 6000 – invalid

If the input is a set of values, then design the test cases for 1 valid and 2 invalid values.

If the input is Boolean, then design the test cases for both true and false values.
Ex – checkboxes, radio buttons, etc.
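The range-of-values case above can be sketched as follows. The validator is a hypothetical stand-in for the Amount field, using the 100–5000 range and the 500/90/6000 values from the text:

```python
# Pressman method sketch for a range input: the Amount field
# accepts 100-5000, so we pick 1 valid and 2 invalid values.
def is_valid_amount(amount, low=100, high=5000):
    return low <= amount <= high

valid, invalid_low, invalid_high = 500, 90, 6000   # values from the text
assert is_valid_amount(valid)          # 500 lies inside the range
assert not is_valid_amount(invalid_low)    # 90 is below the range
assert not is_valid_amount(invalid_high)   # 6000 is above the range
```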

Practice Method:

Testing the application by deriving the below values,

90 100 1000 2000 3000 4000 5000 6000


Let's see a program. Understand the logic and analyse why we use the Practice method:

If (amount < 100 or amount > 5000)
{
Error message
}
Else if (amount between 100 & 2000)
{
Deduct 2%
}
Else
{
Deduct 3%
}

When the Pressman technique is used, only the first 2 branches are tested, but if the Practice method is used, all of these branches are covered.
It is not necessary that the Practice methodology be used for all applications; sometimes Pressman alone is fine.
But if the application has any deviation, splits or precision – then we go for the Practice
method.
If Practice methodology has to be used, it should be
a) Case specific
b) Product specific
c) Number of divisions depends on the precision (2% or 3% deduction)
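The pseudocode above, exercised with the Practice-method values 90, 100, 1000, ..., 6000, can be sketched as follows. The 2%/3% deduction function is a stand-in for the real logic:

```python
# Deduction logic from the pseudocode above; the Practice-method
# values cover every branch, including the 2% vs 3% split at 2000.
def process(amount):
    if amount < 100 or amount > 5000:
        return "error"
    if amount <= 2000:
        return amount * 0.98   # deduct 2%
    return amount * 0.97       # deduct 3%

practice_values = [90, 100, 1000, 2000, 3000, 4000, 5000, 6000]
results = {v: process(v) for v in practice_values}

assert results[90] == "error" and results[6000] == "error"  # out of range
assert results[1000] == 980.0    # 2% branch
assert results[3000] == 2910.0   # 3% branch
```

Pressman's three values (500, 90, 6000) would never reach the 3% branch; the Practice values do, which is why they are preferred when the logic has splits.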

BVA – Boundary Value Analysis


If the input is a range of values between A and B, then design test cases for A, A+1, A-1 and B, B+1, B-1.

Thus, a number of bugs can be found when applying BVA, because developers tend to commit mistakes in this area when writing code.

If ( Amount < = 100 )


{
Throw error
}
If ( Amount > = 5000 )
{
…..
}

If ‘equals’ (<=) is written by mistake, then even the boundary value 100 throws an error – testing the exact boundary exposes such bugs.

When comparing Equivalence Partitioning and BVA, the testing values are repeated. If that is the case, we can skip Equivalence Partitioning and perform only BVA, as it covers all the values.
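Deriving the six BVA values for a range A–B can be sketched as:

```python
# Boundary Value Analysis: for a range A-B, test A-1, A, A+1
# and B-1, B, B+1.
def bva_values(a, b):
    return [a - 1, a, a + 1, b - 1, b, b + 1]

assert bva_values(100, 5000) == [99, 100, 101, 4999, 5000, 5001]

def is_valid_amount(amount):
    return 100 <= amount <= 5000

# Each edge splits cleanly into an invalid and a valid side.
expected = [False, True, True, True, True, False]
assert [is_valid_amount(v) for v in bva_values(100, 5000)] == expected
```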

References:

http://www.tutorialspoint.com/software_testing_dictionary/test_case_design_technique.htm
Bug Life Cycle:
The following image explains the entire bug life cycle.

Defect :
If a feature is not working according to the requirement, it is called a defect.
or
Deviation from the requirement specification is called a defect.

Different statuses of a bug:

Request for Enhancement
Not Reproducible
Postponed or Fixed in the next release
Cannot be fixed
Duplicate bug
Rejected bug

Severity of a Bug
Severity is the impact of the bug on the customer's business.
Critical: A major issue where a large piece of functionality or a major system component is completely broken. There is no workaround and testing cannot continue.

Major: A major issue where a large piece of functionality or a major system component is not working properly. There is a workaround, however, and testing can continue.

Minor: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.
Priority of a Bug
It is the importance of fixing the bug (OR) how soon the defect should be fixed (OR) which defects should be fixed first.
High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred to the next release.

Blocker Defect
There are 2 types in blocker defect,
Major flow is not working – e.g., login or signup itself is not working in the CitiBank application.

Major feature is not working – e.g., login to CitiBank works, but Amount Transfer is not working.

Defect Tracking Tool:


How it looks while reporting a bug
How we use filters while searching for a bug

Regression Testing:

The process of re-executing the previous features or re-executing the previous test cases, due to code changes across multiple releases or builds, to make sure that the dependent modules are not affected or broken.

Based on changes, we should do different types of regression testing,


1.Unit Regression Testing
2.Regional Regression Testing
3.Full Regression Testing
Software Test Life Cycle:

The below image explains the different phases of software test life cycle.
Globalization Testing :
Developing the application for multiple languages is called globalization, and testing an application which is developed for multiple languages is called globalization testing.

There are 2 types of globalization testing.


1.Internationalization Testing ( I18N testing ):
Mainly we check whether the content is in the right language and whether the right content is in the right place.
Ex: Checking whether the correct language and content are displayed accordingly or not.

2.Localization Testing ( L10N testing )

Format testing is nothing but localization testing – here the testing is done for format specifications according to the region/country.

Ex: Checking whether the correct time and date, phone numbers and country zip codes are displayed accordingly or not.
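A date-format check of this kind can be sketched as follows; the per-region formats used here are illustrative assumptions, not a complete locale table:

```python
from datetime import date

# Localization (L10N) sketch: expected date format per region.
# The formats are assumptions chosen for illustration.
REGION_DATE_FORMAT = {
    "US": "%m/%d/%Y",   # e.g. 12/31/2016
    "UK": "%d/%m/%Y",   # e.g. 31/12/2016
    "DE": "%d.%m.%Y",   # e.g. 31.12.2016
}

def format_date(d, region):
    return d.strftime(REGION_DATE_FORMAT[region])

d = date(2016, 12, 31)
# The same date must render in each region's expected format.
assert format_date(d, "US") == "12/31/2016"
assert format_date(d, "UK") == "31/12/2016"
assert format_date(d, "DE") == "31.12.2016"
```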

Compatibility Testing (CT) or Usability Testing:


Testing the functionality of an application in different software and hardware environments is called compatibility testing.

The various compatibility bugs are:

● Scattered content
● Alignment issues
● Broken frames
● Change in look and feel of the application
● Object overlapping
● Change in font size, style and color

Requirement Traceability Matrix (RTM)

It is a document which ensures that every requirement has a test case.


Test cases are written by looking at the requirements, and testing is executed by looking at the test cases. If any requirement is missed, i.e., test cases are not written for a particular requirement, then that particular feature is not tested and may contain bugs. Just to ensure that all the requirements are converted into test cases, the traceability matrix is written.
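An RTM can be sketched as a simple mapping from requirements to test cases; all the IDs here are invented for illustration:

```python
# Requirement Traceability Matrix sketch: requirement ID -> test-case IDs.
rtm = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],               # no test case yet -> untested feature
}

def untested_requirements(matrix):
    # Requirements with no test case are exactly the gaps the RTM exists to expose.
    return [req for req, tcs in matrix.items() if not tcs]

assert untested_requirements(rtm) == ["REQ-3"]
```

Reviewing this list before release ensures no requirement ships untested.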

How we use it in our Organization (U.C Tracker)


This is shown below,

Questions for Assessment

1. An input field takes the year of birth between 1900 and 2004. The boundary
values for testing this field are:
A. 0,1900,2004,2005
B. 1900, 2004
C. 1899,1900,2004,2005
D. 1899, 1900, 1901,2003,2004,2005

Answer : C
2. Boundary value testing..?
A. Is the same as equivalence partitioning tests
B. Test boundary conditions on, below and above the edges of input and output
equivalence classes
C. Tests combinations of input circumstances
D. Is used in white box testing strategy

Answer : B

3. Pick the best definition of quality


A. Quality is job one
B. Zero defects
C. Conformance to requirements
D. Work as designed

Answer : C

4. One Key reason why developers have difficulty testing their own work is :
A. Lack of technical documentation
B. Lack of test tools on the market for developers
C. Lack of training
D. Lack of Objectivity

Answer : D

5. During the software development process, at what point can the test process
start?
A. When the code is complete.
B. When the design is complete.
C. When the software requirements have been approved.
D. When the first code module is ready for unit testing

Answer : C

6.How much testing is enough?


A.This question is impossible to answer
B. This question is easy to answer
C.The answer depends on the risk for your industry, contract and special
requirements
D. This answer depends on the maturity of your developers

Answer : C

7. Repeated Testing of an already tested program, after modification, to discover


any defects introduced or uncovered as a result of the changes in the software
being tested or in another related or unrelated software component:
A. ReTesting
B. Confirmation Testing
C. Regression Testing
D. Negative Testing

Answer : C

8.Test Conditions are derived from


A. Specifications
B. Test Cases
C. Test Data
D. Test Design

Answer : A

9.Which of the following will be the best definition for Testing :


A. The goal / purpose of testing is to demonstrate that the program works.
B. The purpose of testing is to demonstrate that the program is defect free
C. The purpose of testing is to demonstrate that the program does what it is
supposed to do
D. Testing is executing Software for the purpose of finding defects

Answer : D

10. In case of Large Systems


A. Only few tests should be run
B. Testing should be on the basis of Risk
C. Only Good Test Cases should be executed
D. Test Cases written by good test engineers should be executed

Answer : B
11. A deviation from the specified or expected behavior that is visible to
end-users is called:
A. An error
B. A fault
C. A failure
D. A defect

Answer : C

12. Regression testing should be performed:


v) Every week
w) After the software has changed
x) As often as possible
y) When the environment has changed
z) When the project manager says

A. v & w are true, x, y & z are false


B. w, x & y are true, v & z are false
C. w & y are true, v, x & z are false
D. w is true, v, x, y & z are false

Answer : C

13. When should testing be stopped?


A. When all the planned tests have been run
B. When time has run out
C. When all faults have been fixed correctly
D. It depends on the risks for the system being tested

Answer : D

14. Which of the following is the main purpose of the integration strategy for
integration testing in the small?
A. To ensure that all of the small modules are tested adequately
B. To ensure that the system interfaces to other systems and networks
C. To specify which modules to combine when, and how many at once
D. To specify how the software should be divided into modules

Answer : C

15. Consider the following statements:


i. An incident may be closed without being fixed.
ii. Incidents may not be raised against documentation.
iii. The final stage of incident tracking is fixing.
iv. The incident record does not include information on test environments.

A. ii is true, i, iii and iv are false


B. i is true, ii, iii and iv are false
C. i and iv are true, ii and iii are false
D. i and ii are true, iii and iv are false

Answer : B
