Software Testing Framework
Harinath V Pudipeddi
[email protected]
http://www.sqae.com
The next version of this framework will include Test Estimation Procedures and more Metrics.
Through experience they determined that there should be 30 defects per 1000 lines of code. If testing does not uncover 30 defects, a logical conclusion is that the test process was not effective.
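For example (illustrative figures only): under this heuristic, a 10,000-line module would be expected to yield around 300 defects; if testing surfaces far fewer, the test process itself deserves scrutiny.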
1.0 Introduction
Testing plays an important role in today's System Development Life Cycle. During Testing, we follow a systematic procedure to uncover defects at various stages of the life cycle.
This framework is aimed at providing the reader with the various Test Types, Test Phases, Test Models and Test Metrics, and at guiding how to perform effective Testing in the project.
All the definitions and standards mentioned in this framework are existing ones. I have not altered any definitions, but wherever possible I have tried to explain them in simple words. The framework, approach and suggestions are drawn from my own experience. My intention with this framework is to help Test Engineers understand the concepts of testing and its various techniques, and to apply them effectively in their daily work. This framework is not for publication or for monetary distribution.
If you have any queries, suggestions for improvements or any points found missing, kindly write back to me.
Let us look at the traditional Software Development life cycle. The figure below depicts the same.
(Fig A and Fig B: two waterfall life cycles. Fig A: Requirements, Design, Code, Test, Maintenance. Fig B: Requirements, Design, Test, Code, Maintenance.)
In the above diagram (Fig A), the Testing phase comes after the Coding is complete and before the product is launched and goes into maintenance.
Throughout the entire lifecycle, neither development nor verification is a straight-line activity. Modifications or corrections to a structure at one phase will require modifications or re-verification of structures produced during previous phases.
The Verification Strategies, the persons/teams involved in the testing, and the deliverables of each phase of testing are described below:
2.1.1 Reviews
The focus of a Review is on a work product (e.g. Requirements document, Code, etc.). After the work product is developed, the Project Leader calls for a Review. The work product is distributed to the personnel involved in the review. The main audience for the review should be the Project Manager, the Project Leader and the Producer of the work product.
Let us discuss the above-mentioned reviews in brief. Statistics suggest that Reviews uncover over 65% of the defects, while testing uncovers around 30%. It is therefore very important to retain reviews as part of the V&V strategies.
• The Critical Design Review baselines the detailed design specification. Test cases are reviewed and approved.
Peer Review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.
2.1.2 Inspections
A knowledgeable individual called a moderator, who is neither a member of the team nor the author of the product under review, facilitates inspections. A recorder, who records the defects found and the actions assigned, assists the moderator. The meeting is planned in advance, material is distributed to all the participants, and the participants are expected to attend the meeting well prepared. The issues raised during the meeting are documented and circulated among the members present and the management.
2.1.3 Walkthroughs
The author of the material being reviewed facilitates the walkthrough. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. In either case, the issues raised are captured and published in a report distributed to the participants. Possible solutions for uncovered defects are not discussed during the review.
The Validation Strategies, the persons/teams involved in the testing, and the deliverables of each phase of testing are described below:
Before the Project Management decides on the testing activities to be performed, it should have decided on the test type it is going to follow. If it is Black Box, the test cases should be written to address the functionality of the application. If it is White Box, the test cases should be written for the internal as well as the functional behavior of the system.
Functional testing ensures that the requirements are properly satisfied by the application system. The functions are those tasks that the system is designed to accomplish.
Using white box testing methods, we can derive test cases that (see the sketch after this list):
1) Guarantee that all independent paths within a module have been exercised at least once,
2) Exercise all logical decisions on their true and false sides,
3) Execute all loops at their boundaries and within their operational bounds, and
4) Exercise internal data structures to ensure their validity.
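As a minimal sketch of points 2 and 3, consider the following Python unit test. The function under test, total_of_positives, is hypothetical and exists only for illustration; the tests exercise the decision on both its true and false sides, and drive the loop at zero, one and many iterations:

    import unittest

    def total_of_positives(values):
        # Hypothetical unit under test: one loop, one decision.
        total = 0
        for v in values:          # loop under test
            if v > 0:             # decision under test
                total += v
        return total

    class WhiteBoxTests(unittest.TestCase):
        def test_decision_true_side(self):
            self.assertEqual(total_of_positives([5]), 5)       # v > 0 is True

        def test_decision_false_side(self):
            self.assertEqual(total_of_positives([-5]), 0)      # v > 0 is False

        def test_loop_boundaries(self):
            self.assertEqual(total_of_positives([]), 0)        # zero iterations
            self.assertEqual(total_of_positives([1]), 1)       # exactly one iteration
            self.assertEqual(total_of_positives(list(range(-3, 4))), 6)  # many iterations

    if __name__ == "__main__":
        unittest.main()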
What Scenario Based Testing is, and how and where it is useful, is an interesting question. I shall explain the two points mentioned above in brief.
Scenario Based Testing is categorized under Black Box Tests and is most helpful when the testing is concentrated on the business logic and functional behavior of the application. Adopting SBT is effective when testing complex applications. Not every application is complex, though, so it is the team's call whether to implement SBT or not. I would personally suggest using SBT when the functionality to test spans various features and functions. A good example would be testing a banking application. As banking applications require the utmost care while testing, handling various functions in a single scenario produces effective results.
A sample transaction (scenario) can be: a customer logging into the application, checking his balance, transferring an amount to another account, paying his bills, checking his balance again and logging out.
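Such a scenario can be scripted end to end. Below is a minimal sketch, assuming a hypothetical BankSession class that stands in for the application under test; the point is that a single test walks through several functions in sequence and checks the state after each step:

    import unittest

    class BankSession:
        """Hypothetical stand-in for the banking application under test."""
        def __init__(self, balance):
            self.balance = balance
            self.logged_in = False
        def login(self):
            self.logged_in = True
        def transfer(self, amount):
            self.balance -= amount
        def pay_bill(self, amount):
            self.balance -= amount
        def logout(self):
            self.logged_in = False

    class BankingScenarioTest(unittest.TestCase):
        def test_balance_transfer_and_bill_payment_scenario(self):
            session = BankSession(balance=1000)
            session.login()                            # customer logs in
            self.assertEqual(session.balance, 1000)    # checks his balance
            session.transfer(400)                      # transfers amount to another account
            session.pay_bill(100)                      # pays his bills
            self.assertEqual(session.balance, 500)     # checks his balance again
            session.logout()                           # logs out
            self.assertFalse(session.logged_in)

    if __name__ == "__main__":
        unittest.main()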
Exploratory testing is ‘testing while exploring’. When you have no idea how the application works, exploring the application with the intent of finding errors can be termed Exploratory Testing.
(Figures: development artifacts flow from the Software Requirement Specification through the Functional Specification Document to Coding; Performance Test Cases and Scenarios are derived from the Functional Specification Document, the Performance Criteria and the Software Requirement Specification.)
The goal of Unit testing is to uncover defects using formal techniques like Boundary Value Analysis (BVA), Equivalence Partitioning, and Error Guessing. Defects and deviations in date formats, in special input conditions (for example, a text box where only numerics or alphabets should be entered), and in selections based on combo boxes, list boxes, option buttons and check boxes would be identified during the Unit Testing phase.
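For instance, Boundary Value Analysis for a numeric-only text box accepting values from 1 to 100 would pick inputs at and around the boundaries, while Equivalence Partitioning picks one representative per input class. A minimal sketch, assuming a hypothetical validator accepts_quantity:

    import unittest

    def accepts_quantity(text):
        # Hypothetical field validator: numeric only, range 1..100.
        if not text.isdigit():
            return False
        return 1 <= int(text) <= 100

    class BoundaryValueTests(unittest.TestCase):
        def test_boundary_values(self):
            # BVA: values at, just inside and just outside each boundary.
            self.assertFalse(accepts_quantity("0"))     # just below lower bound
            self.assertTrue(accepts_quantity("1"))      # lower bound
            self.assertTrue(accepts_quantity("2"))      # just above lower bound
            self.assertTrue(accepts_quantity("99"))     # just below upper bound
            self.assertTrue(accepts_quantity("100"))    # upper bound
            self.assertFalse(accepts_quantity("101"))   # just above upper bound

        def test_equivalence_classes(self):
            # Equivalence Partitioning: one representative per class.
            self.assertTrue(accepts_quantity("50"))     # valid class
            self.assertFalse(accepts_quantity("abc"))   # invalid class: alphabetic
            self.assertFalse(accepts_quantity("500"))   # invalid class: out of range

    if __name__ == "__main__":
        unittest.main()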
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design.
Usually, the following methods of Integration testing are followed:
1. Top-down Integration approach.
2. Bottom-up Integration approach.
A Bottom-up integration strategy may be implemented with the following steps (a driver sketch follows this list):
1. Low-level components are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
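Step 2, writing a driver, is worth a concrete illustration. A driver is throwaway code that feeds test inputs to a cluster and collects its outputs. The sketch below is hypothetical; the cluster functions parse_record and compute_tax are invented for the example:

    # Hypothetical low-level cluster under test: parsing plus calculation.
    def parse_record(line):
        name, amount = line.split(",")
        return name.strip(), float(amount)

    def compute_tax(amount):
        return round(amount * 0.1, 2)

    def driver():
        """Test driver: coordinates test case input and output for the cluster."""
        test_inputs = ["alice, 100.0", "bob, 250.0"]
        for line in test_inputs:
            name, amount = parse_record(line)   # exercise the interface between units
            print(name, compute_tax(amount))    # collect output for comparison

    if __name__ == "__main__":
        driver()   # once the cluster passes, the driver is removed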
System testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform their allocated functions.
The following tests can be categorized under System testing:
1. Recovery Testing.
2. Security Testing.
3. Stress Testing.
4. Performance Testing.
5.0 Metrics
Metrics are among the most important responsibilities of the Test Team. Metrics allow a deeper understanding of the performance of the application and its behavior, and fine tuning of the application can be guided only by metrics. In a typical QA process, there are many metrics which provide such information.
The following can be regarded as the fundamental metric:
IEEE Std 982.2-1988 defines a Functional or Test Coverage Metric. It can be used to measure test coverage prior to software delivery, and it provides a measure of the percentage of the software tested at any point during testing.
It is calculated as follows:
Function Test Coverage = FE / FT
where
FE is the number of test requirements that are covered by test cases that were executed against the software, and
FT is the total number of test requirements.
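For example (hypothetical figures): if 80 of 100 identified test requirements are covered by executed test cases, Function Test Coverage = 80/100 = 0.8, i.e. 80% of the software has been exercised so far.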
Reliability Metrics
Perry offers the following equation for calculating reliability.
Reliability = 1 - (number of errors (actual or predicted) / total number of lines of executable code)
This reliability value is calculated for the number of errors found during a specified time interval.
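For example (hypothetical figures): 30 errors found in 10,000 lines of executable code give Reliability = 1 - 30/10,000 = 0.997.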
Three other metrics can be calculated during extended testing or after the system is
in production. They are:
MTTFF (Mean Time To First Failure)
MTTFF = the number of time intervals the system is operable until its first failure

MTBF (Mean Time Between Failures)
MTBF = (sum of the time intervals the system is operable) / (number of failures for the time period)

MTTR (Mean Time To Repair)
MTTR = (sum of the time intervals required to repair the system) / (number of repairs during the time period)
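These calculations are straightforward to automate. A minimal sketch, with invented interval data for illustration:

    # Hypothetical observation data: hours of operation between failures,
    # and hours spent on each repair.
    uptime_intervals = [120, 80, 200]   # system operable, each ended by a failure
    repair_times = [4, 2, 6]            # time to repair after each failure

    mttff = uptime_intervals[0]                            # operable time until first failure
    mtbf = sum(uptime_intervals) / len(uptime_intervals)   # 400 / 3 = 133.3 hours
    mttr = sum(repair_times) / len(repair_times)           # 12 / 3 = 4 hours

    print(f"MTTFF={mttff}h  MTBF={mtbf:.1f}h  MTTR={mttr}h")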
(Figure: the ‘V’ model — Specification pairs with System Tests, Architecture with Integration Tests, Detailed Design with Unit Tests, and Coding sits at the base of the V.)
The diagram is self- explanatory. For an easy understanding, look at the following
table:
SDLC Phase           Test Phase
1. Requirements      1. Build Test Strategy.
                     2. Plan for Testing.
                     3. Acceptance Test Scenarios Identification.
2. Specification     1. System Test Case Generation.
3. Architecture      1. Integration Test Case Generation.
4. Detailed Design   1. Unit Test Case Generation.
(Figure: the ‘W’ model — each development phase is paired with a verification activity: Requirements with Requirements Review, Specification with Specification Review, Code with Code Walkthrough; these are followed by System Testing, Performance Testing and Regression Rounds.)
The ‘W’ model depicts that Testing starts from day one of the initiation of the project and continues till the end. The following table illustrates the phases of activities that happen in the ‘W’ model:
Regression Rounds are performed at regular intervals to re-test the defects which have been raised and fixed.
The testing activities for software products preferably follow the Butterfly Model. The following picture depicts this test methodology.
In the Butterfly model of Test Development, the left wing of the butterfly depicts the Test Analysis. The right wing depicts the Test Design, and finally the body of the butterfly depicts the Test Execution. How this happens exactly is described below.
Test Analysis
Test Design
The right wing of the butterfly represents the act of designing and implementing the test cases needed to verify the design artifact as replicated in the implementation. Like test analysis, it is a relatively large piece of work. Unlike test analysis, however, the focus of test design is not to assimilate information created by others, but rather to implement procedures, techniques, and data sets that achieve the test's objective(s).
The outputs of the test analysis phase are the foundation for test design. Each requirement or design construct has had at least one technique (a measurement, demonstration, or analysis) identified during test analysis that will validate or verify that requirement. The tester must now implement the intended technique.
Software test design, as a discipline, is an exercise in the prevention, detection, and elimination of bugs in software. Preventing bugs is the primary goal of software testing. Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premiere weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.
During Test Design, based on the Analysis Report, the test personnel would develop the following:
1. Test Plan.
2. Test Approach.
3. Test Case documents.
4. Performance Test Parameters.
5. Performance Test Plan.
Test Execution
During the Test Execution phase, keeping to the Project and Test schedules, the designed test cases would be executed. The following documents will be handled during the test execution phase:
1. Test Execution Reports.
2. Daily/Weekly/Monthly Defect Reports.
3. Person-wise defect reports.
After the Test Execution phase, the following documents would be signed off.
The defect tracking process has to be handled carefully and managed efficiently.
(Figure: defect tracking flow — the Tester/Developer finds the bug, and the concerned Developer is informed.)
Defect Classification
This section defines a defect Severity Scale framework for determining defect criticality and the associated defect Priority Levels to be assigned to errors found in software.
Classification   Description
Critical         There is a functionality block. The application is not able to proceed any further.
Major            The application is not working as desired. There are variations in the functionality.
Minor            There is no failure reported due to the defect, but it certainly needs to be rectified.
Cosmetic         Defects in the User Interface or Navigation.
Suggestion       A feature which can be added for betterment.
The priority level describes the time for resolution of the defect. The priority levels would be classified as follows:
Classification    Description
Immediate         Resolve the defect with immediate effect.
At the Earliest   Resolve the defect at the earliest, on priority at the second level.
Normal            Resolve the defect.
Later             Could be resolved at the later stages.
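In a defect tracking tool, these scales typically become fixed enumerations. Below is a minimal sketch of how the two tables above might be encoded; the names and record layout are illustrative, not tied to any particular tool:

    from enum import Enum

    class Severity(Enum):
        CRITICAL = "Functionality block; application cannot proceed"
        MAJOR = "Not working as desired; variations in functionality"
        MINOR = "No failure reported, but needs to be rectified"
        COSMETIC = "User Interface or Navigation defect"
        SUGGESTION = "Feature that can be added for betterment"

    class Priority(Enum):
        IMMEDIATE = "Resolve with immediate effect"
        AT_THE_EARLIEST = "Resolve on priority at the second level"
        NORMAL = "Resolve the defect"
        LATER = "Could be resolved at later stages"

    # Example: a newly logged defect record.
    defect = {"id": "D-001", "severity": Severity.MAJOR, "priority": Priority.AT_THE_EARLIEST}
    print(defect["severity"].name, "/", defect["priority"].name)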
In this section, I explain how to go about planning your testing activities effectively and efficiently. The process is explained in a tabular format giving the phase of testing, the activity and the person responsible.
For this, I assume that the project has been identified and that the testing team consists of five personnel: a Test Manager, a Test Lead, a Senior Test Engineer and two Test Engineers.
1. Test Strategy.
2. Test Plan.
3. Test Case Documents.
4. Defect Reports.
5. Status Reports (Daily/Weekly/Monthly).
6. Test Scripts (if any).
7. Metric Reports.
8. Product Sign off Document.