Unit - 3

STM material, JNTUK R19 Regulation, Unit-wise: Unit 3


Static Testing:

Features of Static Testing:
1. Static testing techniques do not demonstrate that the software is operational or that a function of the software is working.
2. They check the software product at each SDLC stage for conformance with the required specifications or standards. Requirements, design specifications, test plans, source code, user's manuals and maintenance procedures are some of the items that can be statically tested.
3. Static testing has proved to be a cost-effective technique of error detection.
4. Another advantage of static testing is that a bug is found at its exact location, whereas a bug found in dynamic testing provides no indication of the exact source code location.

Types of Static Testing
-->Software Inspections
-->Walkthroughs
-->Technical Reviews

Inspections:
-->Inspection process is an in-process manual examination of an item to detect bugs.
-->Inspection process is carried out by a group of peers. The group of peers first inspects the product at individual level. After this, they discuss potential defects of the product observed in a formal meeting.
-->It is a very formal process to verify a software product. The documents which can be inspected are SRS, SDD, code and test plan.
-->Inspection process involves the interaction of the following elements:
a) Inspection steps b) Roles for participants c) Item being inspected

Inspection Team:

-->Author / Owner / Producer: A programmer or designer responsible for producing the program or document.

-->Inspector: A peer member of the team, i.e. he is not a manager or supervisor. He is not directly related to the product under inspection and may be concerned with some other products.

-->Moderator: A team member who manages the whole inspection process. He schedules, leads, and controls the inspection session.

-->Recorder: One who records all the results of the inspection meeting.

Inspection Process:
Planning: During this phase the following is executed:
-->The product to be inspected is identified.
-->A moderator is assigned.
-->The objective of the inspection is stated, i.e. whether the inspection is to be conducted for defect detection or something else.

During planning, the moderator performs the following activities:
--Assures that the product is ready for inspection.
--Selects the inspection team and assigns their roles.
--Schedules the meeting venue and time.
--Distributes the inspection material like the item to be inspected, checklists, etc.
Overview: In this stage, the inspection team is provided with the background information for inspection. The author presents the rationale of the product, its relationship to the rest of the products being developed, its function and intended use, and the approach used to develop it.
Individual Preparation: After the overview, the reviewers individually prepare themselves for the inspection process by studying the documents provided to them in the overview session. They point out potential errors or problems found and record them in a log. This log is then submitted to the moderator. The moderator compiles the logs of different members and gives a copy of this compiled list to the author of the inspected item.

Inspection Meeting: Once all the initial preparation is complete, the actual inspection meeting can start. The inspection meeting starts with the author of the inspected item. The author first discusses every issue raised by different members in the compiled log file. After the discussion, all the members arrive at a consensus on whether the issues pointed out are in fact errors and, if they are errors, whether they should be admitted by the author.

Rework: The summary list of the bugs that arise during the inspection meeting needs to be reworked by the author. The author fixes all these bugs and reports back to the moderator.

Follow-Up: It is the responsibility of the moderator to check that all the bugs found in the last meeting have been resolved. The document is then approved for release.

Benefits of Inspection Process:

-->Bug Reduction: According to a report on the inspection process at IBM, the number of bugs per thousand lines of code was reduced by two-thirds.

-->Bug Prevention: Based on the experience of previous inspections, analysis can be made for future inspections or projects, thereby preventing the bugs which have appeared earlier.

-->Productivity: Since all phases of SDLC may be inspected without waiting for code development and its execution, the cost of finding bugs decreases and productivity increases.

-->Real-time Feedback to Software Engineers: Developers find out the type of mistakes they make and what the error density is. Since they get this feedback at an early stage of development, they may improve their capability.

-->Reduction in Development Resource: Inspections reduce the effort required for dynamic testing and any rework during design and code, thereby causing an overall net reduction in the development resource.

-->Quality Improvement: The direct consequence of static testing also results in the improvement of quality of the final product.

-->Project Management
-->Checking Coupling and Cohesion
-->Learning through Inspection
-->Process Improvement

Effectiveness of Inspection Process:
In an analysis, the inspection process was found to be effective as compared to structural testing because the inspection process alone found 52% of the errors. So the error detection efficiency can be specified as:

                               Errors found by an inspection
Error detection efficiency = ----------------------------------------- * 100
                              Total errors in the item before inspection
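
For example (hypothetical numbers, for illustration only): if the item contained 50 errors before inspection and the inspection found 26 of them, the error detection efficiency = (26 / 50) * 100 = 52%.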
Variants of Inspection Process:
Reading Techniques:
A reading technique can be defined as a series of steps or procedures whose purpose is to guide an inspector to acquire a deep understanding of the inspected software product. Thus a reading technique can be regarded as a mechanism for the individual inspector to detect defects in the inspected product. The various reading techniques are:

Ad-hoc Method: The word ad-hoc only refers to the fact that no technical support on how to detect defects in a software artifact is given to the inspectors. In this case, defect detection fully depends on the skills, knowledge, and experience of an inspector.

Checklists: A checklist is a list of items that focus the inspector's attention on specific topics, such as common defects or organizational rules, while reviewing a software document.
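
For illustration (typical items, not taken from the original notes), a simple code-inspection checklist might contain items such as:
--Are all variables initialized before use?
--Are all loop termination conditions correct?
--Is every error condition handled?
--Does the code follow the organization's naming and commenting rules?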

Scenario-Based Reading: Different methods developed based on scenario-based reading are:

-->Perspective based Reading: The software item should be inspected from the perspective of different stakeholders. Inspectors of an inspection team have to check software quality as well as the software quality factors of a software artifact from different perspectives.
-->Usage based Reading: This method is applied in design inspections. Design documentation is inspected based on use cases, which are documented in the requirements specification.
-->Abstraction driven Reading: This method is designed for code inspections. In this method, an inspector reads a sequence of statements in the code and abstracts the functions these statements compute.
-->Task driven Reading: This method is also for code inspections. In this method, the inspector has to create a data dictionary, a complete description of the logic and a cross-reference between the code and the specifications.
-->Function Point based Reading: This method is used for inspecting requirements documents [103]. The scenarios, designed around function points, are known as Function Point Scenarios. A Function Point Scenario consists of questions and directs the focus of an inspector to a specific function-point item within the inspected requirements document.
Structured Walkthroughs:

-->It is a less formal and less rigorous technique as compared to inspection. The very common term used in the literature for static testing is Inspection, but it is a very formal process. If you want a less formal process, without the bar of an organized meeting, then walkthroughs are a good option.

-->A typical structured walkthrough team consists of:

--Coordinator: Organizes, moderates and follows up the walkthrough.
--Presenter / Developer: Introduces the item being inspected.
--Scribe / Recorder: Notes down the defects.
--Reviewer / Tester: Finds the defects in the item.
--Maintenance Oracle: Focuses on future maintenance of the project.
--Standards Bearer: Assesses adherence to standards.
--User Representative / Accreditation Agent: Reflects the needs of the user.

Technical Review:
--A review is similar to an inspection or walkthrough, except that the review team also includes management. Therefore, it is considered a higher-level technique than inspection or walkthrough.

--A technical review team is generally comprised of management-level representatives of the User and Project Management. Review agendas should focus less on technical issues and more on oversight than an inspection.

Validation Activities
Unit Testing:

A unit is the smallest testable part of an application, like functions, classes, procedures, interfaces. Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. Unit tests are basically written and executed by software developers to make sure that code meets its design and requirements and behaves as expected. The goal of unit testing is to segregate each part of the program and test that the individual parts are working correctly. This means that for any function or procedure, when a set of inputs is given, it should return the proper values.
Drivers:

Drivers are also a kind of dummy modules, known as "calling programs", which are used when main programs are under construction. Suppose a module is to be tested, wherein some inputs are to be received from another module. However, this module which passes inputs to the module to be tested is not ready and is under development. In such a situation, we need to simulate the inputs required by the module to be tested. This module, where the required inputs for the module under test are simulated for the purpose of module or unit testing, is known as a driver module.

For example: When we have modules B and C ready, but module A, which calls functions from modules B and C, is not ready, the developer will write a dummy piece of code for module A which will call modules B and C and pass the required inputs to them. This dummy piece of code is known as a driver.

Stubs:
Stubs are dummy modules, known as "called programs", which are used when sub-programs are under construction. The module under testing may also call some other module which is not ready at the time of testing. Therefore, these modules need to be simulated for testing. In most cases, dummy modules instead of the actual modules, which are not ready, are prepared for these subordinate modules. These dummy modules are called stubs.

Assume you have 3 modules: Module A, Module B and Module C. Module A is ready and we need to test it, but Module A calls functions from Modules B and C which are not ready, so the developer will write a dummy module which simulates B and C and returns values to Module A. This dummy module code is known as a stub.
Benefits of using Stubs and Drivers:

--Stubs allow the programmer to call a method in the code being developed, even if the method does not have the desired behaviour yet.
--By using stubs and drivers effectively, we can cut down our total debugging and testing time by testing small parts of a program individually, helping us to narrow down problems before they expand.
--Stubs and drivers can also be an effective tool for demonstrating progress in a business environment.

Example:

#include <stdio.h>

int calsum(int a, int b);
int caldiff(int a, int b);   /* module under development */
int calmul(int a, int b);    /* module under development */

int main()
{
    int a, b, sum, diff, mul;
    scanf("%d %d", &a, &b);
    sum = calsum(a, b);
    diff = caldiff(a, b);
    mul = calmul(a, b);
    printf("The sum is: %d", sum);
    return 0;
}

int calsum(int x, int y)
{
    int d;
    d = x + y;
    return d;
}

Suppose the main() module or the caldiff() & calmul() modules are not ready; then the driver and stub modules can be designed as follows:

Solution:

-->Driver for main() module:

void driver_main(void)
{
    int a, b, sum;
    scanf("%d %d", &a, &b);
    sum = calsum(a, b);
    printf("The sum is: %d", sum);
}

-->Stub for caldiff() module:

int caldiff(int x, int y)
{
    printf("Difference calculating module");
    return 0;
}
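
For completeness, a similar stub could be written for the calmul() module (a sketch following the same pattern as the caldiff() stub above; it is not part of the original example):

-->Stub for calmul() module:

int calmul(int x, int y)
{
    /* dummy module: stands in for the real multiplication module */
    printf("Multiplication calculating module");
    return 0;   /* dummy value returned to the caller */
}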

Integration Testing:
Once all the individual units are created and tested, we start combining those "unit tested" modules and start doing the integrated testing. So the meaning of integration testing is quite straightforward: integrate/combine the unit tested modules one by one and test the behaviour as a combined unit.
The main function or goal of integration testing is to test the interfaces between the units/modules. The individual modules are first tested in isolation. Once the modules are unit tested, they are integrated one by one, till all the modules are integrated, to check the combinational behaviour and validate whether the requirements are implemented correctly or not.

Integration Testing is necessary for the following reasons:
--It exposes inconsistency between the modules such as improper call or return sequences.
--Data can be lost across an interface.
--One module, when combined with another module, may not give the desired result.
--Data types and their valid ranges may mismatch between the modules.

There are three approaches for integration testing:


Decomposition Based Integration:
In this strategy, we do the decomposition based on the functional characteristics of the system. A functional characteristic is defined by what the module does, that is, actions or activities performed by the module. In this strategy our main goal is to test the interfaces among separately tested units. There are four approaches for this strategy:

-->Non Incremental Integration Testing / Big bang integration
-->Incremental Integration Testing
--Top-Down Integration
--Bottom-up Integration
-->Practical Approach for Integration Testing / Sandwich integration

Briefly, big bang groups the whole system and tests it in a single test phase. Top-down starts at the root of the tree and slowly works to the lower levels of the tree. Bottom-up mirrors top-down; it starts at the lower-level implementation of the system and works towards the main program. Sandwich is an approach that combines both top-down and bottom-up.
Non Incremental Integration Testing / Big bang integration:
--This is one of the easiest approaches to apply in integration testing.
--Here we treat the whole system as a subsystem and test it in a single test phase.
--Normally this means simply integrating all the modules, compiling them all at once and then testing the resulting system.
--This approach requires few resources to execute, as we do not need to identify critical components (like interactions, paths between the modules) nor require extra coding for the "dummy modules".
--This approach may be used for very small systems; however, it is still not recommended because it is not systematic.
--In larger systems, the low resource requirement in executing this testing is easily offset by the resources required to locate the problem when it occurs.

In summary, big bang integration has the following characteristics:
• Considers the whole system as a subsystem
• Tests all the modules in a single test session
• Only one integration testing session

Advantages:
• Low resource requirement
• Does not require extra coding

Disadvantages:
• Not systematic
• Hard to locate problems
• Hard to create test cases

Incremental Integration Testing:
In incremental integration testing, the developers integrate the modules one by one using stubs or drivers to uncover the defects. This approach is known as incremental integration testing. To the contrary, big bang is another integration testing technique, where all the modules are integrated in one shot.

Incremental Integration Testing is beneficial for the following reasons:

--Each module provides a definitive role to play in the project/product structure.

--Each module has clearly defined dependencies, some of which can be known only at run time.

--The greater advantage of incremental integration testing is that the defects are found early, in a smaller assembly, when it is relatively easy to detect their root cause.

--A disadvantage is that it can be time-consuming, since stubs and drivers have to be developed for performing these tests.

Types of Incremental integration testing:
-->Top-Down integration testing
-->Bottom-up integration testing

Top-Down Integration Testing:
--In top-down integration, we start at the target node at the root of the functional decomposition tree and work toward the leaves.
--Stubs are used to replace the children nodes attached to the target node.
--A test phase consists in replacing one of the stub modules with the real code and testing the resulting subsystem.
--If no problem is encountered then we do the next test phase.
--If all the children were replaced by real code at least once and meet the requirements, then we move down to the next level.
--Now we can replace the higher-level tested modules with real code and continue the integration testing.
--For top-down integration the number of integration testing sessions is: nodes − leaves + edges (a worked count is shown after this list).
Top-down integration has the drawback of requiring stubs:
--While stubs are simpler than the real code, it is not straightforward to write them; the emulation must be complete and realistic, that is, the test case results run on the stub should match the results on the real code.
--Being throw-away code, it does not reach the final product nor does it increase the functionality of the software; thus it is extra programming effort without a direct reward.
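
For example (a hypothetical decomposition tree, for illustration only): a tree with one root, two intermediate nodes and four leaves has 7 nodes, 4 leaves and 6 edges, so the number of top-down integration sessions = nodes − leaves + edges = 7 − 4 + 6 = 9.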

Modules subordinate to the top module are integrated in the following two ways:

Depth First Integration: In this type, all modules on a major control path of the design hierarchy are integrated first. In the figure shown above, modules A, B, and D are integrated first; next modules A, C, E, F, G, H are integrated.

Breadth First Integration: In this type, modules directly subordinate at each level, moving across the design hierarchy horizontally, are integrated first. In the figure shown above, modules B, C are integrated first; next modules D, E, F; and at last modules G, H are integrated.

Bottom-up Integration Testing:
--Bottom-up integration starts at the opposite end of the functional decomposition tree: instead of starting at the main program, we start at the lower-level implementation of the software.
--By moving in the opposite direction, the parent nodes are replaced by throw-away code instead of the children.
--These throw-away codes are also known as drivers.
--This approach allows us to start working with the simpler and lower levels of the implementation, allowing us to create testing environments more easily because of the simpler outputs of those modules.
--This also allows us to handle exceptions more easily.
--Conversely, we do not have an early prototype, thus the main program is the last to be tested. If there is a design error, it will be identified only at a later stage, which implies a high error-correction cost.
--Bottom-up integration is commonly used for object-oriented systems, real-time systems and systems with strict performance requirements.
--For bottom-up integration the number of integration testing sessions is: nodes − leaves + edges.
Practical Approach for Integration Testing / Sandwich integration:
--Sandwich integration combines top-down integration and bottom-up integration.
--The main concept is to maximize the advantages of top-down and bottom-up while minimizing their weaknesses.
--Sandwich integration uses a mixed approach where we use stubs at the higher levels of the tree and drivers at the lower levels (Figure).
--The testing direction starts from both sides of the tree and converges to the centre, thus the term sandwich.
--This will allow us to test both the top and bottom layers in parallel and decrease the number of stubs and drivers required in integration testing.

Advantages:
--Top and bottom layers can be done in parallel
--Fewer stubs and drivers needed
--Easy to construct test cases
--Better coverage control
--Integration is done as soon as a component is implemented

Disadvantages:
--Still requires throw-away code programming
--Partial big bang integration
--Hard to isolate problems

Call-Graph based Integration:
A call graph is a directed graph, where the nodes are either modules or units, and a directed edge from one node to another node means one module has called another module. The call graph can be captured in a matrix form which is known as the adjacency matrix.
There are two types of integration testing based on the call graph:
-->Pairwise Integration
-->Neighbourhood Integration

Pairwise Integration:
--In pair-wise integration, we eliminate the need for stubs and drivers by using the real code instead.
--Using the real code for all the modules at once would be similar to big bang, which has a problem-isolation difficulty due to the large number of modules being tested at once; therefore each test session is restricted to a pair of modules.
--By pairing up the modules using the edges, we will have a number of test sessions equal to the number of edges that exist in the call graph.
--Since the edges correspond to functions or procedures invoked, in a standard system this implies many test sessions.
--For pair-wise integration the number of integration testing sessions is the number of edges.
Neighbourhood Integration:
--While pair-wise integration eliminates the need for stubs and drivers, it still requires many test sessions.
--As an attempt to improve on pair-wise integration, neighbourhood integration requires fewer test sessions.
--In neighbourhood integration, we create a subsystem for a test session by taking a target node and grouping all the nodes near it.
--"Near" is defined as the nodes that are linked to the target node, that is, the immediate predecessors or successors of it.
--By doing this we will be able to reduce considerably the number of test sessions required.
--The total number of test sessions in neighbourhood integration can be calculated as:
    Neighbourhoods = nodes − sink nodes
For example, for a call graph with 20 nodes of which 10 are sink nodes: 20 − 10 = 10,
where a sink node is an instruction in a module at which execution terminates.

Path Based Integration:
--By moving to path-based integration, we will be approaching integration testing from a new direction. Here we will try to combine both the structural and the functional approach in path-based integration.
--Finally, instead of testing the interfaces (which are structural), we will be testing the interactions (which are behavioural).
--Here, when a unit is executed, a certain path of source statements is traversed.
--When this unit calls source statements from another unit, the control is passed from the calling unit to the called unit.
--For integration testing we treat these unit calls as an exit followed by an entry.

We need to understand the following definitions for path-based integration:

Source Node: A program statement fragment at which program execution begins or resumes.

Sink Node: A statement fragment at which program execution terminates.

Module execution path (MEP): A sequence of statements within a module that begins with a source node and ends with a sink node, with no intervening sink nodes.

Message: A programming language mechanism by which one unit transfers control to another unit. It is usually interpreted as a subroutine / function invocation. The unit which receives the message always returns control to the message source.

MM-path: A module-to-module path. It is an interleaved sequence of module execution paths and messages, which is used to describe sequences of module execution paths that include transfers of control among separate units. MM-paths always represent feasible execution paths, and these paths cross unit boundaries.

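As an illustration only (a minimal sketch with hypothetical functions, not taken from the original notes), the comments in the small C fragment below mark the source nodes, sink nodes and the message; the MM-path is the interleaved sequence of the two module execution paths joined by the call and the return:

#include <stdio.h>

int doubleit(int x)          /* called unit                                  */
{                            /* source node: execution resumes here on entry */
    return 2 * x;            /* sink node: control returns to the caller     */
}

int main(void)               /* calling unit                                  */
{                            /* source node: program execution begins here    */
    int n = 5;
    int m = doubleit(n);     /* message: control is transferred to doubleit();
                                the call is treated as an exit followed by an entry */
    printf("%d\n", m);
    return 0;                /* sink node: execution of main() terminates     */
}

/* One MM-path: (main: source node ... call site) -> message ->
   (doubleit: source node ... sink node) -> return -> (main: ... sink node) */
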
Function Testing:
Function testing is defined as "the process of attempting to detect discrepancies between the functional specifications of a software and its actual behaviour". When an integrated system is tested, all its specified functions and external interfaces are tested on the software. Every functionality of the system specified in the functions is tested according to its external specifications. The function test must determine if each component or business event:

--Performs in accordance with the specifications,
--Responds correctly to all conditions that may be presented by incoming events / data,
--Moves data correctly from one business event to the next (including data stores),
--Ensures that business events are initiated in the order required to meet the business objectives of the system.

An effective function test cycle must have a defined set of processes and deliverables. The primary processes / deliverables for requirements-based function testing are:

Test Planning: During planning, the test leader, with assistance from the test team, defines the scope, schedule, and deliverables for the function test cycle.

Partitioning / Functional Decomposition: Functional decomposition of a system is the breakdown of a system into functional components or functional areas.

Requirement Definition: The testing organization needs specified requirements in the form of proper documents to proceed with the function test.

Test case design: A tester designs and implements a test case to validate that the product performs in accordance with the requirements.

Traceability matrix formation: Test cases need to be traced / mapped back to the appropriate requirement. A function coverage matrix is prepared. This matrix is a table listing specific functions to be tested, the priority for testing each function, and the test cases that contain tests for each function.

Functions/Features    Priority    Test Cases
F1                    3           T2, T4, T6
F2                    1           T1, T3, T5

Test case execution: As in all the phases of testing, an appropriate set of test cases need to be executed and the results of those test cases recorded.

System Testing:
--System testing is the type of testing done to check the behaviour of a complete and fully integrated software product, based on the software requirements specification (SRS) document.
--The main focus of this testing is to evaluate the Business / Functional / End-user requirements.
--This is black box type of testing where the external working of the software is evaluated with the help of requirement documents, and it is totally based on the user's point of view.
--This type of testing does not require knowledge of the internal design or structure or code.
--This testing is to be carried out only after system integration testing is completed, wherein both functional and non-functional requirements are verified.
--In integration testing the testers concentrate on finding bugs/defects in the integrated modules, but in software system testing the testers concentrate on finding bugs/defects based on the software application behaviour, the software design and the expectations of the end user.

Categories of System Testing:
Recovery Testing:
Recovery is just like the exception-handling feature of a programming language. It is a type of non-functional testing. Recovery testing is done in order to check how fast and how well the application can recover after it has gone through any type of crash or hardware failure, etc. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. Thus recovery testing is "the activity of testing how well the software is able to recover from crashes, hardware failures, and other similar problems".

Some examples of recovery testing are:
--When an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection was broken.
--Restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them or not.

Beizer proposes that testers should work on the following areas during recovery testing:
Restart: Testers must ensure that all transactions have been reconstructed correctly and that all devices are in proper states.

Switchover: Recovery can also be done if there are standby components; in case of failure of one component, the standby takes over control.

Security Testing:
--It is a type of non-functional testing.
--Security testing is basically a type of software testing that is done to check whether the application or the product is secured or not.
--It checks to see if the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without any authorization.
--It is a process to determine that an information system protects data and maintains functionality as intended.
--Security testing is performed to check whether there is any information leakage, in the sense of protecting the application by encryption or by using a wide range of software, hardware, firewalls, etc.
--Software security is about making software behave correctly in the presence of a malicious attack.

Types of Security Requirements:
--Security requirements should be associated with each functional requirement.
--In addition to security concerns that are directly related to particular requirements, a software project has security issues that are global in nature.

How to perform security testing:
Testers must use a risk-based approach, grounded in both the system's architectural reality and the attacker's mindset, to gauge software security adequately. By identifying risks and the potential loss associated with those risks in the system, and creating tests driven by those risks, the tester can properly focus on areas of code in which an attack is likely to succeed.
Elements of Security Testing:
--Confidentiality
--Integrity
--Authentication
--Availability
--Authorization
--Non-repudiation
Performance Testing:
--Software performance testing is a means of quality assurance (QA).
--It involves testing software applications to ensure they will perform well under their expected workload.
--Features and functionality supported by a software system are not the only concern. A software application's performance, like its response time, does matter.
--The goal of performance testing is not to find bugs but to eliminate performance bottlenecks.
--Performance testing is done to provide stakeholders with information about their application regarding speed, stability and scalability.
--More importantly, performance testing uncovers what needs to be improved before the product goes to market.
--Without performance testing, software is likely to suffer from issues such as: running slow while several users use it simultaneously, inconsistencies across different operating systems and poor usability.
--Performance testing will determine whether or not the software meets speed, scalability and stability requirements under expected workloads.
--Applications sent to market with poor performance metrics due to non-existent or poor performance testing are likely to gain a bad reputation and fail to meet expected sales goals.
--Also, mission-critical applications like space launch programs or life-saving medical equipment should be performance tested to ensure that they run for a long period of time without deviations.

The following tasks must be done for this:

--Develop a high-level plan including requirements, resources, timelines, and milestones.
--Develop a detailed performance test plan.
--Specify the test data needed.
--Execute tests, probably repeatedly, in order to see whether any unaccounted factor might affect the results.

Load Testing:
--Load testing is a type of non-functional testing.
--A load test is a type of software testing which is conducted to understand the behaviour of the application under a specific expected load.
--Load testing is performed to determine a system's behaviour under both normal and peak conditions.
--It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and determine which element is causing degradation. E.g. if the number of users is increased, how much CPU and memory will be consumed, and what are the network and bandwidth response times.
--Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.
--Load testing involves simulating real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously.
--Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme workloads or when some of its hardware or software has been compromised.
--The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.

Examples of load testing include:
--Downloading a series of large files from the internet.
--Running multiple applications on a computer or server simultaneously.
--Assigning many jobs to a printer in a queue.
--Subjecting a server to a large amount of traffic.
--Writing and reading data to and from a hard disk continuously.
Stress Testing:
--It is a type of non-functional testing.
--It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
--It is a form of software testing that is used to determine the stability of a given system.
--It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behaviour under normal circumstances.
--The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
--Thus "stress testing tries to break the system under test by overwhelming its resources in order to find the circumstances under which it will crash".
--The areas that may be stressed in a system are: input transactions, disk space, output, communications, and interaction with users.

Usability Testing:
--Usability testing is an essential element of quality assurance.
--It is the measure of a product's potential to accomplish the goals of the user.
--Usability testing is a method by which users of a product are asked to perform certain tasks in an effort to measure the product's ease-of-use, task time, and the user's perception of the experience.
--This is seen as a unique usability practice because it provides direct input on how real users use the system.
--Usability testing measures human-usable products to fulfil the user's purpose.
--The items which take benefit from usability testing are web sites or web applications, documents, computer interfaces, consumer products, and devices.
--Usability testing assesses the usability of a particular object or group of objects, whereas common human-computer interaction studies try to formulate universal principles.

What the user wants or expects from the system can be determined using several ways like:

--Area experts,
--Group meetings,
--Surveys,
--Analysing similar products.

Usability characteristics against which testing is conducted are:
--Ease of Use
--Interface steps
--Response Time
--Help System
--Error Messages

Compatibility / Conversion / Configuration Testing:
--Compatibility testing is a non-functional testing conducted to ensure customer satisfaction.
--It is to determine whether your software application or product is proficient enough to run in different browsers, databases, hardware, operating systems, mobile devices and networks.
--The application could also be impacted by different versions, resolutions, internet speeds and configurations, etc. Hence it is important to test the application in all possible manners to reduce failures and overcome the embarrassment of bug leakage.
--As a non-functional test, compatibility testing is to endorse that the application runs properly in different browsers, versions, operating systems and networks successfully.
--A compatibility test should always be performed in a real environment instead of a virtual environment. Test the compatibility of the application with different browsers and operating systems to guarantee 100% coverage.

Types of Software compatibility testing:
--Browser compatibility testing
--Hardware
--Networks
--Mobile Devices
--Operating System
--Versions

Acceptance Testing:
--After the system test has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing.
--Acceptance testing is basically done by the user or customer, although other stakeholders may be involved as well.
--The goal of acceptance testing is to establish confidence in the system.
--Acceptance testing is most often focused on a validation type of testing.
--Thus "acceptance testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system or not."
--Thus acceptance testing is designed to:
  -Determine whether the software is fit for the user to use.
  -Make users confident about the product.
  -Determine whether a software system satisfies its acceptance criteria.
  -Enable the buyer to determine whether to accept the system.

Types of Acceptance Testing:
-->Alpha Testing
-->Beta Testing

Alpha Testing:
--Alpha testing is one of the most common software testing strategies used in software development. It is especially used by product development organizations.
--This test takes place at the developer's site. Developers observe the users and note problems.
--Alpha testing is testing of an application when development is about to complete. Minor design changes can still be made as a result of alpha testing.
--Alpha testing is typically performed by a group that is independent of the design team but still within the company, e.g. in-house software test engineers or software QA engineers.
--Alpha testing is the final testing before the software is released to the general public. It has two phases:
-->Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site.
-->Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Entry Criteria for Alpha:
--All features are complete / testable.
--High bugs on primary platforms are fixed / verified.
--50% of medium bugs on primary platforms are fixed / verified.
--All features are tested on the primary platforms.
--Performance has been measured / compared.
--Alpha sites are ready for installation.

Exit criteria for Alpha:
--Get responses / feedback from the customers.
--Prepare a report of any serious bugs being noticed.
--Notify bug-fixing issues to developers.

Beta Testing:
--In software development, a beta test is the second phase of software testing in which a sampling of the intended audience tries the product out.
--It is also known as field testing. It takes place at the customer's site. It sends the system to users who install it and use it under real-world working conditions.
--Beta is the second letter of the Greek alphabet.
--Originally, the term alpha test meant the first phase of testing in a software development process. The first phase includes unit testing, component testing, and system testing.
--Beta testing can be considered "pre-release testing."
--Beta testing is also sometimes referred to as user acceptance testing (UAT) or end user testing.
--In this phase of software development, applications are subjected to real-world testing by the intended audience for the software.
--The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.

Entry Criteria for Beta:
--Positive responses from alpha sites.
--Customer bugs in alpha testing have been addressed.
--There are no fatal errors which can affect the functionality of the software.
--Beta sites are ready for installation.

Exit criteria for Beta:
--Get responses / feedback from the beta testers.
--Prepare a report of all serious bugs.
--Notify bug-fixing issues to developers.

Regression Testing
Progressive vs regressive testing, Regression testability, Objectives of regression testing, When is regression testing done?, Regression testing types, Regression testing techniques

Progressive vs Regressive Testing:
--All the test case design methods or testing techniques discussed till now are referred to as progressive testing or development testing.
--The purpose of regression testing is to confirm that a recent program or code change has not adversely affected existing features.
--Regression testing is nothing but full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine.
--This testing is done to make sure that new code changes do not have side effects on the existing functionalities.
--It ensures that the old code still works once the new code changes are done.

Need of Regression Testing:
--Change in requirements, and code is modified according to the requirement
--New feature is added to the software
--Defect fixing
--Performance issue fix

Definition:
Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.

Regression Testability:
Regression testability refers to the property of a program, modification or test suite that lets it be effectively and efficiently regression-tested. We can classify a program as regression testable if most single-statement modifications to the program entail (involve) rerunning only a small proportion of the current test suite.

Objectives of Regression Testing:
--It tests to check that the bug has been addressed: The first objective in bug-fix testing is to check whether the bug fixing has worked or not.
--It finds other related bugs: Regression tests are necessary to validate that the system does not have any related bugs.
--It tests to check the effect on other parts of the program: It may be possible that bug fixing has unwanted consequences on other parts of a program. Therefore, it is necessary to check the influence of changes in one part on the other parts of the program.

When is regression testing done?

Software Maintenance:
--Corrective Maintenance: Changes made to correct a system after a failure has been observed.
--Adaptive Maintenance: Changes made to achieve continuing compatibility with the target environment or other systems.
--Perfective Maintenance: Changes made to improve or add capabilities.
--Preventive Maintenance: Changes made to increase robustness, maintainability, portability, and other features.

Rapid Iterative Development: The extreme programming approach requires that a test be developed for each class and that this test be re-run every time the class changes.

Compatibility Assessment and Benchmarking: Some test suites are designed to be run on a wide range of platforms and applications to establish conformance with a standard or to evaluate time and space performance.

Regression Testing Types:

Bug-fix Regression: This testing is performed after a bug has been reported and fixed.

Side-Effect Regression / Stability Regression: It involves retesting a substantial part of the product. The goal is to prove that the changes have no detrimental effect on something that was in order earlier.

Regression Testing Techniques:
There are different techniques for regression testing. They are:

-->Regression test selection technique: This technique attempts to reduce the time required to retest a modified program by selecting some subset of the existing test suite.

-->Test case prioritization technique: Regression test prioritization attempts to reorder a regression test suite so that those tests with the highest priority, according to some established criteria, are executed earlier in the regression testing process than those with lower priority. There are two types of prioritization:

(a) General Test Case Prioritization: For a given program P and test suite T, we prioritize the test cases in T that will be useful over a succession of subsequent modified versions of P, without any knowledge of the modified versions.

(b) Version-Specific Test Case Prioritization: We prioritize the test cases in T, when P is modified to P', with the knowledge of the changes made in P.

-->Test Suite Reduction Technique: It reduces testing costs by permanently eliminating redundant test cases from test suites, in terms of the code or functionalities exercised.

Selective Retest Technique:

The selective retest technique attempts to reduce the cost of testing by identifying the portions of P' (the modified version of program P) that must be exercised by the regression test suite. Following are the characteristic features of the selective retest technique:
-->It minimizes the resources required to regression test a new version.
-->It is achieved by minimizing the number of test cases applied to the new version.
-->It analyses the relationship between the test cases and the software elements they cover.
-->It uses the information about changes to select test cases.

Steps in the selective retest technique:
1. Select T', a subset of T, a set of test cases to execute on P'.
2. Test P' with T', establishing the correctness of P' with respect to T'.
3. If necessary, create T'', a set of new functional test cases for P'.
4. Test P' with T'', establishing the correctness of P' with respect to T''.
5. Create T''', a new test suite and test execution profile for P', from T, T' and T''.

Strategy for Test Case Selection:

For large software systems, there may be thousands of test cases available in the test suite. When a change is introduced into the system for the next version, rerunning all the test cases is a costly and time-consuming task. Therefore, selecting a subset of test cases from the original test suite is necessary. But the use of multiple criteria should increase the code coverage. So, an effective test case selection strategy is to be designed based on the code coverage.

Selection criteria based on code:
-->Fault revealing test cases
-->Modification revealing test cases
-->Modification traversing test cases
Regression Test Selection Techniques:

Minimization Techniques: Minimization-based regression test selection techniques attempt to select minimal sets of test cases from T that yield coverage of modified or affected portions of P. For example, one such technique uses systems of linear equations to express relationships between test cases and basic blocks (single-entry, single-exit sequences of statements in a procedure). The technique uses a 0-1 integer programming algorithm to identify a subset T' of T that ensures that every segment that is statically reachable from a modified segment is exercised by at least one test case in T' that also exercises the modified segment.

Dataflow Techniques: Dataflow-coverage-based regression test selection techniques select test cases that exercise data interactions that have been affected by modifications. For example, the technique requires that every definition-use pair that is deleted from P, new in P', or modified for P' be tested. The technique selects every test case in T that, when executed on P, exercised deleted or modified definition-use pairs, or executed a statement containing a modified predicate.

Safe Techniques: Most regression test selection techniques (minimization and dataflow techniques among them) are not designed to be safe. Techniques that are not safe can fail to select a test case that would have revealed a fault in the modified program. In contrast, when an explicit set of safety conditions can be satisfied, safe regression test selection techniques guarantee that the selected subset, T', contains all test cases in the original test suite T that can reveal faults in P'.

Ad Hoc / Random Techniques: When time constraints prohibit the use of a retest-all approach, but no test selection tool is available, developers often select test cases based on "hunches", or loose associations of test cases with functionality. Another simple approach is to randomly select a predetermined number of test cases from T.

Retest-All Technique: The retest-all technique simply reuses all existing test cases. To test P', the technique effectively "selects" all test cases in T.

Evaluating Regression Test Selection Techniques:

Inclusiveness: Let M be a regression test selection technique. Inclusiveness measures the extent to which M chooses modification-revealing tests from T for inclusion in T'. We define inclusiveness relative to a particular program, modified program, and test suite, as follows:

DEFINITION
Suppose T contains n tests that are modification-revealing for P and P', and suppose M selects m of these tests. The inclusiveness of M relative to P, P', and T is
1) the percentage given by the expression 100 * (m / n) if n ≠ 0, or
2) 100% if n = 0.
For example, if T contains 50 tests of which eight are modification-revealing for P and P', and M selects two of these eight tests, then M is 25% inclusive relative to P, P', and T. If T contains no modification-revealing tests, then every test selection technique is 100% inclusive relative to P, P', and T.

Precision: Let M be a regression test selection technique. Precision measures the extent to which M omits tests that are non-modification-revealing. We define precision relative to a particular program, modified program, and test suite, as follows:

DEFINITION
Suppose T contains n tests that are non-modification-revealing for P and P', and suppose M omits m of these tests. The precision of M relative to P, P', and T is
1) the percentage given by the expression 100 * (m / n) if n ≠ 0, or
2) 100% if n = 0.
For example, if T contains 50 tests of which 44 are non-modification-revealing for P and P', and M omits 33 of these 44 tests, then M is 75% precise relative to P, P', and T. If T contains no non-modification-revealing tests, then every test selection technique is 100% precise relative to P, P', and T.
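
Both definitions reduce to the same arithmetic. As an illustration only (a small helper sketch, not part of the original text), the expression can be written in C and checked against the two worked examples above:

#include <stdio.h>

/* percentage = 100 * (m / n) if n != 0, else 100%.
   For inclusiveness: m = modification-revealing tests selected,
                      n = total modification-revealing tests in T.
   For precision:     m = non-modification-revealing tests omitted,
                      n = total non-modification-revealing tests in T. */
double percentage(int m, int n)
{
    return (n == 0) ? 100.0 : 100.0 * m / n;
}

int main(void)
{
    printf("Inclusiveness: %.0f%%\n", percentage(2, 8));    /* 25% */
    printf("Precision:     %.0f%%\n", percentage(33, 44));  /* 75% */
    return 0;
}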

Efficiency: We measure the efficiency of regression test selection techniques in terms of their space and time requirements. Where time is concerned, a test selection technique is more economical than the retest-all technique if the cost of selecting T' is less than the cost of running the tests in T − T'. Space efficiency primarily depends on the test history and program analysis information a technique must store. Thus, both space and time efficiency depend on the size of the test suite that a technique selects, and on the computational cost of that technique.

Regression Test Prioritization:
The regression test prioritization approach is different as compared to selective retest techniques. Regression test prioritization attempts to reorder a regression test suite so that those tests with the highest priority, according to some established criterion, are executed earlier in the regression testing process than those with a lower priority.

The steps for this approach are:

1. Select T' from T, a set of test cases to execute on P'.
2. Produce T'p, a permutation of T', such that T'p will have a better rate of fault detection than T'.
3. Test P' with T'p in order to establish the correctness of P' with respect to T'p.
4. If necessary, create T'', a set of new functional or structural tests for P'.
5. Test P' with T'' in order to establish the correctness of P' with respect to T''.
6. Create T''', a new test suite for P', from T, T'p and T''.
Validation testing:-

Validation testing is testing where the tester performs functional and non-functional testing. Here functional testing includes Unit Testing (UT), Integration Testing (IT) and System Testing (ST), and non-functional testing includes User Acceptance Testing (UAT).

Validation testing is also known as dynamic testing, where we are ensuring that "we have developed the right product." It also checks that the software meets the business needs of the client.

Validation testing can be best demonstrated using the V-Model. The software/product under test is evaluated during this type of testing.

Activities:
 Unit Testing
 Integration Testing
 System Testing
 User Acceptance Testing
Unit Testing:-

Unit testing involves the testing of each unit or an individual component of the software application. It is the first level of functional testing. The aim behind unit testing is to validate that each unit component performs as expected.

A unit is a single testable part of a software system and is tested during the development phase of the application software.

The purpose of unit testing is to test the correctness of isolated code. A unit component is an individual function or piece of code of the application. The white box testing approach is used for unit testing, and it is usually done by the developers.

Whenever the application is ready and given to the test engineer, he/she will start checking every component of the module, or every module of the application, independently or one by one; this process is known as unit testing or component testing.

Why Unit Testing?

Generally, the software goes through four levels of testing: Unit Testing, Integration Testing, System Testing, and Acceptance Testing, but sometimes, due to time pressure, software testers do only minimal unit testing. Skipping unit testing may lead to higher defects during Integration Testing, System Testing, and Acceptance Testing, or even during Beta Testing, which takes place after the completion of the software application.

Some crucial reasons are listed below:

o Unit testing helps testers and developers understand the code base, which makes them able to change defect-causing code quickly.
o Unit testing helps in the documentation.
o Unit testing fixes defects very early in the development phase, which is why there is a possibility of a smaller number of defects in upcoming testing levels.
o It helps with code reusability by migrating code and test cases.

Unit Testing Techniques:

Unit testing uses all white box testing techniques as it uses the code of the software application:

o Data flow Testing
o Control Flow Testing
o Branch Coverage Testing
o Statement Coverage Testing
o Decision Coverage Testing

Advantages and disadvantages of unit testing

Advantages

o Unit testing uses a modular approach, due to which any part can be tested without waiting for the testing of other parts to be completed.
o The developing team focuses on the provided functionality of the unit and how the functionality should look in unit test suites, to understand the unit API.
o Unit testing allows the developer to refactor code after a number of days and ensure the module still works without any defect.

Disadvantages

o It cannot identify integration or broad-level errors as it works on units of the code.
o In unit testing, evaluation of all execution paths is not possible, so unit testing is not able to catch each and every error in a program.
o It is best used in conjunction with other testing activities.

Example of Unit testing

Let us see one sample example for a better understanding of the concept of unit testing:
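
A minimal sketch (using an illustrative calsum()-style function similar to the driver/stub example earlier; the function and values are assumptions, not from the original notes) of what a unit test can look like in C, using the standard assert macro:

#include <assert.h>
#include <stdio.h>

/* unit under test: returns the sum of two integers */
int calsum(int x, int y)
{
    return x + y;
}

/* unit test: for a given set of inputs the unit must return the proper values */
int main(void)
{
    assert(calsum(2, 3) == 5);     /* typical values            */
    assert(calsum(-4, 4) == 0);    /* negative + positive input */
    assert(calsum(0, 0) == 0);     /* boundary case: zeros      */
    printf("All unit tests passed\n");
    return 0;
}

If any assertion fails, the program aborts and reports the failing expression, which points the developer to the defect in the isolated unit.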
Integration testing

Integration testing is the second level of the software testing process and comes after unit testing. In this testing, units or individual components of the software are tested in a group. The focus of the integration testing level is to expose defects at the time of interaction between integrated components or units.

Unit testing uses modules for testing purposes, and these modules are combined and tested in integration testing. The software is developed with a number of software modules that are coded by different coders or programmers. The goal of integration testing is to check the correctness of communication among all the modules.

Once all the components or modules are working independently, then checking the data flow between the dependent modules is known as integration testing.

Let us see one sample example of a banking application, as we can see in the below image of amount transfer.
o First, we will log in as user P, go to amount transfer and send an amount of Rs200; the confirmation message should be displayed on the screen as "amount transferred successfully". Now log out as P, log in as user Q, go to the amount balance page and check the balance in that account = Present balance + Received balance. Therefore, the integration test is successful.
o Also, we check if the balance has reduced by Rs200 in user P's account.
o Click on the transaction; in P and Q, the message should be displayed regarding the date and time of the amount transfer.

Guidelines for Integration Testing

o We go for integration testing only after functional testing is completed on each module of the application.
o We always do integration testing by picking module by module so that a proper sequence is followed, and also so that we don't miss out on any integration scenarios.
o First, determine the test case strategy through which executable test cases can be prepared according to the test data.
o Examine the structure and architecture of the application, identify the crucial modules to test them first, and also identify all possible scenarios.
o Design test cases to verify each interface in detail.
o Choose input data for test case execution. Input data plays a significant role in testing.
o If we find any bugs, then communicate the bug reports to the developers, fix the defects and retest.
o Perform positive and negative integration testing.
o Here positive testing implies that if the total balance is Rs15,000 and we are transferring Rs1,500, we check whether the amount transfer works fine. If it does, then the test is a pass.
o And negative testing means that if the total balance is Rs15,000 and we are transferring Rs20,000, we check whether the transfer occurs or not; if it does not occur, the test is a pass. If it happens, then there is a bug in the code, and we will send it to the development team for fixing that bug. (A small code sketch of this positive/negative check follows this list.)
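
As an illustration only (a hypothetical transfer() function and account structure; these names and values are assumptions, not part of the original notes), one positive and one negative integration check between an account module and a transfer module could be sketched in C as:

#include <assert.h>
#include <stdio.h>

/* hypothetical account module */
struct account { int balance; };

/* hypothetical transfer module: moves amount from one account to another;
   returns 1 on success, 0 if the source balance is insufficient */
int transfer(struct account *from, struct account *to, int amount)
{
    if (amount > from->balance)
        return 0;                 /* transfer must not occur */
    from->balance -= amount;
    to->balance += amount;
    return 1;
}

int main(void)
{
    struct account p = {15000}, q = {0};

    /* positive integration test: Rs1,500 from a balance of Rs15,000 must succeed */
    assert(transfer(&p, &q, 1500) == 1);
    assert(p.balance == 13500 && q.balance == 1500);

    /* negative integration test: Rs20,000 from the remaining balance must be rejected */
    assert(transfer(&p, &q, 20000) == 0);
    assert(p.balance == 13500 && q.balance == 1500);

    printf("Integration checks passed\n");
    return 0;
}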

Example of integration testing

o Let us assume that we have a Gmail application where we perform the integration testing.
o First, we will do functional testing on the login page, which includes the various components such as username, password, submit, and cancel buttons. Only then can we perform integration testing.
o The different integration scenarios are as follows:

Scenario 1:

o First, we log in as user P, click on Compose mail and perform the functional testing for the specific components.
o Now we click on Send and also check Save Drafts.
o After that, we send a mail to Q and verify in the Sent Items folder of P to check if the sent mail is there.
o Now, we will log out as P, log in as Q, move to the Inbox and verify whether the mail has reached.

Scenario 2: We also perform the integration testing on Spam folders. If a particular contact has been marked as spam, then any mail sent by that user should go to the spam folder and not to the inbox.

As we can see in the below image, we will perform the functional testing for all the text fields and every feature. Then we will perform integration testing for the related functions. We first test the add user, list of users, delete user, edit user, and then search user.

Note:

o For some features, we might be performing only functional testing, and for some features we perform both functional and integration testing, based on the feature's requirements.
o Prioritizing is essential, and we should do it at all the phases, which means we will open the application and select which feature needs to be tested first. Then go to that feature and choose which component must be tested first. Go to those components and determine what values are to be entered first. And don't apply the same rule everywhere, because testing logic varies from feature to feature.
o While performing testing, we should test one feature entirely and only then proceed to another feature.
o For any two features, we may be performing only positive integration testing or both positive and negative integration testing, and this also depends on the feature's needs.

Integration Testing Techniques


Any testing technique (Blackbox, Whitebox, and Greybox) can be used for Integration Testing; some are
listed below:

Black Box Testing


o State Transition technique
o Decision Table Technique
o Boundary Value Analysis
o All-pairs Testing
o Cause and Effect Graph
o Equivalence Partitioning
o Error Guessing

White Box Testing


o Data flow testing
o Control Flow Testing
o Branch Coverage Testing
o Decision Coverage Testing

Types of Integration Testing


o Integration testing can be classified into two parts:
o Incremental integration testing
o Non-incremental integration testing
Incremental Approach

In the Incremental Approach, modules are added one by one, in ascending order or according to need.
The selected modules must be logically related. Generally, two or more modules are added and
tested to determine whether the functions work correctly. The process continues until all the modules
have been tested successfully.

OR

In this type of testing, there is a strong relationship between the dependent modules. Suppose we take
two or more modules and verify that the data flow between them is working fine. If it is, then add more
modules and test again.
For example: Suppose we have a Flipkart application; we will perform incremental integration testing,
and the flow of the application would look like this:

Flipkart→ Login→ Home → Search→ Add cart→Payment → Logout
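One way to express that flow as an incremental test sequence is sketched below. The step functions are hypothetical placeholders for the real Flipkart modules; each step hands its data to the next, so the data flow between modules is checked one link at a time, in the same order as the flow above.

# Hypothetical stand-ins for each module in the flow.
def login(user):                return {"session": "sess-1", "user": user}
def open_home(state):           return {"session": state["session"], "page": "home"}
def search(state, query):       return {"session": state["session"], "results": [query]}
def add_to_cart(state, item):   return {"session": state["session"], "cart": [item]}
def pay(state):                 return {"session": state["session"], "status": "paid"}
def logout(state):              return {"session": None}

def test_incremental_flow():
    s1 = login("P")
    assert s1["session"]                  # Login -> Home
    s2 = open_home(s1)
    assert s2["page"] == "home"           # Home -> Search
    s3 = search(s2, "phone")
    assert s3["results"] == ["phone"]     # Search -> Add cart
    s4 = add_to_cart(s3, "phone")
    assert s4["cart"] == ["phone"]        # Add cart -> Payment
    s5 = pay(s4)
    assert s5["status"] == "paid"         # Payment -> Logout
    s6 = logout(s5)
    assert s6["session"] is None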

Incremental integration testing is carried out by further methods:

o Top-Down approach
o Bottom-Up approach

Top-Down Approach

The top-down testing strategy deals with the process in which higher level modules are tested first, using
stubs in place of the lower level modules, until the testing of all the modules is completed. Major design flaws can be
detected and fixed early because the critical modules are tested first. In this method, we add the
modules incrementally, one by one, and check the data flow in the same order.
In the top-down approach, we ensure that the module we are adding is the child of the
previous one, e.g. Child C is a child of Child B, and so on, as sketched below:
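A minimal sketch of the top-down idea: the higher-level module under test calls stubs in place of its not-yet-integrated children. The module names, the place_order function, and the stub behaviour are assumptions for illustration only.

# Higher-level module under test: it delegates to two child modules.
def place_order(cart, payment_module, shipping_module):
    receipt = payment_module(cart["total"])
    tracking = shipping_module(cart["items"])
    return {"receipt": receipt, "tracking": tracking}

# Stubs standing in for child modules that are not yet integrated.
def payment_stub(total):
    return f"stub-receipt-{total}"

def shipping_stub(items):
    return "stub-tracking-001"

def test_place_order_with_stubs():
    cart = {"items": ["book"], "total": 250}
    result = place_order(cart, payment_stub, shipping_stub)
    # The parent module is exercised first; the stubs just return canned data.
    assert result["receipt"] == "stub-receipt-250"
    assert result["tracking"] == "stub-tracking-001"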
Advantages:

o Identification (localization) of defects is easier, since testing proceeds from the top.


o An early prototype is possible.
o Critical modules are tested first, so there are fewer chances of major design defects slipping through.

Disadvantages:

o Due to the high number of stubs required, it gets quite complicated.


o Lower level modules are tested inadequately.

Bottom-Up Method

The bottom-up testing strategy deals with the process in which lower level modules are tested first and
then integrated with the higher level modules until the testing of all the modules is completed. The top level critical
modules are tested last, so defects in them may be found late. In other words, we add the modules
from the bottom to the top and check the data flow in the same order.

In the bottom-up method, we ensure that the module we are adding is the parent of the previous
one, as sketched below:
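The mirror-image sketch for the bottom-up method: a temporary driver exercises the lower-level module directly before its parent exists. The calculate_interest function, the driver, and the test values are hypothetical.

# Lower-level module, developed and tested first.
def calculate_interest(principal, rate_percent, years):
    return principal * rate_percent / 100 * years

# Driver: temporary code that plays the role of the not-yet-written parent
# module and feeds the lower-level module with test data.
def interest_driver():
    cases = [
        (10000, 5, 1, 500.0),
        (20000, 7.5, 2, 3000.0),
    ]
    for principal, rate, years, expected in cases:
        assert calculate_interest(principal, rate, years) == expected
    return "all driver checks passed"

if __name__ == "__main__":
    print(interest_driver())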

Advantages

o Identification of defects is easy.


o There is no need to wait for the development of all the modules, which saves time.

Disadvantages

o Critical modules are tested last, due to which defects may be found late.
o There is no possibility of an early prototype.

In addition to these, there is one more approach known as hybrid (sandwich) testing, which combines the top-down and bottom-up approaches.


Functional Testing:
It is a type of software testing which is used to verify the functionality of the software application, i.e. whether
each function is working according to the requirement specification. In functional testing, each function is
tested by giving it input values, determining the output, and verifying the actual output against the expected
value. Functional testing is performed as black-box testing, which is carried out to confirm that the
functionality of an application or system behaves as expected. It is done to verify the functionality
of the application.

Functional testing is also called black-box testing because it focuses on the application specification rather
than the actual code; the tester tests the program's behaviour rather than its internal structure.

Goal of functional testing

The purpose of functional testing is to check the primary entry functions, the essential usable functions,
and the flow of the screens/GUI. Functional testing also checks that suitable error messages are displayed
so that the user can easily navigate through the application.

What is the process of functional testing?

Testers follow the following steps in the functional testing:

o The tester verifies the requirement specification of the software application.


o After analysing the requirement specification, the tester prepares a test plan.
o After planning the tests, the tester designs the test cases.
o After designing the test cases, the tester prepares the traceability matrix document.
o The tester executes the test case design.
o Coverage is analysed to examine which areas of the application have been tested.
o Defect management is done to manage defect resolution.
What to test in functional testing? Explain

The main objective of functional testing is checking the functionality of the software system. It
concentrates on:

o Basic Usability: Functional testing involves usability testing of the system. It checks whether
a user can navigate freely, without any difficulty, through the screens.
o Accessibility: Functional testing checks the accessibility of the functions.
o Mainline functions: It focuses on testing the main features.
o Error Conditions: Functional testing is used to check error conditions, i.e. whether
appropriate error messages are displayed.

Explain the complete process to perform functional testing.


There are the following steps to perform functional testing:

o Understand the software requirements.


o Identify the test input data.
o Compute the expected outcome with the selected input values.
o Execute the test cases.
o Compare the actual and the expected results.

Explain the types of functional testing.

The main objective of functional testing is to test the functionality of the component.

Functional testing is divided into multiple parts.

Here are the following types of functional testing.


Unit Testing: Unit testing is a type of software testing where the individual units or components of the
software are tested. Unit testing examines the different parts of the application; through unit testing, a part
of functional testing is also done, because unit testing ensures that each module is working correctly.

Unit testing is done by the developer, in the development phase of the application.
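A minimal unit-test sketch using Python's built-in unittest module; the add() function is a hypothetical unit used only for illustration.

import unittest

# The individual unit under test (hypothetical example).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()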

Smoke Testing: Smoke testing is functional testing that covers only the basic (core) functionality of the
system. Smoke testing is also known as "Build Verification Testing." It aims
to ensure that the most important functions work.

For example, smoke testing verifies that the application launches successfully and that the GUI is
responsive. A minimal sketch of such a check follows.
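The sketch below performs a couple of very shallow build-verification checks: the build starts and the most important screen responds. The App class and its methods are assumed placeholders, not a real framework.

# Hypothetical application object exposing just enough for a smoke check.
class App:
    def launch(self):
        return True  # the build starts without crashing

    def load_login_page(self):
        return {"status": 200, "fields": ["username", "password", "submit"]}

def test_smoke_build_verification():
    app = App()
    # Only the most important functions are touched; no deep scenarios.
    assert app.launch() is True
    page = app.load_login_page()
    assert page["status"] == 200
    assert "submit" in page["fields"]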

Sanity Testing: Sanity testing verifies that an entire high-level business scenario is working correctly. It is
done to check new functionality and bug fixes. Sanity testing is a little more advanced than smoke testing.

Regression Testing: This type of testing concentrates on making sure that code changes do not have side
effects on the existing functionality of the system. When a bug is fixed in the system, regression testing
checks whether all the other parts still work, i.e. whether the fix has had any impact on the rest of the system.

Integration Testing: Integration testing combines individual units and tests them as a group. The purpose
of this testing is to expose faults in the interaction between the integrated units.

Developers and testers perform integration testing.

White box testing: White box testing is also known as clear box testing, code-based testing, structural
testing, extensive testing, glass box testing, and transparent box testing. It is a software testing method in
which the internal structure/design/implementation of the item being tested is known to the tester.

White box testing needs an analysis of the internal structure of the component or system.

Black box testing: It is also known as behavioural testing. In this testing, the internal structure/design/
implementation is not known to the tester. This type of testing is functional testing; it is called black-box
testing because the tester cannot see the internal code.

For example, a tester who has no knowledge of the internal structure of a website tests its web pages
through the web browser, providing input and verifying the output against the expected outcome.
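A black-box style sketch of that example: the test supplies input only through the public interface and compares the visible output with the expected result, with no knowledge of the internal code. The URL is a placeholder and the third-party requests library is assumed to be installed.

import requests  # third-party HTTP client, assumed to be available

def test_login_page_black_box():
    # The tester only knows the public URL (a placeholder here), not the code.
    response = requests.get("https://app-under-test.example/login")
    # Output is verified purely against the expected, externally visible result.
    assert response.status_code == 200
    assert "Username" in response.text and "Password" in response.text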

User acceptance testing: It is a type of testing performed by the client to certify that the system meets the
requirements. User acceptance testing is the final phase of testing before releasing the software to the
market or the production environment. UAT is a kind of black-box testing in which two or more end users
are involved.

Retesting: Retesting is a type of testing performed to check that the test cases which failed in the
previous execution pass after the defects have been fixed. Usually, a tester raises a bug when they find
it while testing the product or its components. The bug is allocated to a developer, who fixes it. After
fixing, the bug is assigned back to the tester for verification. This is known as retesting.

Database Testing: Database testing is a type of testing which checks the schema, tables, triggers, etc. of
the database under test. Database testing may involve creating complex queries to load/stress test the
database and check its responsiveness. It checks the data integrity and consistency.

Example: Let us consider a banking application in which a user makes a transaction. From the database
testing point of view, the following things are important:

o The application stores the transaction information in the application database and displays it
correctly to the user.
o No information is lost in this process.
o The application does not keep information about partially performed or aborted operations.
o User information is not accessible to unauthorized individuals. A small sketch of such checks follows.
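The sketch below illustrates the first three checks using Python's built-in sqlite3 module as a stand-in for the real banking database; the table names, columns, and data are assumptions for illustration.

import sqlite3

def test_transaction_is_stored_and_consistent():
    # In-memory SQLite database standing in for the application database.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    cur.execute("CREATE TABLE transactions (sender TEXT, receiver TEXT, amount INTEGER)")
    cur.executemany("INSERT INTO accounts VALUES (?, ?)", [("P", 1000), ("Q", 500)])

    # The transfer is performed inside a single database transaction, so a
    # partially performed or aborted operation is never committed.
    with conn:
        cur.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'P'")
        cur.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'Q'")
        cur.execute("INSERT INTO transactions VALUES ('P', 'Q', 200)")

    # Data integrity: the transaction is stored and no money was lost overall.
    assert cur.execute("SELECT COUNT(*) FROM transactions").fetchone()[0] == 1
    total = cur.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
    assert total == 1500
    conn.close()
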
Ad-hoc testing: Ad-hoc testing is an informal testing type whose aim is to break the system. This type of
software testing is an unplanned activity; it does not follow any test design technique to create the test cases. Ad-hoc
testing is done randomly on any part of the application and does not follow any structured way of testing.

Recovery Testing: Recovery testing is used to determine how well an application can recover from crashes,
hardware failures, and other problems. The purpose of recovery testing is to verify the system's ability to
recover from various points of failure.

Static Testing: Static testing is a software testing technique by which we can check for defects in the
software without actually executing it. Static testing is done in the early stages of development because it
is easier to find failures early. Static testing is used to detect mistakes that may not be found by dynamic
testing.

Why we use static testing?

Static testing helps to find errors in the early stages, which reduces the development timescale. It reduces
testing cost and time, and it also improves development productivity.

Component Testing: Component testing is a type of software testing in which each component is tested
separately, without integrating it with the other parts. Component testing is a type of black-box testing and
is also referred to as unit testing, program testing, or module testing.

Grey Box Testing: Grey box testing is defined as a combination of both white box and black-box testing.
It is a testing technique performed with limited information about the internal functionality of the system.

What are the functional testing tools?

Functional testing can also be executed with various tools, apart from manual testing. These tools simplify
the process of testing and help to get accurate and useful results.

Tool selection is a significant, top-priority decision, made and specified before the
development process begins.

Commonly used tools for functional testing include Selenium, QTP/UFT, JUnit, SoapUI, and Watir.

What are the advantages of Functional Testing?

Advantages of functional testing are:


o It helps to produce a defect-free product.
o It ensures that the customer is satisfied.
o It ensures that all requirements are met.
o It ensures the proper working of all the functionality of an application/software/product.
o It ensures that the software/product works as expected.
o It ensures security and safety.
o It improves the quality of the product.

Example: Here, we take an example of banking software. Suppose money is transferred from bank A to
bank B, and bank B does not receive the correct amount, or a fee is applied wrongly, or the money is not
converted into the correct currency, or the transfer is recorded incorrectly, or bank A does not receive statement
advice from bank B that the payment has been received. These issues are critical and can be avoided by
proper functional testing.

What are the disadvantages of functional testing?

Disadvantages of functional testing are:

o Functional testing can miss critical and logical errors in the system.
o This testing is not a guarantee that the software is ready to go live.
o The possibility of conducting redundant testing is high in functional testing.
What is Acceptance Testing?

User Acceptance Testing (UAT) is a type of testing performed by the end user or the client to
verify/accept the software system before moving the software application to the production
environment. UAT is done in the final phase of testing, after functional, integration, and system
testing are completed.

Purpose of UAT
The main Purpose of UAT is to validate end to end business flow. It does not focus on cosmetic
errors, spelling mistakes or system testing. User Acceptance Testing is carried out in a separate
testing environment with production-like data setup. It is kind of black box testing where two or
more end-users will be involved.

Who Performs UAT?


 Client
 End users
Need of User Acceptance Testing
The need for user acceptance testing arises once the software has undergone unit, integration, and system
testing, because developers might have built the software based on their own understanding of the
requirements document, and further changes required during development may not have been effectively
communicated to them. So, to test whether the final product is accepted by the client/end user, user
acceptance testing is needed.

 Developers code software based on the requirements document, which reflects their "own"
understanding of the requirements and may not actually be what the client needs from the
software.
 Requirements changes during the course of the project may not be communicated effectively
to the developers.

Acceptance Testing and V-Model


In the V-Model, user acceptance testing corresponds to the requirements phase of the Software
Development Life Cycle (SDLC).

Prerequisites of User Acceptance Testing:


Following are the entry criteria for User Acceptance Testing:

 Business Requirements must be available.


 Application Code should be fully developed
 Unit Testing, Integration Testing & System Testing should be completed
 No Showstopper, High, or Medium severity defects open in the System Integration Test phase;
only cosmetic errors are acceptable before UAT
 Regression Testing should be completed with no major defects
 All the reported defects should be fixed and tested before UAT
 Traceability matrix for all testing should be completed
 UAT Environment must be ready
 Sign off mail or communication from System Testing Team that the system is ready for UAT
execution

How to do UAT Testing


UAT is done by the intended users of the system or software. This type of software testing
usually happens at the client location and is known as Beta Testing. Once the entry criteria for
UAT are satisfied, the following tasks need to be performed by the testers:
UAT Process

 Analysis of Business Requirements


 Creation of UAT test plan
 Identify Test Scenarios
 Create UAT Test Cases
 Preparation of Test Data(Production like Data)
 Run the Test cases
 Record the Results
 Confirm business objectives

Step 1) Analysis of Business Requirements


One of the most important activities in the UAT is to identify and develop test scenarios.
These test scenarios are derived from the following documents:

 Project Charter
 Business Use Cases
 Process Flow Diagrams
 Business Requirements Document(BRD)
 System Requirements Specification(SRS)

Step 2) Creation of UAT Plan:


The UAT test plan outlines the strategy that will be used to verify and ensure an application meets its
business requirements. It documents entry and exit criteria for UAT, Test scenarios and test cases
approach and timelines of testing.
Step 3) Identify Test Scenarios and Test Cases:
Identify the test scenarios with respect to high-level business process and create test cases with clear test
steps. Test Cases should sufficiently cover most of the UAT scenarios. Business Use cases are input for
creating the test cases.

Step 4) Preparation of Test Data:


It is best advised to use live (production-like) data for UAT. The data should be scrambled for privacy and
security reasons, and the tester should be familiar with the database flow. A small sketch of such scrambling is given below.
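The sketch below masks personally identifiable fields in a production-like record before it is used in UAT. The field names (name, account, balance) and the masking rules are assumptions for illustration, not a prescribed standard.

import hashlib

def scramble_record(record):
    """Mask personally identifiable fields while keeping the data realistic."""
    masked = dict(record)
    # Replace the name with a deterministic pseudonym.
    masked["name"] = "user_" + hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    # Keep only the last four digits of the account number.
    masked["account"] = "****" + record["account"][-4:]
    return masked

if __name__ == "__main__":
    production_row = {"name": "Sample User", "account": "123456789012", "balance": 15000}
    print(scramble_record(production_row))
    # -> the name becomes a pseudonym and the account shows only '****9012'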

Step 5) Run and record the results:


Execute test cases and report bugs if any. Re-test bugs once fixed. Test Management tools can be used for
execution.

Step 6) Confirm Business Objectives met:


Business Analysts or UAT testers need to send a sign-off mail after UAT. After sign-off, the
product is good to go to production. The deliverables of UAT are the Test Plan, UAT Scenarios and Test
Cases, Test Results, and the Defect Log.

Exit criteria for UAT:


Before moving into production, following needs to be considered:

 No critical defects open


 Business process works satisfactorily
 UAT Sign off meeting with all stakeholders

Qualities of UAT Testers:


A UAT tester should possess good knowledge of the business. He or she should be independent and think like
an unfamiliar user of the system. The tester should be an analytical and lateral thinker and should combine all
sorts of data to make the UAT successful.

Testers, Business Analysts, or Subject Matter Experts who understand the business requirements or
flows can prepare tests and data that are realistic to the business.
UAT Tools
There are several tools in the market used for User acceptance testing and some are listed for reference:

FitNesse tool: It is a Java-based tool used as a testing engine. It is easy to create tests and record results in a table.
Users of the tool enter formatted input, and tests are created automatically. The tests are then executed,
and the output is returned to the user.

Watir: It is a toolkit used to automate browser-based tests during user acceptance testing. Ruby is the
programming language used, and it drives the browser (classically Internet Explorer) to execute the tests.
