Unit - 3
Features of Software Testing:
1. Static testing techniques do not demonstrate that the software is operational or that a function of the software is working;
2. They check the software product at each SDLC stage for conformance with the required specifications or standards. Requirements, design specifications, test plans, source code, user's manuals and maintenance procedures are some of the items that can be statically tested.
3. Static testing has proved to be a cost-effective technique of error detection.
4. Another advantage of static testing is that a bug is found at its exact location, whereas a bug found in dynamic testing gives no indication of the exact source code location.
Types of Static Testing
--> Software Inspections
--> Walkthroughs
--> Technical Reviews
Inspections:
--> The inspection process is an in-process manual examination of an item to detect bugs.
--> The inspection process is carried out by a group of peers. The group of peers first inspects the product at the individual level. After this, they discuss the potential defects of the product observed in a formal meeting.
--> It is a very formal process to verify a software product. The documents which can be inspected are the SRS, SDD, code and test plan.
--> The inspection process involves the interaction of the following elements:
a) Inspection steps  b) Roles for participants  c) Item being inspected
Inspection Team:
--> Author / Owner / Producer: The programmer or designer responsible for producing the program or document.
--> Inspector: A peer member of the team, i.e. he is not a manager or supervisor. He is not directly related to the product under inspection and may be concerned with some other products.
--> Recorder: One who records all the results of the inspection meeting.
Inspection Process:
Planning: During this phase the following is executed:
--> The product to be inspected is identified.
--> A moderator is assigned.
--> The objective of the inspection is stated, i.e. whether the inspection is to be conducted for defect detection or something else.
During planning, the moderator performs the following activities:
-- Assures that the product is ready for inspection.
-- Selects the inspection team and assigns their roles.
-- Schedules the meeting venue and time.
-- Distributes the inspection material like the item to be inspected, checklists etc.
Overview: In this stage, the inspection team is provided with the background information for inspection. The author presents the rationale of the product, its relationship to the rest of the products being developed, its function and intended use, and the approach used to develop it.
Individual Preparation: After the overview, the reviewers individually prepare themselves for the inspection process by studying the documents provided to them in the overview session. They point out potential errors or problems found and record them in a log. This log is then submitted to the moderator. The moderator compiles the logs of the different members and gives a copy of this compiled list to the author of the inspected item.
Inspection Meeting: Once all the initial preparation is complete, the actual inspection meeting can start. The inspection meeting starts with the author of the inspected item, who has created it. The author first discusses every issue raised by the different members in the compiled log file. After the discussion, all the members arrive at a consensus on whether the issues pointed out are in fact errors and, if they are errors, whether they should be admitted by the author.
Follow-Up: It is the responsibility of the moderator to check that all the bugs found in the last meeting have been resolved. The document is then approved for release.
Benefits of Inspection Process:
--> Bug Reduction: According to a report on the inspection process at IBM, the number of bugs per thousand lines of code was reduced by two-thirds.
--> Bug Prevention: Based on the experience of previous inspections, analysis can be made for future inspections or projects, thereby preventing the bugs which have appeared earlier.
--> Productivity: Since all phases of the SDLC may be inspected without waiting for code development and its execution, the cost of finding bugs decreases and productivity increases.
--> Real-time Feedback to Software Engineers: Developers find out the type of mistakes they make and what the error density is. Since they get this feedback in the early stage of development, they may improve their capability.
--> Quality Improvement: The direct consequence of static testing is also an improvement in the quality of the final product.
--> Project Management
--> Checking Coupling and Cohesion
--> Learning through Inspection
--> Process Improvement
Effectiveness of Inspection Process:
In an analysis, the inspection process was found to be effective as compared to structural testing because the inspection process alone found 52% of the errors. So the error detection efficiency can be specified as:

                                    Errors found by inspection
Error detection efficiency = -------------------------------------------- * 100
                             Total errors in the item before inspection
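For instance (hypothetical numbers), if an item contained 50 errors in total and the inspection alone found 26 of them, the error detection efficiency would be (26 / 50) * 100 = 52%.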
Variants of Inspection Process:
Reading Techniques:
A reading technique can be defined as a series of steps or procedures whose purpose is to guide an inspector to acquire a deep understanding of the inspected software product. Thus a reading technique can be regarded as a mechanism for the individual inspector to detect defects in the inspected product. The various reading techniques are:
Scenario-Based Reading: Different methods developed based on scenario-based reading are:
--> Perspective based Reading: The software item should be inspected from the perspective of different stakeholders. Inspectors of an inspection team have to check the software quality as well as the software quality factors of a software artifact from different perspectives.
--> Usage based Reading: This method is applied in design inspections. Design documentation is inspected based on use cases, which are documented in the requirements specification.
--> Abstraction driven Reading: This method is designed for code inspections. In this method, an inspector reads a sequence of statements in the code and abstracts the functions these statements compute.
--> Task driven Reading: This method is also for code inspections. In this method, the inspector has to create a data dictionary, a complete description of the logic, and a cross-reference between the code and the specifications.
--> Function Point based Scenarios: In this method, scenarios are designed around the function points of the inspected requirements documents [103]. The scenarios designed around function points are known as Function Point Scenarios. A Function Point Scenario consists of questions and directs the focus of an inspector to a specific function-point item within the inspected requirements document.
Structured Walkthroughs:
--> It is a less formal and less rigorous technique as compared to inspection. The very common term used in the literature for static testing is inspection, but that is a very formal process. If you want a less formal process, with none of the constraints of an organized meeting, then walkthroughs are a good option.
Technical Review:
--> A review is similar to an inspection or walkthrough, except that the review team also includes management. Therefore, it is considered a higher-level technique than an inspection or walkthrough.
Validation Activities
Unit Testing:
A unit is the smallest testable part of an application, like functions, classes, procedures and interfaces. Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. Unit tests are basically written and executed by software developers to make sure that the code meets its design and requirements and behaves as expected. The goal of unit testing is to segregate each part of the program and test that the individual parts are working correctly. This means that for any function or procedure, when a set of inputs is given, it should return the proper values.
Drivers:
A driver is a dummy piece of code that simulates a calling module which is not yet ready.
For example: When we have modules B and C ready, but module A, which calls functions from modules B and C, is not ready, the developer will write a dummy piece of code for module A which will pass values to modules B and C. This dummy piece of code is known as a driver.
Stubs:
Stubs are dummy modules, also known as "called programs", which are used when subordinate programs are under construction. The module under testing may also call some other module which is not ready at the time of testing. Therefore, these modules need to be simulated for testing. In most cases, dummy modules, instead of the actual modules which are not ready, are prepared for these subordinate modules. These dummy modules are called stubs.
Assume you have 3 modules: Module A, Module B and Module C. Module A is ready and we need to test it, but Module A calls functions from Modules B and C which are not ready, so the developer will write a dummy module which simulates B and C and returns values to Module A. This dummy module code is known as a stub.
Benefits of using Stubs and Drivers:
-- Stubs allow the programmer to call a method in the code being developed, even if the method does not have the desired behaviour yet.
-- By using stubs and drivers effectively, we can cut down our total debugging effort by testing small parts of a program individually, helping us to narrow down problems before they expand.
-- Stubs and drivers can also be an effective tool for demonstrating progress in a business environment.
Example:

#include <stdio.h>

/* main() calls calsum(), caldiff() and calmul() */
main()
{
    int a, b, sum, diff, mul;
    scanf("%d %d", &a, &b);
    sum = calsum(a, b);
    diff = caldiff(a, b);
    mul = calmul(a, b);
    printf("The sum is %d", sum);
}

calsum(int x, int y)
{
    int d;
    d = x + y;
    return d;
}
Suppose the main() module or the caldiff() and calmul() modules are not ready; then the driver and stub modules can be designed as follows:
Solution:
--> Driver for main() module:

driver_main()
{
    int a, b, sum;
    scanf("%d %d", &a, &b);
    sum = calsum(a, b);
    printf("The sum is %d", sum);
}

--> Stub for caldiff() module:

caldiff(int x, int y)
{
    printf("Difference calculating module");
    return 0;
}
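Only one stub is shown above; as an additional hedged sketch (assuming calmul() is intended to multiply two integers, which the original does not spell out), a stub for the calmul() module could be written in the same way, returning a fixed dummy value until the real module is ready:

calmul(int x, int y)
{
    printf("Multiplication calculating module");   /* placeholder behaviour only */
    return 0;                                      /* fixed dummy value */
}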
Integration Testing:
Once all the individual units are created and tested, we start combining those "unit tested" modules and start doing the integrated testing. So the meaning of integration testing is quite straightforward: integrate/combine the unit tested modules one by one and test the behaviour as a combined unit.
The main function or goal of integration testing is to test the interfaces between the units/modules. The individual modules are first tested in isolation. Once the modules are unit tested, they are integrated one by one, till all the modules are integrated, to check the combinational behaviour and validate whether the requirements are implemented correctly or not.
Integration Testing is necessary for the following reasons:
-- It exposes inconsistency between the modules such as improper call or return sequences.
-- Data can be lost across an interface.
-- One module, when combined with another module, may not give the desired result.
-- Data types and their valid ranges may mismatch between the modules.
The approaches to integration testing are:
--> Non-Incremental Integration Testing / Big bang integration
--> Incremental Integration Testing
-- Top-Down Integration
-- Bottom-up Integration
--> Practical Approach for Integration Testing / Sandwich integration
Briefly, big bang groups the whole system and tests it in a single test phase. Top-down starts at the root of the tree and slowly works to the lower levels of the tree. Bottom-up mirrors top-down: it starts at the lower-level implementation of the system and works towards the main program. Sandwich is an approach that combines both top-down and bottom-up.
Non-Incremental Integration Testing / Big bang integration:
-- This is one of the easiest approaches to apply in integration testing.
-- Here we treat the whole system as a subsystem and test it in a single test phase.
-- Normally this means simply integrating all the modules, compiling them all at once and then testing the resulting system.
-- This approach requires few resources to execute, as we do not need to identify critical components (like interactions, paths between the modules) nor require extra coding for the "dummy modules".
-- This approach may be used for very small systems; however, it is still not recommended because it is not systematic.
-- In larger systems, the low resource requirement of executing this testing is easily offset by the resources required to locate a problem when it occurs.
In summary, big bang integration has the following characteristics:
• Considers the whole system as a subsystem
• Tests all the modules in a single test session
• Only one integration testing session
Advantages:
• Low resource requirement
• Does not require extra coding
Disadvantages:
• Not systematic
• Hard to locate problems
• Hard to create test cases
Incremental Integration Testing:
In incremental integration testing, the developers integrate the modules one by one using stubs or drivers to uncover the defects. This approach is known as incremental integration testing. To the contrary, big bang is another integration testing technique, where all the modules are integrated in one shot.
Incremental Integration Testing is beneficial for the following reasons:
-- Incremental integration testing's greatest advantage is that the defects are found early, in a smaller assembly, when it is relatively easy to detect their root cause.
-- A disadvantage is that it can be time-consuming, since stubs and drivers have to be developed for performing these tests.
Types of Incremental integration testing:
--> Top-Down integration testing
--> Bottom-up integration testing
Top-Down Integration Testing:
-- In top-down integration, we start with the target node at the root of the functional decomposition tree and work toward the leaves.
-- Stubs are used to replace the children nodes attached to the target node.
-- A test phase consists of replacing one of the stub modules with the real code and testing the resulting subsystem.
-- If no problem is encountered then we do the next test phase.
-- If all the children were replaced by real code at least once and meet the requirements, then we move down to the next level.
-- Now we can replace the higher-level tested modules with real code and continue the integration testing.
-- For top-down integration the number of integration testing sessions is: nodes − leaves + edges.
Top-down integration has the drawback of requiring stubs:
-- While stubs are simpler than the real code, it is not straightforward to write them; the emulation must be complete and realistic, that is, the results of test cases run on the stub should match the results on the real code.
-- Being throw-away code, stubs do not reach the final product nor do they increase the functionality of the software; thus they are extra programming effort without a direct reward.
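As a hypothetical worked example (the decomposition tree from the figure is not reproduced here), for a tree with 8 nodes, 5 leaves and 7 edges, the formula above gives 8 − 5 + 7 = 10 top-down integration testing sessions.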
Depth First Integration: In this type, all modules on a major control path of the design hierarchy are integrated first. In the figure shown above, modules A, B and D are integrated first; next modules A, C, E, F, G and H are integrated.
Breadth First Integration: In this type, modules directly subordinate at each level, moving across the design hierarchy horizontally, are integrated first. In the figure shown above, modules B and C are integrated first; next modules D, E and F, and at last modules G and H are integrated.
Bottom-up Integration Testing:
-- Bottom-up integration starts at the opposite end of the functional decomposition tree; instead of starting at the main program we start at the lower-level implementation of the software.
-- By moving in the opposite direction, the parent nodes are replaced by throw-away code instead of the children.
-- This throw-away code is also known as a driver.
-- This approach allows us to start working with the simpler and lower levels of the implementation, allowing us to create testing environments more easily because of the simpler outputs of those modules.
-- This also allows us to handle the exceptions more easily.
-- Conversely, we do not have an early prototype, thus the main program is the last to be tested. If there is a design error, it will only be identified at a later stage, which implies a high error correction cost.
-- Bottom-up integration is commonly used for object-oriented systems, real-time systems and systems with strict performance requirements.
-- For bottom-up integration the number of integration testing sessions is: nodes − leaves + edges.
Practical Approach for Integration Testing / Sandwich integration:
-- Sandwich integration combines top-down integration and bottom-up integration.
-- The main concept is to maximize the advantages of top-down and bottom-up while minimizing their weaknesses.
-- Sandwich integration uses a mixed approach where we use stubs at the higher levels of the tree and drivers at the lower levels (Figure).
-- The testing direction starts from both sides of the tree and converges to the centre, thus the term sandwich.
-- This allows us to test both the top and bottom layers in parallel and decreases the number of stubs and drivers required in integration testing.
Advantages:
-- Top and bottom layers can be done in parallel
-- Fewer stubs and drivers needed
-- Easy to construct test cases
-- Better coverage control
-- Integration is done as soon as a component is implemented
Disadvantages:
-- Still requires throw-away code programming
-- Partial big bang integration
-- Hard to isolate problems
Call-Graph based Integration:
A call graph is a directed graph, where the nodes are either modules or units, and a directed edge from one node to another node means one module has called another module. The call graph can be captured in a matrix form which is known as the adjacency matrix.
There are two types of integration testing based on the call graph:
--> Pairwise Integration
--> Neighbourhood Integration
Pairwise Integration:
-- In pair-wise integration, we eliminate the need for stubs and drivers by using the real code instead.
-- Using the real code for all the modules at once would be similar to big bang, which has a problem-isolation problem due to the large number of modules being tested at once.
-- By pairing up the modules using the edges, we will have a number of test sessions equal to the number of edges that exist in the call graph.
-- Since the edges correspond to functions or procedures invoked, in a standard system this implies many test sessions.
-- For pair-wise integration the number of integration testing sessions is the number of edges.
Neighbourhood Integration:
-- While pair-wise integration eliminates the need for stubs and drivers, it still requires many test sessions.
-- As an attempt at improving on pair-wise, neighbourhood integration requires fewer test sessions.
-- In neighbourhood integration, we create a subsystem for a test session by taking a target node and grouping all the nodes near it.
-- "Near" is defined as the nodes that are linked to the target node, that is, the immediate predecessors or successors of it.
-- By doing this we will be able to reduce considerably the number of test sessions required.
-- The total number of test sessions in neighbourhood integration can be calculated as:
Neighbourhoods = nodes − sink nodes = 20 − 10 = 10 (for the call graph in the figure),
where a sink node is an instruction in a module at which execution terminates.
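As a minimal sketch (the call graph below is hypothetical, not the 20-node graph from the missing figure, and a sink node is treated here as a module with no outgoing calls), the two session counts described above can be computed from an adjacency matrix:

#include <stdio.h>

#define N 5   /* number of modules in the hypothetical call graph */

/* adj[i][j] = 1 means module i calls module j (a directed edge). */
int adj[N][N] = {
    {0, 1, 1, 0, 0},   /* module 0 calls 1 and 2 */
    {0, 0, 0, 1, 0},   /* module 1 calls 3       */
    {0, 0, 0, 0, 1},   /* module 2 calls 4       */
    {0, 0, 0, 0, 0},   /* module 3 calls nothing */
    {0, 0, 0, 0, 0}    /* module 4 calls nothing */
};

int main(void)
{
    int edges = 0, sinks = 0;

    for (int i = 0; i < N; i++) {
        int out = 0;
        for (int j = 0; j < N; j++) {
            edges += adj[i][j];
            out   += adj[i][j];
        }
        if (out == 0)          /* no outgoing calls: a sink node */
            sinks++;
    }

    printf("Pair-wise sessions     = edges              = %d\n", edges);
    printf("Neighbourhood sessions = nodes - sink nodes = %d\n", N - sinks);
    return 0;
}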
Path Based Integration:
-- By moving to path-based integration we will be approaching integration testing from a new direction. Here we try to combine both the structural and the functional approach in path-based integration.
-- Finally, instead of testing the interfaces (which are structural), we will be testing the interactions (which are behavioural).
-- Here, when a unit is executed, a certain path of source statements is traversed.
-- When this unit calls source statements from another unit, the control is passed from the calling unit to the called unit.
-- For integration testing we treat these unit calls as an exit followed by an entry. We need to understand the following definitions for path-based integration:
Function Testing:
Function testing is defined as "the process of attempting to detect discrepancies between the functional specifications of a software and its actual behaviour". When an integrated system is tested, all its specified functions and external interfaces are tested on the software. Every functionality of the system specified in the functions is tested according to its external specifications. The function test must determine if each component or business event:
-- Performs in accordance with the specifications,
-- Responds correctly to all conditions that may be presented by incoming events / data,
-- Moves data correctly from one business event to the next (including data stores),
-- Initiates business events in the order required to meet the business objectives of the system.
An effective function test cycle must have a defined set of processes and deliverables. The primary processes / deliverables for requirements-based function testing are:
Test Planning: During planning, the test leader, with assistance from the test team, defines the scope, schedule and deliverables for the function test cycle.
Traceability matrix formation: Test cases need to be traced / mapped back to the appropriate requirement. A function coverage matrix is prepared. This matrix is a table listing the specific functions to be tested, the priority for testing each function, and the test cases that contain tests for each function.
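As an illustration, a hypothetical function coverage matrix (the function names, priorities and test case IDs below are invented for the example) could look like this:

Function            Priority    Test Cases
Login               High        TC_01, TC_02
Amount Transfer     High        TC_03, TC_04, TC_05
Print Statement     Medium      TC_06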
Test case execution: As in all the phases of testing, an appropriate set of test cases needs to be executed and the results of those test cases recorded.
System Testing:
-- System testing is the type of testing done to check the behaviour of a complete and fully integrated software product based on the software requirements specification (SRS) document.
-- The main focus of this testing is to evaluate Business / Functional / End-user requirements.
-- This is black box type testing where the external working of the software is evaluated with the help of requirement documents, and it is totally based on the user's point of view.
-- This type of testing does not require knowledge of the internal design, structure or code.
-- This testing is to be carried out only after system integration testing is completed, where both functional and non-functional requirements are verified.
-- In integration testing, testers concentrate on finding bugs/defects in the integrated modules. But in software system testing, testers concentrate on finding bugs/defects based on the software application behaviour, the software design and the expectations of the end user.
Categories of System Testing:
Recovery Testing:
Recovery is just like the exception handling feature of a programming language. Recovery testing is a type of non-functional testing. It is done in order to check how fast and how well the application can recover after it has gone through any type of crash or hardware failure. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.
Thus Recovery Testing is "the activity of testing how well the software is able to recover from crashes, hardware failures, and other similar problems".
Some examples of recovery testing are:
-- When an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection was broken.
-- Restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them or not.
Beizer proposes that testers should work on the following areas during recovery testing:
Restart: Testers must ensure that all transactions have been reconstructed correctly and that all devices are in proper states.
Switchover: Recovery can also be done if there are standby components; in case of failure of one component, the standby takes over control.
Security Testing:
-- It is a type of non-functional testing.
-- Security testing is basically a type of software testing that is done to check whether the application or the product is secured or not.
-- It checks to see if the application is vulnerable to attacks, or if anyone can hack the system or log in to the application without any authorization.
-- It is a process to determine that an information system protects data and maintains functionality as intended.
-- Security testing is performed to check whether there is any information leakage, in the sense of protecting the application by encryption or by using a wide range of software, hardware, firewalls etc.
-- Software security is about making software behave correctly in the presence of a malicious attack.
Types of Security Requirements:
-- Security requirements should be associated with each functional requirement.
-- In addition to security concerns that are directly related to particular requirements, a software project has security issues that are global in nature.
How to perform security testing:
Testers must use a risk-based approach, grounded in both the system's architectural reality and the attacker's mindset, to gauge software security adequately. By identifying risks and the potential loss associated with those risks in the system, and creating tests driven by those risks, the tester can properly focus on areas of code in which an attack is likely to succeed.
Elements of Security Testing:
-- Confidentiality
-- Integrity
-- Authentication
-- Availability
-- Authorization
-- Non-repudiation
Performance Testing:
-- Software performance testing is a means of quality assurance (QA).
-- It involves testing software applications to ensure they will perform well under their expected workload.
-- The features and functionality supported by a software system are not the only concern. A software application's performance, like its response time, does matter.
-- The goal of performance testing is not to find bugs but to eliminate performance bottlenecks.
-- Performance testing is done to provide stakeholders with information about their application regarding speed, stability and scalability.
-- More importantly, performance testing uncovers what needs to be improved before the product goes to market.
-- Without performance testing, software is likely to suffer from issues such as running slow while several users use it simultaneously, inconsistencies across different operating systems, and poor usability.
-- Performance testing will determine whether or not the software meets speed, scalability and stability requirements under expected workloads.
-- Applications sent to market with poor performance metrics due to non-existent or poor performance testing are likely to gain a bad reputation and fail to meet expected sales goals.
-- Also, mission-critical applications like space launch programs or life-saving medical equipment should be performance tested to ensure that they run for a long period of time without deviations.
Load Testing:
-- Load testing is a type of non-functional testing.
-- A load test is a type of software testing which is conducted to understand the behaviour of the application under a specific expected load.
-- Load testing is performed to determine a system's behaviour under both normal and peak conditions.
-- It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. E.g. if the number of users is increased, then how much CPU and memory will be consumed, and what are the network and bandwidth response times.
-- Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.
-- Load testing involves simulating real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously.
Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme workloads or when some of its hardware or software has been compromised.
-- The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.
Examples of load testing include:
Downloading a series of large files from the internet.
Running multiple applications on a computer or server simultaneously.
Assigning many jobs to a printer in a queue.
Subjecting a server to a large amount of traffic.
Writing and reading data to and from a hard disk continuously.
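As a minimal, hedged sketch of simulating simultaneous users (assuming POSIX threads are available; do_transaction() is a hypothetical stand-in for the real operation under load, not part of the original material):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define USERS 50                       /* hypothetical number of simultaneous users */

/* Hypothetical stand-in for the real transaction being load tested. */
static void do_transaction(void)
{
    volatile long work = 0;
    for (long i = 0; i < 1000000; i++)
        work += i;
}

/* Each thread plays the role of one user hitting the application. */
static void *user(void *arg)
{
    (void)arg;
    do_transaction();
    return NULL;
}

int main(void)
{
    pthread_t threads[USERS];
    time_t start = time(NULL);

    for (int i = 0; i < USERS; i++)
        pthread_create(&threads[i], NULL, user, NULL);
    for (int i = 0; i < USERS; i++)
        pthread_join(threads[i], NULL);

    printf("%d simulated users completed in about %.0f second(s)\n",
           USERS, difftime(time(NULL), start));
    return 0;
}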
Stress Testing:
It is a type of non-functional testing.
It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
It is a form of software testing that is used to determine the stability of a given system.
It puts greater emphasis on robustness, availability and error handling under a heavy load, rather than on what would be considered correct behaviour under normal circumstances.
The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
Thus "stress testing tries to break the system under test by overwhelming its resources in order to find the circumstances under which it will crash".
The areas that may be stressed in a system are: input transactions, disk space, output, communications, and interaction with users.
Usability Testing:
-- Usability testing is an essential element of quality assurance.
-- It is the measure of a product's potential to accomplish the goals of the user.
-- Usability testing is a method by which users of a product are asked to perform certain tasks in an effort to measure the product's ease of use, task time, and the user's perception of the experience.
-- This is seen as a unique usability practice because it provides direct input on how real users use the system.
-- Usability testing measures how well human-usable products fulfil the user's purpose.
-- The items which benefit from usability testing are web sites or web applications, documents, computer interfaces, consumer products and devices.
-- Usability testing assesses the usability of a particular object or group of objects, whereas general human-computer interaction studies try to formulate universal principles.
Usability characteristics against which testing is conducted are:
-- Ease of Use
-- Interface Steps
-- Response Time
-- Help System
-- Error Messages
Compatibility / Conversion / Configuration Testing:
-- Compatibility testing is non-functional testing to ensure customer satisfaction.
-- It is done to determine whether your software application or product is proficient enough to run on different browsers, databases, hardware, operating systems, mobile devices and networks.
-- The application could also be impacted by different versions, resolutions, internet speeds, configurations etc. Hence it is important to test the application in all possible manners to reduce failures and avoid the embarrassment of bug leakage.
-- As a non-functional test, compatibility testing is meant to endorse that the application runs properly in different browsers, versions, operating systems and networks.
-- Compatibility testing should always be performed in a real environment instead of a virtual environment. Test the compatibility of the application with different browsers and operating systems to guarantee 100% coverage.
Types of software compatibility testing:
Browser compatibility testing
Hardware
Networks
Mobile Devices
Operating System
Versions
Acceptance Testing:
-- After the system test has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing.
-- Acceptance testing is basically done by the user or customer, although other stakeholders may be involved as well.
-- The goal of acceptance testing is to establish confidence in the system.
-- Acceptance testing is most often focused on validation-type testing.
-- Thus "Acceptance Testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system or not."
-- Thus acceptance testing is designed to:
- Determine whether the software is fit for the user to use.
- Make users confident about the product.
- Determine whether a software system satisfies its acceptance criteria.
- Enable the buyer to determine whether to accept the system.
Types of Acceptance Testing:
--> Alpha Testing
--> Beta Testing
Alpha Testing:
-- Alpha testing is one of the most common software testing strategies used in software development. It is especially used by product development organizations.
-- This test takes place at the developer's site. Developers observe the users and note problems.
-- Alpha testing is testing of an application when development is about to complete. Minor design changes can still be made as a result of alpha testing.
-- Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers.
-- Alpha testing is the final testing before the software is released to the general public. It has two phases:
--> Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site.
--> Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
Entry Criteria for Alpha:
-- All features are complete / testable.
-- High bugs on the primary platform are fixed / verified.
-- 50% of medium bugs on the primary platforms are fixed / verified.
-- All features are tested on the primary platforms.
-- Performance has been measured / compared.
-- Alpha sites are ready for installation.
Exit criteria for Alpha:
-- Get responses / feedback from the customers.
-- Prepare a report of any serious bugs noticed.
-- Notify bug-fixing issues to developers.
Beta Testing:
-- In software development, a beta test is the second phase of software testing, in which a sampling of the intended audience tries the product out.
-- It is also known as field testing. It takes place at the customer's site. It sends the system to users who install it and use it under real-world working conditions.
-- Beta is the second letter of the Greek alphabet.
-- Originally, the term alpha test meant the first phase of testing in a software development process. The first phase includes unit testing, component testing, and system testing.
-- Beta testing can be considered "pre-release testing."
-- Beta testing is also sometimes referred to as user acceptance testing (UAT) or end user testing.
-- In this phase of software development, applications are subjected to real-world testing by the intended audience for the software.
-- The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.
Entry Criteria for Beta:
-- Positive responses from the alpha site.
-- Customer bugs reported in alpha testing have been addressed.
-- There are no fatal errors which can affect the functionality of the software.
-- Beta sites are ready for installation.
Exit criteria for Beta:
-- Get responses / feedback from the beta testers.
-- Prepare a report of all serious bugs.
-- Notify bug-fixing issues to developers.
Regression Testing
Progressive vs regressive testing, Regression testability, Objectives of regression testing, When is regression testing done?, Regression testing types, Regression testing techniques
Progressive Vs Regressive Testing:
-- All the test case design methods or testing techniques discussed till now are referred to as progressive testing or development testing.
-- The purpose of regression testing is to confirm that a recent program or code change has not adversely affected existing features.
-- Regression testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine.
-- This testing is done to make sure that new code changes do not have side effects on the existing functionalities.
-- It ensures that the old code still works once the new code changes are done.
Need of Regression Testing:
Change in requirements, and code is modified according to the requirement
New feature is added to the software
Defect fixing
Performance issue fix
Definition:
Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.
Regression Testability:
Regression testability refers to the property of a program, modification or test suite that lets it be effectively and efficiently regression-tested. We can classify a program as regression testable if most single-statement modifications to the program entail (involve) rerunning only a small proportion of the current test suite.
Objectives of Regression Testing:
-- It tests to check that the bug has been addressed: The first objective in bug-fix testing is to check whether the bug fixing has worked or not.
-- It finds other related bugs: Regression tests are necessary to validate that the system does not have any related bugs.
-- It tests to check the effect on other parts of the program: It may be possible that bug fixing has unwanted consequences on other parts of a program. Therefore, it is necessary to check the influence of changes in one part on the other parts of the program.
When is regression testing done?
Software Maintenance:
-- Corrective Maintenance: Changes made to correct a system after a failure has been observed.
-- Adaptive Maintenance: Changes made to achieve continuing compatibility with the target environment or other systems.
-- Perfective Maintenance: Changes made to improve or add capabilities.
-- Preventive Maintenance: Changes made to increase robustness, maintainability, portability, and other features.
Rapid Iterative Development: The extreme programming approach requires that a test be developed for each class and that this test be re-run every time the class changes.
Compatibility Assessment and Benchmarking: Some test suites are designed to be run on a wide range of platforms and applications to establish conformance with a standard or to evaluate time and space performance.
Regression Testing Types:
Bug-fix Regression: This testing is performed after a bug has been reported and fixed.
Side-Effect Regression / Stability Regression: It involves retesting a substantial part of the product. The goal is to prove that the changes have no detrimental effect on something that was working earlier.
Regression Testing Techniques:
There are different techniques for regression testing. They are:
--> Regression test selection technique: This technique attempts to reduce the time required to retest a modified program by selecting some subset of the existing test suite.
--> Test case prioritization technique: Regression test prioritization attempts to reorder a regression test suite so that those tests with the highest priority, according to some established criteria, are executed earlier in the regression testing process than those with lower priority. There are two types of prioritization:
(a) General Test Case Prioritization: For a given program P and test suite T, we prioritize the test cases in T that will be useful over a succession of subsequent modified versions of P, without any knowledge of the modified versions.
Selective Retest Technique:
The selective retest technique attempts to reduce the cost of testing by identifying the portions of P' (the modified version of program P) that must be exercised by the regression test suite. The following are the characteristic features of the selective retest technique:
--> It minimizes the resources required to regression test a new version.
--> It is achieved by minimizing the number of test cases applied to the new version.
--> It analyses the relationship between the test cases and the software elements they cover.
--> It uses the information about changes to select test cases.
Steps in the selective retest technique:
1. Select T', a subset of T, as the set of test cases to execute on P'.
2. Test P' with T', establishing the correctness of P' with respect to T'.
3. If necessary, create T'', a set of new functional test cases for P'.
4. Test P' with T'', establishing the correctness of P' with respect to T''.
5. Create T''', a new test suite and test execution profile for P', from T, T' and T''.
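A minimal sketch of how step 1 (selecting T' from T) might be automated, assuming we already know which modules were modified in P' and which modules each test case covers; the coverage matrix and modification list below are hypothetical:

#include <stdio.h>

#define TESTS   4    /* test cases in T        */
#define MODULES 3    /* modules of the program */

/* covers[t][m] = 1 if test case t exercises module m (hypothetical data). */
int covers[TESTS][MODULES] = {
    {1, 0, 0},
    {1, 1, 0},
    {0, 0, 1},
    {0, 1, 1}
};

/* modified[m] = 1 if module m was changed in the new version P'. */
int modified[MODULES] = {0, 1, 0};

int main(void)
{
    printf("Selected subset T':\n");
    for (int t = 0; t < TESTS; t++) {
        for (int m = 0; m < MODULES; m++) {
            if (modified[m] && covers[t][m]) {
                /* Test t traverses a modified module, so keep it in T'. */
                printf("  test case %d\n", t + 1);
                break;
            }
        }
    }
    return 0;
}

This corresponds to the "modification traversing test cases" selection criterion listed below.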
Strategy for Test Case Selection:
For large software systems, there may be thousands of test cases available in the test suite. When a change is introduced into the system for the next version, rerunning all the test cases is a costly and time-consuming task. Therefore, selecting a subset of test cases from the original test suite is necessary. But the use of multiple criteria should increase the code coverage. So an effective test case selection strategy is to be designed based on code coverage.
Selection criteria based on code:
--> Fault revealing test cases
--> Modification revealing test cases
--> Modification traversing test cases
Regression Test Selection Techniques:
Minimization Techniques: Minimization-based regression test selection techniques attempt to select minimal sets of test cases from T that yield coverage of modified or affected portions of P. For example, one technique uses systems of linear equations to express relationships between test cases and basic blocks (single-entry, single-exit sequences of statements in a procedure). The technique uses a 0-1 integer programming algorithm to identify a subset T' of T that ensures that every segment that is statically reachable from a modified segment is exercised by at least one test case in T' that also exercises the modified segment.
Dataflow Techniques: Dataflow-coverage-based regression test selection techniques select test cases that exercise data interactions that have been affected by modifications. For example, one technique requires that every definition-use pair that is deleted from P, new in P', or modified for P' be tested. The technique selects every test case in T that, when executed on P, exercised deleted or modified definition-use pairs, or executed a statement containing a modified predicate.
Ad Hoc / Random Techniques: When time constraints prohibit the use of a retest-all approach, but no test selection tool is available, developers often select test cases based on "hunches," or loose associations of test cases with functionality. Another simple approach is to randomly select a predetermined number of test cases from T.
Retest-All Technique: The retest-all technique simply reuses all existing test cases. To test P', the technique effectively "selects" all test cases in T.
EvaluatingRegressionTestSelectionTechnique:
Inclusiveness: Let M be a regression test selection technique. Inclusiveness measures the extent
towhich M chooses modification revealing tests from T for inclusion in T'. We define
inclusivenessrelativeto aparticular program,modified program, and test suite,as follows:
DEFINITION
Suppose T contains n tests that are modification revealing for P and P', and suppose M selects m
ofthesetests.Theinclusiveness ofM relative toP, P',andTis
1) the percentage given by the expression
(100(m/n))if n# 0 or2)100%if n=0.
For example, if T contains 50 tests of which eight are modification-revealing for P and P', and
Mselects two of these eight tests, then M is 25% inclusive relative to P, P', and T. If T contains
nomodification-revealing tests then every test selection technique is 100% inclusive relative to P,
P",andT.
Precision: Let M be a regression test selection technique. Precision measures the extent to which M omits tests that are non-modification-revealing. We define precision relative to a particular program, modified program, and test suite, as follows:
DEFINITION
Suppose T contains n tests that are non-modification-revealing for P and P', and suppose M omits m of these tests. The precision of M relative to P, P', and T is
1) the percentage given by the expression 100*(m/n) if n ≠ 0, or
2) 100% if n = 0.
For example, if T contains 50 tests of which 44 are non-modification-revealing for P and P', and M omits 33 of these 44 tests, then M is 75% precise relative to P, P', and T. If T contains no non-modification-revealing tests, then every test selection technique is 100% precise relative to P, P', and T.
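Both definitions reduce to the same percentage expression; a small sketch of the calculation, using the n and m values from the two worked examples above:

#include <stdio.h>

/* Percentage used by both definitions: 100*(m/n) if n != 0, else 100. */
double percentage(int m, int n)
{
    return (n != 0) ? 100.0 * m / n : 100.0;
}

int main(void)
{
    /* Inclusiveness example: 8 modification-revealing tests, M selects 2. */
    printf("Inclusiveness = %.0f%%\n", percentage(2, 8));    /* 25% */

    /* Precision example: 44 non-modification-revealing tests, M omits 33. */
    printf("Precision     = %.0f%%\n", percentage(33, 44));  /* 75% */
    return 0;
}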
Efficiency: We measure the efficiency of regression test selection techniques in terms of their space and time requirements. Where time is concerned, a test selection technique is more economical than the retest-all technique if the cost of selecting T' is less than the cost of running the tests in T − T'. Space efficiency primarily depends on the test history and program analysis information a technique must store. Thus, both space and time efficiency depend on the size of the test suite that a technique selects, and on the computational cost of that technique.
Regression Test Prioritization:
The regression test prioritization approach is different from the selective retest techniques. Regression test prioritization attempts to reorder a regression test suite so that those tests with the highest priority, according to some established criterion, are executed earlier in the regression testing process than those with a lower priority.
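As a minimal sketch of one possible prioritization criterion (ordering test cases by the number of statements each covers, highest first; the coverage counts below are hypothetical):

#include <stdio.h>

#define TESTS 4

int main(void)
{
    /* Hypothetical priority criterion: statements covered by each test case. */
    int coverage[TESTS] = {12, 40, 25, 7};
    int order[TESTS]    = {0, 1, 2, 3};

    /* Simple selection sort: highest coverage (highest priority) first. */
    for (int i = 0; i < TESTS; i++)
        for (int j = i + 1; j < TESTS; j++)
            if (coverage[order[j]] > coverage[order[i]]) {
                int tmp = order[i];
                order[i] = order[j];
                order[j] = tmp;
            }

    printf("Execution order for the regression run:\n");
    for (int i = 0; i < TESTS; i++)
        printf("  test case %d (covers %d statements)\n",
               order[i] + 1, coverage[order[i]]);
    return 0;
}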
Validation testing is testing where the tester performs functional and non-functional testing. Here functional testing includes Unit Testing (UT), Integration Testing (IT) and System Testing (ST), and non-functional testing includes User Acceptance Testing (UAT).
Validation testing is also known as dynamic testing, where we are ensuring that "we have developed the right product." It also checks that the software meets the business needs of the client.
Validation testing can be best demonstrated using the V-Model. The software/product under test is evaluated during this type of testing.
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
Unit Testing:
Unit testing involves the testing of each unit or individual component of the software application. It is the first level of functional testing. The aim behind unit testing is to validate the unit components and their performance.
A unit is a single testable part of a software system, and it is tested during the development phase of the application software.
The purpose of unit testing is to test the correctness of isolated code. A unit component is an individual function or piece of code of the application. A white box testing approach is used for unit testing, and it is usually done by the developers.
Whenever the application is ready and given to the test engineer, he/she will start checking every component of the module, or every module of the application, independently or one by one; this process is known as unit testing or component testing.
Generally, the software goes through four levels of testing: Unit Testing, Integration Testing, System Testing, and Acceptance Testing. Sometimes, due to time constraints, software testers do only minimal unit testing, but skipping unit testing may lead to higher defects during Integration Testing, System Testing, and Acceptance Testing, or even during Beta Testing, which takes place after the completion of the software application.
o Unit testing helps testers and developers to understand the code base, which enables them to change defect-causing code quickly.
o Unit testing helps in documentation.
o Unit testing fixes defects very early in the development phase; that's why fewer defects are likely to occur in upcoming testing levels.
o It helps with code reusability by migrating code and test cases.
Advantages
o Unit testing uses a module approach, due to which any part can be tested without waiting for the completion of another part's testing.
o The developing team focuses on the provided functionality of the unit and how the functionality should look in unit test suites to understand the unit API.
o Unit testing allows the developer to refactor code after a number of days and ensure the module is still working without any defect.
Disadvantages
o It cannot identify integration or broad-level errors, as it works on units of the code.
o In unit testing, evaluation of all execution paths is not possible, so unit testing is not able to catch each and every error in a program.
o It is best suited for use in conjunction with other testing activities.
Example of Unit testing
Let us see one sample example for a better understanding of the concept of unit testing (see the sketch below):
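The sample example referred to here appears to have been lost in this copy; as a minimal sketch (reusing the calsum() function from the driver/stub example earlier in this unit, together with C's standard assert macro), a unit test could look like this:

#include <assert.h>
#include <stdio.h>

/* Unit under test: returns the sum of two integers (from the earlier example). */
int calsum(int x, int y)
{
    return x + y;
}

/* A minimal unit test: exercise calsum() in isolation with known inputs
   and check that the actual output matches the expected output. */
int main(void)
{
    assert(calsum(2, 3) == 5);      /* typical values       */
    assert(calsum(-4, 4) == 0);     /* negative + positive  */
    assert(calsum(0, 0) == 0);      /* boundary value       */
    printf("All unit tests for calsum() passed.\n");
    return 0;
}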
Integration testing
Integration testing is the second level of the software testing process and comes after unit testing. In this testing, units or individual components of the software are tested in a group. The focus of the integration testing level is to expose defects at the time of interaction between integrated components or units.
Unit testing uses modules for testing purposes, and these modules are combined and tested in integration testing. The software is developed with a number of software modules that are coded by different coders or programmers. The goal of integration testing is to check the correctness of communication among all the modules.
Once all the components or modules are working independently, then we need to check the data flow between the dependent modules; this is known as integration testing.
Let us see one sample example of a banking application, as we can see in the below image of an amount transfer.
o First, we will log in as user P to do an amount transfer and send an amount of Rs 200; the confirmation message should be displayed on the screen as "amount transferred successfully". Now log out as P and log in as user Q, go to the amount balance page and check the balance in that account = Present balance + Received balance. Therefore, the integration test is successful.
o Also, we check if the balance has reduced by Rs 200 in user P's account.
o Click on the transaction; in P and Q, the message should be displayed regarding the date and time of the amount transfer.
Scenario 1:
o First, we log in as user P, click on Compose mail, and perform functional testing for the specific components.
o Now we click on Send and also check Save Drafts.
o After that, we send a mail to Q and verify the Sent Items folder of P to check if the sent mail is there.
o Now, we log out as P, log in as Q, move to the Inbox and verify that the mail has arrived.
Scenario 2: We also perform integration testing on spam folders. If a particular contact has been marked as spam, then any mail sent by that user should go to the spam folder and not into the inbox.
As we can see in the below image, we will perform functional testing for all the text fields and every feature. Then we will perform integration testing for the related functions. We first test add user, list of users, delete user, edit user, and then search user.
Note:
o For some features, we might perform only functional testing, and for some features we perform both functional and integration testing, based on the feature's requirements.
o Prioritizing is essential, and we should perform it at all the phases, which means we will open the application and select which feature needs to be tested first. Then go to that feature and choose which component must be tested first. Go to those components and determine what values are to be entered first. And don't apply the same rule everywhere, because testing logic varies from feature to feature.
o While performing testing, we should test one feature entirely and only then proceed to another feature.
o Between two features, we may perform only positive integration testing or both positive and negative integration testing, and this also depends on the feature's needs.
In the incremental approach, modules are added in ascending order one by one or according to need. The selected modules must be logically related. Generally, two or more modules are added and tested to determine the correctness of functions. The process continues until the successful testing of all the modules.
OR
In this type of testing, there is a strong relationship between the dependent modules. Suppose we take two or more modules and verify that the data flow between them is working fine. If it is, then we add more modules and test again.
For example: Suppose we have a Flipkart application; we will perform incremental integration testing, and the flow of the application would look like this:
o Top-Down approach
o Bottom-Up approach
Top-Down Approach
The top-down testing strategy deals with the process in which higher-level modules are tested with lower-level modules until the successful completion of testing of all the modules. Major design flaws can be detected and fixed early because critical modules are tested first. In this type of method, we will add the modules incrementally or one by one and check the data flow in the same order.
In the top-down approach, we will be ensuring that the module we are adding is the child of the previous one, like Child C is a child of Child B and so on, as we can see in the below image:
Advantages:
o Major design flaws in the critical, top-level modules are detected and fixed early, and an early prototype of the application is available.
Disadvantages:
o It requires many stubs (throw-away code) for the lower-level modules that are not yet ready.
Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower-level modules are tested with higher-level modules until the successful completion of testing of all the modules. Top-level critical modules are tested last, so a defect in them may be found late. Or we can say that we will be adding the modules from the bottom to the top and checking the data flow in the same order.
In the bottom-up method, we will ensure that the module we are adding is the parent of the previous one, as we can see in the below image:
Advantages
o We start with the simpler, lower-level modules, which makes test conditions and test environments easier to create, and exceptions are easier to handle.
Disadvantages
o Critical modules are tested last, due to which defects in them may be detected late.
o There is no possibility of an early prototype.
Functional testing is also called black-box testing, because it focuses on the application specification rather than the actual code. The tester has to test only the functionality of the program rather than its internal code.
The purpose of functional testing is to check the primary entry functions, the necessarily usable functions, and the flow of the screen GUI. Functional testing also checks that error messages are displayed so that the user can easily navigate throughout the application.
The main objective of functional testing is checking the functionality of the software system. It concentrates on:
o Basic Usability: Functional testing involves usability testing of the system. It checks whether a user can navigate freely without any difficulty through the screens.
o Accessibility: Functional testing tests the accessibility of the functions.
o Mainline function: It focuses on testing the main features.
o Error Condition: Functional testing is used to check error conditions. It checks whether the proper error messages are displayed.
The developer does unit testing. Unit testing is done in the development phase of the application.
Smoke Testing: Functional testing by smoke testing. Smoke testing includes only the basic (feature)
functionality of the system. Smoke testing is known as "Build Verification Testing." Smoke testing aims
to ensure that the most important function work.
For example, Smoke testing verifies that the application launches successfully will check that GUI is
responsive.
Sanity Testing: Sanity testing checks that the entire high-level business scenario is working correctly. Sanity testing is done to check the functionality or the bugs fixed. Sanity testing is a little more advanced than smoke testing.
Regression Testing: This type of testing concentrates on making sure that code changes do not have side effects on the existing functionality of the system. When a bug arises in the system and is fixed, regression testing concentrates on whether all the other parts are still working or not. Regression testing focuses on whether there is any impact on the system.
Integration Testing: In integration testing, individual units are combined and tested as a group. The purpose of this testing is to expose faults in the interaction between the integrated units.
White box testing: White box testing is also known as clear box testing, code-based testing, structural testing, extensive testing, glass box testing and transparent box testing. It is a software testing method in which the internal structure/design/implementation being tested is known to the tester.
White box testing needs analysis of the internal structure of the component or system.
Black box testing: It is also known as behavioural testing. In this testing, the internal structure/design/implementation is not known to the tester. This type of testing is functional testing. We call this type of testing black-box testing because the tester can't see the internal code.
For example, a tester, without knowledge of the internal structure of a website, tests its web pages by using a web browser, providing input and verifying the output against the expected outcome.
User acceptance testing: It is a type of testing performed by the client to certify the system according to the requirements. User acceptance testing is the final phase of testing before releasing the software to the market or production environment. UAT is a kind of black-box testing where two or more end-users will be involved.
Retesting: Retesting is a type of testing performed to check that the test cases that were unsuccessful in the last execution pass successfully after the defects are fixed. Usually, a tester raises a bug when they find it while testing the product or its component. The bug is allocated to a developer, and he fixes it. After fixing, the bug is assigned to a tester for its verification. This testing is known as retesting.
Database Testing: Database testing is a type of testing which checks the schema, tables, triggers, etc. of the database under test. Database testing may involve creating complex queries to load/stress test the database and check its responsiveness. It checks data integrity and consistency.
Example: let us consider a banking application whereby a user makes a transaction. From the point of view of database testing, the following things are important:
o The application stores the transaction information in the application database and displays it correctly to the user.
o No information is lost in this process.
o The application does not keep partially performed or aborted operation information.
o Unauthorized individuals are not allowed to access the user's information.
Ad-hoc testing: Ad-hoc testing is an informal testing type whose aim is to break the system. This type of software testing is an unplanned activity. It does not follow any test design to create the test cases. Ad-hoc testing is done randomly on any part of the application; it does not follow any structured way of testing.
Recovery Testing: Recovery testing is used to define how well an application can recover from crashes, hardware failure, and other problems. The purpose of recovery testing is to verify the system's ability to recover from points of failure.
Static Testing: Static testing is a software testing technique by which we can check for defects in software without actually executing it. Static testing is done to avoid errors in the early stages of development, as it is easier to find failures in the early stages. Static testing is used to detect mistakes that may not be found in dynamic testing.
Static testing helps to find errors in the early stages. With the help of static testing, development timescales are reduced. It reduces the testing cost and time. Static testing is also used for development productivity.
Component Testing: Component testing is also a type of software testing in which testing is performed on each component separately, without integrating it with other components. Component testing is also a type of black-box testing. Component testing is also referred to as unit testing, program testing, or module testing.
Grey Box Testing: Grey box testing is defined as a combination of both white box and black-box testing. Grey box testing is a testing technique which is performed with limited information about the internal functionality of the system.
Functional testing can also be executed by various tools apart from manual testing. These tools simplify the process of testing and help to get accurate and useful results.
It is one of the significant and top-priority techniques, which is decided and specified before the development process.
Example: Here, we are giving an example of banking software. In a bank, money is transferred from bank A to bank B. If bank B does not receive the correct amount, a wrong fee is applied, the money is not converted into the correct currency, the transfer is incorrect, or bank A does not receive statement advice from bank B that the payment has been received, these issues are critical and can be avoided by proper functional testing.
Limitations of functional testing:
o Functional testing can miss a critical and logical error in the system.
o This testing is not a guarantee of the software being ready to go live.
o The possibility of conducting redundant testing is high in functional testing.
What is Acceptance Testing?
User Acceptance Testing (UAT) is a type of testing performed by the end user or the client to verify/accept the software system before moving the software application to the production environment. UAT is done in the final phase of testing, after functional, integration and system testing are done.
Purpose of UAT
The main purpose of UAT is to validate the end-to-end business flow. It does not focus on cosmetic errors, spelling mistakes or system testing. User acceptance testing is carried out in a separate testing environment with a production-like data setup. It is a kind of black box testing where two or more end-users will be involved.
UAT test cases and data are prepared from documents such as:
Project Charter
Business Use Cases
Process Flow Diagrams
Business Requirements Document (BRD)
System Requirements Specification (SRS)
Testers, business analysts or subject matter experts who understand the business requirements or flows can prepare tests and data which are realistic to the business.
UAT Tools
There are several tools in the market used for user acceptance testing, and some are listed for reference:
FitNesse: It is a Java tool used as a testing engine. It is easy to create tests and record results in a table. Users of the tool enter the formatted input and tests are created automatically. The tests are then executed and the output is returned to the user.
Watir: It is a toolkit used to automate browser-based tests during user acceptance testing. Ruby is the programming language used for inter-process communication between Ruby and Internet Explorer.