Software Testing Fundamentals

The document provides an overview of software testing fundamentals and the software development process. It discusses the different phases of testing like verification, validation, unit testing, integration testing and system testing. It also describes different testing techniques like white box testing and black box testing. The key steps in the software testing life cycle are requirements analysis, test case design, test execution and result reporting.



Software Testing Fundamentals

MADHAVA REDDY ANNADI, Technical Associate, Tech Mahindra Ltd, Sharada Centre, Pune-41104, [email protected]

Fundamentals of Software Testing

CONTENTS

1  Software Development Process
2  Software Testing
3  Software Test Development Life Cycle
4  Verification Phase
   4.1  Inspection
   4.2  Walkthrough
5  Testing Types and Techniques
   5.1  White Box Testing
   5.2  Black Box Testing
6  Validation Phase
   6.1  Unit Testing
   6.2  Integration Testing
   6.3  System Testing
   6.4  User Acceptance Testing
7  Test Ware Development
   7.1  Test Strategy
   7.2  Test Case
   7.3  Test Plan
8  Defect Management
   8.1  What is a Defect?
   8.2  Defect Taxonomies
   8.3  Life Cycle of a Defect
9  Metrics for Testing
10 General Concepts
11 CMM Levels
12 Client/Server Architecture


Software De*elo+ment ,ro-ess

1  Software Development Process

A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to the variety of tasks or activities that take place during the process.

Process activities/steps

Software Elements Analysis: The most important task in creating a software product is extracting the requirements. Customers typically know what they want, but not what software should do, and incomplete, ambiguous or contradictory requirements are recognized only by skilled and experienced software engineers. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.

Specification: Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well developed, although safety-critical software systems are often



carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.

Software Architecture: The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware and the host operating system.

Implementation (or coding): Reducing a design to code may be the most obvious part of the software engineering job, but it is not necessarily the largest portion.

Testing: Testing parts of the software, especially where code by two different engineers must work together, falls to the software engineer.

Documentation: An important (and often overlooked) task is documenting the internal design of software for the purpose of future maintenance and enhancement. Documentation is most important for external interfaces.

Software Training and Support: A large percentage of software projects fail because the developers fail to realize that it does not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are occasionally resistant to change and avoid venturing into unfamiliar areas, so as part of the deployment phase it is very important to hold training classes for the most enthusiastic software users (build excitement and confidence), shift the training towards the neutral users intermixed with the avid supporters, and finally incorporate the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase.

Maintenance: Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design, but just determining how the software works at some point after it is completed may require significant effort by a software engineer. A large share of all software engineering work is maintenance, but this statistic can be misleading. A small part of that is fixing bugs; most maintenance is extending systems to do new things, which in many ways can be considered new work. In comparison, a comparable share of all civil engineering, architecture, and construction work is maintenance.

Process models

A decades-long goal has been to find repeatable, predictable processes or methodologies that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management is proving difficult.

Waterfall processes

The best-known and oldest process is the waterfall model, where developers (roughly) follow these steps in order:

1. State requirements
2. Analyze them
3. Design a solution approach
4. Architect a software framework for that solution
5. Develop code
6. Test (perhaps unit tests, then system tests)
7. Deploy, and
8. Maintain.

After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected.


Software Testing

Software Testing is a critical element of software quality assurance and represents the ultimate review of specification, design and code generation. Once source code has been generated, software must be tested to uncover as many errors as possible before delivery to your customer. Your goal is to design a series of test cases that have a high likelihood of finding errors. That is where software testing techniques enter the picture. These techniques provide systematic guidance for designing tests that (1) exercise the internal logic of software components and (2) exercise the input and output domains of the program to uncover errors in program function, behavior and performance.

Why is it Important?

Reviews and other SQA activities can and do uncover errors, but they are not sufficient. Therefore, you have to execute the program before it gets to the customer with the specific intent of finding and removing all errors.

What are the steps?

Software is tested in two different ways: (1) internal program logic is exercised using "white box" test case design techniques; (2) software requirements are exercised using "black box" test case design techniques. In both cases the intent is to find the maximum number of errors with the minimum amount of effort and time. Testing begins in the small and progresses to the large. By this we mean that early testing focuses on a single component and applies white box and black box tests to uncover errors in program logic and function. After individual components are tested, they must be integrated. Testing continues as the software is constructed. Finally, a series of high-order tests is executed once the full program is operational.

Testing Objectives:

Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding errors.



Testing principles:

- All tests should be traceable to customer requirements.
- Tests should be planned long before testing begins.
- Testing should begin "in the small" and progress toward testing "in the large".
- Exhaustive testing is not possible.
- To be most effective, testing should be conducted by an independent third party.

Software Testability is simply how easily a computer program can be tested. Its characteristics include: Operability, Observability, Controllability, Understandability, Decomposability, Simplicity, and Stability.

" T0e Test De*elo+ment 8ife C.-le 2TD8C3 0sually; Testin$ is considered as a "art of the System our "ractical e/"erience; we framed this Test e!elo"ment Life Cycle. *ith

e!elo"ment Life Cycle.

The dia$ram does not de"ict where and when you write your Test Plan and Strate$y documents. .ut; it is understood that <efore you <e$in your testin$ acti!ities these documents should <e ready. )deally; when the Pro9ect Plan and Pro9ect Strate$y are <ein$ made; this is the time when the Test Plan and Test Strate$y documents are also made.



Test Development Life Cycle (TDLC)

[Diagram: the TDLC phases, with the documents consumed and produced at each phase.]

- Requirement Study: Requirement Checklist; Software Requirement Specification
- Functional Specification: Functional Specification Checklist; Functional Specification Document
- Architecture Design: Detailed Design Document
- Coding: Unit Test Case Documents (from the Functional Specification Document)
- Testing: Unit, Integration, System and Regression Test Case Documents (from the Unit Test Case Document, Design Document and Functional Specification Document)
- Performance Testing: Performance Test Cases and Scenarios (from the Functional Specification Document, Performance Criteria, Software Requirement Specification and Regression Test Case Document)
- User Acceptance Testing: User Acceptance Test Case Documents/Scenarios



Verification & Validation

Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.

Verification: "Are we building the product right?"
Validation: "Are we building the right product?"

4  Verification Phase

The verification process helps in detecting defects early and prevents their leakage downstream. The higher cost of later detection and rework is thus eliminated.

4.1  Inspection

A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, test ware inspections, etc.

4.2  Walkthrough

A step-by-step review of a specification, usability features or design before it is handed off to the technical team for development.

Peer review

Having one or more programmers review the source code of a program written by someone else. Peer review produces much better code, since others have already tried to understand it. Indecipherable routines are eliminated before they get buried into the fabric of the program.



5  Testing Types and Techniques

5.1  White Box Testing

White Box Testing is sometimes called Structural or Glass Box Testing. Using WBT methods, the software engineer can derive test cases that (1) guarantee that all independent paths within a module have been exercised at least once, (2) exercise all logical decisions on their true and false sides, (3) execute all loops at their boundaries and within their operational bounds, and (4) exercise internal data structures to ensure their validity.

Basis Path Testing

Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method derives a logical complexity measure of a procedural design and uses this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.

Flow Graph Notation

The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

Cyclomatic Complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides an upper bound on the number of tests that must be conducted to ensure that all statements have been executed at least once.

Computing Cyclomatic Complexity

Cyclomatic complexity has a foundation in graph theory and provides an extremely useful software metric. Complexity is computed in one of three ways:


1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

Graph Matrices

The procedure for deriving the flow graph, and even determining a set of basis paths, is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be quite useful. A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes.

Control Structure Testing

Described below are some of the variations of control structure testing.

Condition Testing: Condition testing is a test case design method that exercises the logical conditions contained in a program module.

Data Flow Testing: The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.
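The cyclomatic complexity formulas can be checked with a short sketch. The flow graph below is a made-up example (an if/else followed by a loop), not one taken from this text:

```python
# Illustrative sketch: computing cyclomatic complexity from an edge list.

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2, where E = number of edges, N = number of nodes."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Flow graph: node 1 is an if/else decision (branches to 2 or 3, joining
# at 4); node 4 is a loop predicate (back-edge 5 -> 4); node 6 is the exit.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (4, 6)]

print(cyclomatic_complexity(edges))   # E=7, N=6 -> V(G) = 7 - 6 + 2 = 3

# Cross-check with the predicate-node formula V(G) = P + 1:
predicate_nodes = {1, 4}              # the nodes with two outgoing edges
print(len(predicate_nodes) + 1)       # -> 3
```

Both formulas agree: three independent paths form the basis set, so at least three test cases are needed to cover every statement.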



Loop Testing

Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

Simple Loops

The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop:

1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.

Nested Loops

If we extend the test approach from simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases.

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops at "typical" values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.

Unstructured Loops

Whenever possible, this class of loops should be redesigned to reflect the use of structured programming constructs.

5.2  Black Box Testing

Black Box Testing, also called behavioral testing, focuses on the functional requirements of the software. Software engineers derive sets of input conditions that will fully exercise all functional requirements of a program. Black box testing attempts to find errors in the following categories: (1) incorrect or missing functions, (2) interface errors, (3) errors in data structures or external database access, (4) behavior or performance errors, and (5) initialization and termination errors.

Advantages of Black Box Testing

- The tester can be non-technical.
- This testing is most likely to find the bugs that the user would find.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing

- There is a chance of repeating tests already performed by the programmer.
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving paths unidentified during this testing.



Graph Based Testing Methods

Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph, so that each object and relationship is exercised and errors are uncovered.

Error Guessing

Error guessing comes with experience with the technology and the project. Error guessing is the art of guessing where errors may be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: either when reading the functional documents, or when you are testing and find an error that you have not documented.

Boundary Value Analysis

Boundary value analysis is a software testing technique for determining test cases that cover the known areas of frequent problems at the boundaries of software component input ranges.

Introduction

Testing experience has shown that the boundaries of input ranges to a software component in particular are liable to defects. A programmer who has to implement, e.g., the range 1 to 12 at an input (which, e.g., stands for the months January to December in a date) has in his code a line checking for this range. This may look like:

if (month > 0 && month < 13)

But a common programming error may check a wrong range, e.g. starting the range at 0 by writing:

if (month >= 0 && month < 13)

For more complex range checks in a program, this may be a problem which is not so easily spotted as in the simple example above.
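The off-by-one range check described above can be transliterated into a short executable sketch; the function names are illustrative, not from the text. Note that interior values cannot distinguish the correct check from the buggy one, while the boundary value 0 exposes the defect immediately:

```python
# Sketch of the month range-check bug described above.

def is_valid_month(month):
    return 0 < month < 13          # correct: accepts only 1..12

def is_valid_month_buggy(month):
    return 0 <= month < 13         # common error: range starts at 0

# Interior values cannot tell the two apart...
for m in (1, 6, 12):
    assert is_valid_month(m) == is_valid_month_buggy(m)

# ...but the boundary value 0 exposes the defect.
print(is_valid_month(0))           # False (0 is not a month)
print(is_valid_month_buggy(0))     # True  (bug caught by a boundary test)
```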



Applying boundary value analysis

To set up boundary value analysis test cases, you first have to determine which boundaries you have at the interface of a software component. This has to be done by applying the equivalence partitioning technique. Boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month in a date, you would have the following partitions:

    ... -2 -1  0    1 .............. 12    13 14 15 ...
    ---------------|---------------------|---------------
    invalid partition 1    valid partition    invalid partition 2

Applying boundary value analysis, you now have to select a test case on each side of the boundary between two partitions. In the above example this would be 0 and 1 for the lower boundary, as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give you a valid operation result of your program. A "dirty" test case should lead to a correct and specified input error treatment, such as the limiting of values, the usage of a substitute value, or, in the case of a program with a user interface, a warning and a request to enter correct data. Boundary value analysis can thus yield six test cases: n-1, n, n+1 for the upper limit and n-1, n, n+1 for the lower limit.

A further set of boundaries has to be considered when you set up your test cases: a solid testing strategy also has to consider the natural boundaries of the data types used in the program. If you are working with signed values, this is especially the range around zero (-1, 0, +1). Similar to the typical range-check faults, programmers tend to have weaknesses in their programs in this range. E.g. this could be a division-by-zero problem when a zero value can occur although the programmer always thought the range started at 1, or a sign problem when a value turns out to be negative in some rare cases although the programmer always expected it to be positive. Even if this critical natural boundary is clearly within an equivalence partition, it should lead to additional test cases checking the range around zero. A further natural boundary is the natural lower and upper limit of the data type itself. E.g. an unsigned 8-bit value has the range 0 to 255. A good test strategy would also check how the program reacts to an input of -1 and 0, as well as 255 and 256.


The tendency is to relate boundary value analysis more to so-called black box testing, which strictly checks a software component at its interfaces, without consideration of the internal structures of the software. But on a closer look at the subject, there are cases where it also applies to white box testing. After determining the necessary test cases with equivalence partitioning and the subsequent boundary value analysis, it is necessary to define the combinations of the test cases in the case of multiple inputs to a software component.

Boundary Value Analysis (BVA) is a test data selection technique (a functional testing technique) in which the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values, then it will work correctly for all values in between.

- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min-1, max, max+1, and typical values.
- BVA focuses on the boundary of the input space to identify test cases.
- The rationale is that errors tend to occur near the extreme values of an input variable.

There are two ways to generalize the BVA techniques:

1. By the number of variables: for n variables, BVA yields 4n + 1 test cases.
2. By the kinds of ranges: generalizing ranges depends on the nature or type of the variables. NextDate has a variable Month, and the range could be defined as {Jan, Feb, ..., Dec}.


Min = Jan, Min + 1 = Feb, etc. Triangle had a declared range of {1, 20,000}. Boolean variables have the extreme values True and False, but there is no clear choice for the remaining three values.
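The "4n + 1" count can be made concrete with a sketch. Under the usual reading of this rule, each variable in turn takes its min, min+1, max-1 and max values while every other variable is held at a nominal (mid-range) value, and the single all-nominal case is added once; the variable ranges below are illustrative:

```python
# Sketch of the 4n + 1 generalization of BVA for n input variables.

def bva_cases(ranges):
    """ranges: list of (min, max) pairs, one per input variable."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]                   # the "+ 1" all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):     # the "4n" boundary cases
            case = list(nominal)
            case[i] = v
            cases.append(tuple(case))
    return cases

# Three variables, e.g. day [1..31], month [1..12], year [1900..2100]:
cases = bva_cases([(1, 31), (1, 12), (1900, 2100)])
print(len(cases))                              # 4*3 + 1 = 13
```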

Advantages of Boundary Value Analysis

1. Robustness testing: boundary value analysis plus values that go beyond the limits.
2. Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1.
3. Forces attention to exception handling.
4. For strongly typed languages, robustness testing results in run-time errors that abort normal execution.

Limitations of Boundary Value Analysis

BVA works best when the program is a function of several independent variables that represent bounded physical quantities.

1. Independent variables:
   - NextDate test cases derived from BVA would be inadequate: focusing on the boundary would not place emphasis on February or leap years.
   - Dependencies exist among NextDate's Day, Month and Year.
   - Test cases are derived without consideration of the function.
2. Physical quantities:
   - An example of physical variables being tested: telephone numbers. What faults might be revealed by numbers such as 000-0000, 000-0001, 555-5555, ...?



Equivalence Partitioning

Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.

The equivalence partitions are usually derived from the specification of the component's behavior. An input has certain ranges which are valid and other ranges which are invalid. This is best explained by the following example of a function which has a pass parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.

    ... -2 -1  0    1 .............. 12    13 14 15 ...
    ---------------|---------------------|---------------
    invalid partition 1    valid partition    invalid partition 2

Equivalence partitioning is not a stand-alone method for determining test cases; it has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.
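The partitioning of the month example can be sketched in code. This is only an illustration: the invalid partitions are unbounded, so they are sampled over arbitrary finite ranges here, and the choice of one representative per partition (its midpoint) is an assumption, not a rule from the text:

```python
# Sketch: one representative test value per equivalence partition for the
# month example, with boundary value analysis supplementing the edges.

partitions = {
    "invalid partition 1": range(-100, 1),   # month <= 0 (finite sample)
    "valid partition":     range(1, 13),     # month 1..12
    "invalid partition 2": range(13, 100),   # month >= 13 (finite sample)
}

# One representative per partition (here: the midpoint of each sample):
representatives = {name: r[len(r) // 2] for name, r in partitions.items()}
print(representatives)
# {'invalid partition 1': -50, 'valid partition': 7, 'invalid partition 2': 56}

# Boundary value analysis then adds the values at the partition edges:
boundaries = [0, 1, 12, 13]
```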



6  Validation Phase

The validation phase comes into the picture after the software is ready, or while the code is being written. There are various techniques and testing types that can be appropriately used while performing the testing activities. Let us examine a few of them.

6.1  Unit Testing

This is a typical scenario of a manual unit testing activity: A unit is allocated to a programmer for programming. The programmer has to use the "Functional Specifications" document as input for his work. The programmer prepares "Program Specifications" for his unit from the functional specifications. Program specifications describe the programming approach and coding tips for the unit's coding. Using these "Program Specifications" as input, the programmer prepares a "Unit Test Cases" document for that particular unit. A "Unit Test Cases Checklist" may be used to check the completeness of the Unit Test Cases document. The "Program Specifications" and "Unit Test Cases" are reviewed and approved by a quality assurance analyst or by a peer programmer. The programmer implements some functionality for the system to be developed, and tests it by referring to the unit test cases. If any defects are found while testing that functionality, they are recorded using the applicable defect logging tool. The programmer fixes the bugs found and retests the code.

Stubs and Drivers

A software application is made up of a number of "Units", where the output of one "Unit" goes in as the "Input" of another unit. E.g. a "Sales Order Printing" program takes a "Sales Order" as an input, which is actually an output of the "Sales Order Creation" program.


Due to such interfaces, independent testing of a unit becomes impossible. But that is what we want to do; we want to test a unit in isolation! So here we use a "Stub" and a "Driver".

A "Driver" is a piece of software that drives (invokes) the unit being tested. A driver creates the necessary "Inputs" required for the unit and then invokes the unit.

A unit may reference another unit in its logic. A "Stub" takes the place of such a subordinate unit during unit testing. A "Stub" is a piece of software that works similarly to a unit which is referenced by the unit being tested, but it is much simpler than the actual unit. A stub works as a "stand-in" for the subordinate unit and provides the minimum required behavior for that unit.

The programmer needs to create such "Drivers" and "Stubs" for carrying out unit testing. Both the driver and the stub are kept at a minimum level of complexity, so that they do not induce any errors while testing the unit in question.

Example: For unit testing of the "Sales Order Printing" program, a "Driver" program will have the code which creates sales order records using hard-coded data and then calls the "Sales Order Printing" program. Suppose this printing program uses another unit which calculates sales discounts by some complex calculations; the call to this unit will then be replaced by a "Stub", which simply returns fixed discount data.

Example: Let us say we want to write unit test cases for the data entry form below:

[Item Master Form with fields: Item No., Item Name, Item Price]



Given below are some of the unit test cases for the above form:

Test Case 1
- Purpose: Item No. must start with "A" or "B".
- Procedure: 1. Create a new record. 2. Type an Item No. starting with "A". 3. Type an Item No. starting with "B". 4. Type an Item No. starting with any character other than "A" and "B".
- Expected result: Steps 2 and 3: the value should get accepted and control should move to the next field. Step 4: the value should not get accepted; an error message should be displayed and control should remain in the Item No. field.
- Actual result: (recorded during execution)

Test Case 2
- Purpose: Item Price must be between 1000 and 2000 if the Item No. starts with "A".
- Procedure: 1. Create a new record with an Item No. starting with "A". 2. Specify a price < 1000. 3. Specify a price > 2000. 4. Specify a price = 1000. 5. Specify a price = 2000. 6. Specify a price between 1000 and 2000.
- Expected result: Steps 2 and 3: an error should get displayed and control should remain in the Price field. Steps 4, 5 and 6: the value should get accepted and control should move to the next field.
- Actual result: (recorded during execution)
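The driver-and-stub arrangement described earlier for the "Sales Order Printing" unit can be sketched in code. All names and data below are illustrative, not from the text:

```python
# Sketch of a unit-testing driver and stub for a "Sales Order Printing" unit.

def calculate_discount_stub(order):
    """Stub: stands in for the complex discount-calculation unit and
    simply returns fixed discount data."""
    return 10.0

def print_sales_order(order, calculate_discount):
    """The unit under test: formats a sales order, using a subordinate
    unit (passed in) to compute the discount."""
    discount = calculate_discount(order)
    return f"Order {order['id']}: total {order['total'] - discount:.2f}"

def driver():
    """Driver: creates a hard-coded sales order record and invokes the
    unit under test, passing the stub in place of the real discount unit."""
    order = {"id": "SO-001", "total": 100.0}   # hard-coded test input
    return print_sales_order(order, calculate_discount_stub)

print(driver())   # Order SO-001: total 90.00
```

Both helpers stay deliberately trivial, matching the advice above that drivers and stubs be kept at a minimum level of complexity so they do not themselves introduce errors.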


UTC Checklist

A UTC checklist may be used while reviewing the UTC prepared by the programmer. Like any other checklist, it contains a list of questions which can be answered as either "Yes" or "No". The "Aspects" list given in Section 4.1 above can be referred to while preparing the UTC checklist. E.g., given below are some of the checkpoints in a UTC checklist:

1. Are test cases present for all form field validations?
2. Are boundary conditions considered?
3. Are error messages properly phrased?

6.2  Integration Testing

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design. Usually, the following methods of integration testing are followed:

1. Top-down integration approach.
2. Bottom-up integration approach.

Top-Down Integration

Top-down integration testing is an incremental approach to the construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.


The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

Bottom-Up Integration
Bottom-up integration testing begins construction and testing with atomic modules (i.e. components at the lowest levels in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
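The bottom-up steps above can be sketched as follows: two low-level components form a cluster, and a throwaway driver feeds test inputs to the cluster and collects its outputs. The component names are invented for illustration.

```python
# Bottom-up integration sketch: low-level components are combined into a
# cluster and exercised by a driver; no stubs are needed.

def parse_price(text):            # low-level component 1
    return float(text)

def apply_discount(price, pct):   # low-level component 2
    return price * (1 - pct / 100)

def cluster_driver(cases):
    # Driver: coordinates test case input and output for the cluster.
    return [apply_discount(parse_price(t), pct) for t, pct in cases]

# Test the cluster through the driver; later the driver is removed and
# the cluster is combined with others moving up the program structure.
results = cluster_driver([("200", 50), ("100", 0)])
assert results == [100.0, 100.0]
```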


6.3 System Testing


System Testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that the system elements have been properly integrated. As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limiting type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.

Testing the whole system
System testing is done on the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase, where the focus is to have almost a destructive attitude and test not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s). One could view system testing as the final destructive testing phase before acceptance testing.

Types of system testing
The following examples are different types of testing that should be considered during system testing:
Functional testing
User interface testing
Usability testing
Compatibility testing


Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Smoke testing
Exploratory testing
Ad hoc testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing

Monkey Test is a unit test that runs with no specific test in mind. The monkey in this case is the producer of any input. For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input, or provide garbage files to check for loading routines that have blind faith in their data.

Usability testing determines how well the user will be able to understand and interact with the application. Usability testing is a means for measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose; i.e. usability testing measures the usability of the object. Usability testing focuses on a particular object or a small set of objects, whereas general human-computer interaction studies attempt to formulate universal principles. If usability testing uncovers difficulties, such as people having difficulty understanding instructions, manipulating parts, or interpreting feedback, then developers should improve the design and test it again. During usability testing, the aim is to observe people using the product in as realistic a situation as possible, to discover errors and areas of improvement. Designers commonly focus excessively on creating designs that look "cool", compromising usability and functionality.

Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application for a certain data volume. This volume can, in generic terms, be the database size, or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will explode your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (this could be any file, such as .dat or .xml); this interaction could be reading and/or writing on to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file to check performance.

Memory leakage leads to memory-congested situations. This happens when garbage collection is not done properly.

Regression Testing is retesting unchanged segments of the application. It involves rerunning tests that have been previously executed to ensure that the same results can be achieved currently as were achieved when the segment was last tested. It is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the reparations, and that newly added features have not created problems with previous versions of the software.
Also referred to as verification testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the

newly modified code still complies with its specified requirements, and that unmodified code has not been affected by the maintenance activity.

What do you do during regression testing?
o Rerun previously conducted tests
o Review previously prepared manual procedures
o Compare the current test results with the previously executed test results

What are the end goals of regression testing?
o To ensure that the unchanged system segments function properly
o To ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system
o To verify that the data dictionary of data elements that have been changed is correct

Compatibility testing concentrates on testing whether the given application goes well with third-party tools, software or hardware platforms. For example, suppose you have developed a web application. The major compatibility issue is that the web site should work well in various browsers. Similarly, when you develop applications on one platform, you need to check whether the application works on other operating systems as well. This is the main goal of compatibility testing. Before you begin compatibility tests, our sincere suggestion is that you should have a cross-reference matrix between the various software and hardware, based on the application requirements. For example, let us suppose you are testing a web application. A sample list can be as follows:


Hardware                   Software                   Operating System
Pentium II, 128 MB RAM     IE 4.x, Opera, Netscape    Windows 95
Pentium III, 256 MB RAM    IE 5.x, Netscape           Windows XP
Pentium IV, 512 MB RAM     Mozilla                    Linux

Compatibility tests are also performed for various client/server based applications where the hardware changes from client to client. Compatibility testing is very crucial for organizations developing their own products. The products have to be checked for compliance with the competitors of the third-party tools, hardware, or software platform. E.g., a call center product has been built for a solution with product X, but there is a client interested in using it with product Y; then the issue of compatibility arises. It is important that the product is compatible with varying platforms. Within the same platform, the organization has to be watchful that with each new release the product is tested for compatibility.

Recovery Testing is the activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems.

A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or calculation; specifically, a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing. Sanity tests are sometimes mistakenly equated to smoke tests. Where a distinction is made between sanity testing and smoke testing, it is usually in one of two directions. Either sanity testing is a focused but limited form of regression testing (narrow and deep, but cursory), or it is broad and shallow, like a smoke test, but concerned more with the possibility of "insane behavior" such as slowing the entire system to a crawl or destroying the database, and is not as thorough as a true smoke test.
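The cross-reference matrix above can be turned into an explicit list of configurations to cover, so that no browser/OS pairing is silently skipped. This sketch simply enumerates pairings; in a real compatibility effort the matrix would be pruned to combinations that actually exist.

```python
# Sketch: enumerate browser/OS combinations from a compatibility matrix
# so each configuration gets an explicit test pass.
import itertools

browsers = ["IE 4.x", "IE 5.x", "Opera", "Netscape", "Mozilla"]
systems = ["Windows 95", "Windows XP", "Linux"]

combos = list(itertools.product(browsers, systems))
assert len(combos) == 15        # every pairing is accounted for
assert ("Mozilla", "Linux") in combos
```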

A sanity test can refer to various order-of-magnitude and other simple rule-of-thumb devices applied to cross-check mathematical calculations. For example: if one were to evaluate 738² and came up with the answer 53,824, a quick sanity check would show this to be wrong, since the square of 500, a smaller number to start with, is 250,000, which is greater than the incorrect 53,824. In multiplication, 918 × 155 is not 142,135, since 918 is divisible by three but 142,135 is not (its digits do not add up to a multiple of three). When talking about quantities in physics, the power output of a car cannot be 700 kJ, since that is a unit of energy, not power (energy per unit time).

Smoke testing is done by developers before the build is released, or by testers before accepting a build for further testing. In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense, a smoke test is the process of validating code changes before the changes are checked into the larger product's official source code collection. Next after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software; some even believe that it is the most effective of all.

Exploratory testing is an approach in software testing with simultaneous learning, test design and test execution. While the software is being tested, the tester learns things that, together with experience and creativity, generate new good tests to run.

Benefits and drawbacks
The main advantages of exploratory testing are that less preparation is needed, important bugs are found fast, and it is more intellectually stimulating than scripted testing. An example of exploratory testing in practice is Microsoft's verification of Windows compatibility.
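The arithmetic sanity checks described earlier can themselves be mechanized as quick assertions. The two helper functions below are illustrative, not a standard library; they encode the "square must exceed a smaller square" and "divisible-by-three" rules from the examples.

```python
# Sanity-check sketches for the arithmetic examples: a claimed square of
# a number greater than 500 must exceed 500**2 = 250,000, and a product
# involving a multiple of 3 must itself be divisible by 3.

def square_sanity(n, claimed):
    # Returns False when the claim fails the order-of-magnitude check.
    return not (n > 500 and claimed <= 500 ** 2)

def product_div3_sanity(a, b, claimed):
    # If either factor is divisible by 3, the product must be too.
    if a % 3 == 0 or b % 3 == 0:
        return claimed % 3 == 0
    return True

assert square_sanity(738, 53824) is False           # the bogus answer fails
assert square_sanity(738, 738 * 738) is True        # the true square passes
assert product_div3_sanity(918, 155, 142135) is False
assert product_div3_sanity(918, 155, 918 * 155) is True
```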


Security Testing
"(The) Process to determine that an IS (Information System) protects data and maintains functionality as intended." (National Information Assurance Glossary)

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient(s). It is often ensured by means of encoding the information using a defined algorithm and some secret information known only to the originator of the information and the intended recipient(s) (a process known as cryptography), but that is by no means the only way of ensuring confidentiality.

Integrity
A measure intended to allow the receiver to determine that the information it receives has not been altered in transit or by anyone other than the originator of the information. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication
A measure designed to establish the validity of a transmission, message, or originator. It allows a receiver to have confidence that the information it receives originated from a specific known source.


Authorization
The process of determining that a requestor is allowed to receive a service or perform an operation. Access control is an example of authorization.

Availability
Assuring that information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation
A measure intended to prevent the later denial that an action happened, or that a communication took place, etc. In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration.

Stress testing is another industry term of performance testing. Though load testing and stress testing are used synonymously for performance-related efforts, their goals are different. Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries. In software testing, stress testing often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks.

Example: a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of the web site during peak loads.

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. The following types of tests may be conducted during stress testing:
Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
Input data rates may be increased by an order of magnitude to determine how input functions will respond.
Test cases that require maximum memory or other resources.
Test cases that may cause excessive hunting for disk-resident data.
Test cases that may cause thrashing in a virtual operating system.
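In practice this kind of ramp-up is done with dedicated load tools, but the idea can be sketched with a thread pool: drive the system with an increasing number of concurrent workers and check that every request still succeeds. The "system under test" here is a toy function, purely illustrative.

```python
# Stress-style sketch: ramp up concurrent workers against a toy
# system-under-test and verify every request still succeeds.
import time
from concurrent.futures import ThreadPoolExecutor

def system_under_test(req_id):
    time.sleep(0.001)           # simulate a little work per request
    return "ok"

def run_load(concurrent_users):
    # One worker per simulated user; returns the list of responses.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        return list(pool.map(system_under_test, range(concurrent_users)))

# Ramp up the user count, as a load tool would: 1, 50, 100 users.
for users in (1, 50, 100):
    assert all(r == "ok" for r in run_load(users))
```

A real stress run would keep increasing the user count past the specified limit and record where responses start failing or slowing drastically.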

Unlike load testing, where testing is conducted for a specified number of users, stress testing is conducted for a number of concurrent users beyond the specified limit. The objective is to identify the maximum number of users the system can handle before breaking down or degrading drastically. Since the aim is to put more stress on the system, the think time of the user is ignored and the system is exposed to excess load. The goals of load and stress testing are listed in Table 2. Refer to Table 3 for the inference drawn through the performance testing efforts. Let us take the same example of an online shopping application to illustrate the objective of stress testing. It determines the maximum number of concurrent users an online system can service, which can be beyond 1000 users (the specified limit). However, there is a possibility that the maximum load that can be handled by the system may be found to be the same as the anticipated limit. Table 1 illustrates the example specified. Stress testing also determines the behavior of the system as the user


base increases. It checks whether the system is going to degrade gracefully or crash at a shot when the load goes beyond the specified limit.

Table 1: Load and stress testing of illustrative example

Type of Testing   Number of Concurrent Users                                         Duration
Load Testing      1 user, 50 users, 100 users, 250 users, 500 users ... 1000 users   12 Hours
Stress Testing    1 user, 50 users, 100 users, 250 users, 500 users ... 1000 users,  12 Hours
                  beyond 1000 users ... maximum users

Table 2: Goals of load and stress testing

Type of Testing   Goals
Load testing      Testing for the anticipated user base. Validates whether the
                  system is capable of handling the load under the specified limit.
Stress testing    Testing beyond the anticipated user base. Identifies the maximum
                  load a system can handle. Checks whether the system degrades
                  gracefully or crashes.



Table 3: Inference drawn by load and stress testing

Type of Testing   Inference
Load Testing      Is the system available? If yes, is the available system stable?
Stress Testing    Is the system available? If yes, is the available system stable?
                  If yes, is it moving towards an unstable state? When is the
                  system going to break down or degrade drastically?

Conducting performance testing manually is almost impossible. Load and stress tests are carried out with the help of automated tools. A performance test is designed to test the run-time performance of software within the context of an integrated system. Performance testing is testing that is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability and reliability. [Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of the actual coding effort.]

Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintained acceptable response time. It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and

extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true in the case of functional testing, but even more so with performance testing, due to the end-to-end nature of its scope. In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in actual practice. The reason is that production systems have a random nature of workload, and while the test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability, except in the simplest systems.

Technology
Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load, starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how the performance varies with the load, given as number of users vs. response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete, something that might be caused by inefficient database queries, etc.

Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded. Does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

Performance specifications
It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements

development phase of any system development project, prior to any design effort. See Performance Engineering for more details. However, performance testing is frequently not performed against a specification; i.e., no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the "weakest link": there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation, one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system under test). There is an apocryphal story of a company that spent a large amount optimizing their software without having performed a proper analysis of the problem. They ended up rewriting the system's "idle loop", where they had found the system spent most of its time, but even having the most efficient idle loop in the world obviously didn't improve overall performance one iota! Performance testing almost invariably concludes that it is the software (rather than the hardware) that contributes most to delays (bottlenecks) in processing data. Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally.

It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over a T1, then the load injectors (computers that simulate real users) should either inject load over the same connections (ideal) or simulate the network latency of such connections, following the same user profile. It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement

of what constitutes the maximum allowable 95th percentile response time, then an injector configuration could be used to test whether the proposed system meets that specification.

Performance specifications should ask the following questions, at a minimum:
In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?
For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?
What does the target system (hardware) look like (specify all server and network appliance configurations)?
What is the application workload mix of each application component? (For example: 20% login, 40% search, 30% item select, 10% checkout.)
What is the system workload mix? [Multiple workloads may be simulated in a single performance test.] (For example: 30% Workload A, 20% Workload B, 50% Workload C.)
What are the time requirements for any/all back-end batch processes (specify peak vs. nominal)?

Tasks to undertake
Tasks to perform such a test would include:
Decide whether to use internal or external resources to perform the tests, depending on in-house expertise (or lack thereof)
Gather or elicit performance requirements (specifications) from users and/or business analysts
Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones
Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment info, etc.)

Choose test tool(s)
Specify the test data needed and charter the effort (often overlooked, but often the death of a valid performance test)
Develop proof-of-concept scripts for each application/component under test, using chosen test tools and strategies
Develop a detailed performance test project plan, including all dependencies and associated timelines
Install and configure injectors/controller
Configure the test environment (ideally identical hardware to the production platform), router configuration, quiet network (we don't want results upset by other users), deployment of server instrumentation, database test sets developed, etc.
Execute tests, probably repeatedly (iteratively), in order to see whether any unaccounted-for factor might affect the results
Analyze the results: either pass/fail, or investigation of the critical path and recommendation of corrective action

Performance testing of a Web site is basically the process of understanding how the Web application and its operating environment respond at various user load levels. In general, we want to measure the Response Time, Throughput, and Utilization of the Web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a Web site with low response time, high throughput, and low utilization.

Response Time
Response time is the delay experienced when a request is made to the server and the server's response to the client is received. It is usually measured in units of time, such as seconds or milliseconds. Generally speaking, response time increases as the inverse of unutilized capacity. It increases slowly at low levels of user load, but increases rapidly as capacity is utilized. Figure 1 demonstrates such typical characteristics of response time versus user load.

Figure 1: Typical characteristics of latency versus user load

The sudden increase in response time is often caused by the maximum utilization of one or more system resources. For example, most Web servers can be configured to start up a fixed number of threads to handle concurrent user requests. If the number of concurrent requests is greater than the number of threads available, any incoming requests will be placed in a queue and will wait for their turn to be processed. Any time spent in a queue naturally adds extra wait time to the overall response time. To better understand what response time means in a typical Web farm, we can divide response time into many segments and categorize these segments into two major types: network response time and application response time. Network response time refers to the time it takes for data to travel from one server to another. Application response time is the time required for data to be processed within a server. Figure 2 shows the different response times in the entire process of a typical Web request.

Total Response Time = (N1 + N2 + N3 + N4) + (A1 + A2 + A3)
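The decomposition above can be sketched as a trivial calculation: the total response time is the sum of the four network segments and the three application segments. The millisecond figures below are invented for illustration, not measurements.

```python
# Sketch of the response-time decomposition: total response time is the
# sum of the network segments (N1..N4) and application segments (A1..A3).

def total_response_time(network_ms, app_ms):
    # network_ms = [N1, N2, N3, N4], app_ms = [A1, A2, A3]
    assert len(network_ms) == 4 and len(app_ms) == 3
    return sum(network_ms) + sum(app_ms)

# Illustrative numbers (milliseconds): slow client links dominate N1/N4.
total = total_response_time([120, 5, 5, 120], [30, 80, 30])
assert total == 390
```

Working the numbers this way makes the point of the following paragraphs concrete: shrinking N1 and N4 (the client's path to the site) usually buys far more than tuning the in-farm segments.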


where Nx represents the network response time and Ax represents the application response time. In general, the response time is mainly constrained by N1 and N4. This response time represents the method your clients are using to access the Internet. In the most common scenario, e-commerce clients access the Internet using relatively slow dial-up connections. Once Internet access is achieved, a client's request will spend an indeterminate amount of time in the Internet cloud shown in Figure 2, as requests and responses are funneled from router to router across the Internet. To reduce these network response times (N1 and N4), one common solution is to move the servers and/or Web content closer to the clients. This can be achieved by hosting your farm of servers or replicating your Web content with major Internet hosting providers who have redundant high-speed connections to major public and private Internet exchange points, thus reducing the number of network routing hops between the clients and the servers. Network response times N2 and N3 usually depend on the performance of the switching equipment in the server farm. When traffic to the back-end database grows, consider upgrading the switches and network adapters to boost performance. Reducing application response times (A1, A2, and A3) is an art form unto itself, because the complexity of server applications can make analyzing performance data and performance tuning quite challenging. Typically, multiple software components interact on the server to service a given request. Response time can be introduced by any of the components. That said, there are ways you can approach the problem:

First, your application design should minimize round trips wherever possible. Multiple round trips (client to server or application to database) multiply transmission and resource acquisition response time. Use a single round trip wherever possible.
You can optimize many server components to improve performance for your configuration. Database tuning is one of the most important areas on which to focus. Optimize stored procedures and indexes.
Look for contention among threads or components competing for common resources. There are several methods you can use to identify contention
MADHAVA REDDY ANNADI
"' of 11

Fundamentals of Software Testing

bottlenecks. Depending on the specific problem, eliminating a resource contention bottleneck may involve restructuring your code, applying service packs, or upgrading components on your server. Not all resource contention problems can be completely eliminated, but you should strive to reduce them wherever possible; they can become bottlenecks for the entire system. Finally, to increase capacity, you may want to upgrade the server hardware (scaling up) if system resources such as CPU or memory are stretched and have become the bottleneck. Using multiple servers as a cluster (scaling out) may help to lessen the load on an individual server, thus improving system performance and reducing application latencies.

Throughput

Throughput refers to the number of client requests processed within a certain unit of time. Typically, the unit of measurement is requests per second or pages per second. From a marketing perspective, throughput may also be measured in terms of visitors per day or page views per day, although smaller time units are more useful for performance testing, because applications typically see peak loads of several times the average load in a day. As one of the most useful metrics, the throughput of a Web site is often measured and analyzed at different stages of the design, develop, and deploy cycle. For example, in the process of capacity planning, throughput is one of the key parameters for determining the hardware and system requirements of a Web site. Throughput also plays an important role in identifying performance bottlenecks and improving application and system performance. Whether a Web farm uses a single server or multiple servers, throughput statistics show similar characteristics in reaction to various user load levels. Figure 3 demonstrates such typical characteristics of throughput versus user load.
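The definition of throughput as requests processed per unit of time can be shown with a minimal sketch. The `handle_request` function is a stand-in (an assumption for illustration) for a real server handler; a real measurement would drive an actual Web server with a load tool.

```python
import time

def handle_request():
    """Stand-in for a server request handler (assumed, for illustration)."""
    return "ok"

def measure_throughput(handler, duration_s=0.2):
    """Call handler repeatedly for duration_s; return requests per second."""
    start = time.perf_counter()
    count = 0
    while time.perf_counter() - start < duration_s:
        handler()
        count += 1
    return count / duration_s

rps = measure_throughput(handle_request)
print(f"throughput: {rps:.0f} requests/second")
```

Running the same measurement at increasing user loads, and plotting requests/second against the number of virtual users, is what produces the curve described in Figure 3.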


Figure " T.+i-al -0ara-teristi-s of t0roug0+ut *ersus user load As >i$ure 1 illustrates; the throu$h"ut of a ty"ical *e< site increases "ro"ortionally at the initial sta$es of increasin$ load. Jowe!er; due to limited system resources; throu$h"ut cannot <e increased indefinitely. )t will e!entually reach a "ea+; and the o!erall "erformance of the site will start de$radin$ with increased load. Ma/imum throu$h"ut; illustrated <y the "ea+ of the $ra"h in >i$ure 1; is the ma/imum num<er of user re,uests that can <e su""orted concurrently <y the site in the $i!en unit of time. Bote that it is sometimes confusin$ to com"are the throu$h"ut metrics for your *e< site to the "u<lished metrics of other sites. The !alue of ma/imum throu$h"ut !aries from site to site. )t mainly de"ends on the com"le/ity of the a""lication. >or e/am"le; a *e< site consistin$ lar$ely of static JTML "a$es may <e a<le to ser!e many more re,uests "er second than a site ser!in$ dynamic "a$es. As with any statistic; throu$h"ut metrics can <e mani"ulated <y selecti!ely i$norin$ some of the data. >or e/am"le; in your measurements; you may ha!e included se"arate data for all the su""ortin$ files on a "a$e; such as $ra"hic files. Another siteAs "u<lished measurements mi$ht consider the o!erall "a$e as one unit. As a result; throu$h"ut !alues are most useful for com"arisons within the same site; usin$ a common measurin$ methodolo$y and set of metrics. )n many ways; throu$h"ut and 6es"onse time are related; as different a""roaches to thin+in$ a<out the same "ro<lem. )n $eneral; sites with hi$h latency will ha!e low throu$h"ut. )f you want to im"ro!e your throu$h"ut; you should analy=e the same criteria as you would to reduce latency. Also; measurement of throu$h"ut without consideration of latency is misleadin$ <ecause latency often rises under load <efore throu$h"ut "ea+s. This means that "ea+ throu$h"ut may occur at a latency that is unacce"ta<le from an a""lication usa<ility stand"oint. This su$$ests
"2 of 11

that

MADHAVA REDDY ANNADI

Fundamentals of Software Testing

Performance re"orts include a cut-off !alue for 6es"onse time; such asD#%0 re,uests5second @ % seconds ma/imum 6es"onse time ;tiliAation 0tili=ation refers to the usa$e le!el of different system resources; such as the ser!erAs CP0?s@; memory; networ+ <andwidth; and so forth. )t is usually measured as a "ercenta$e of the ma/imum a!aila<le le!el of the s"ecific resource. 0tili=ation !ersus user load for a *e< ser!er ty"ically "roduces a cur!e; as shown in >i$ure 4.

Figure 4: Typical characteristics of utilization versus user load

As Figure 4 illustrates, utilization usually increases proportionally with increasing user load. However, it will top off and remain constant when the load continues to build up. If the specific system resource tops off at 100-percent utilization, it is very likely that this resource has become the performance bottleneck of the site; upgrading the resource with higher capacity would allow greater throughput and lower latency, and thus better performance. If the measured resource does not top off close to 100-percent utilization, it is probably because one or more of the other system resources have already reached their maximum usage levels; they have become the performance bottleneck of the site. To locate the bottleneck, you may need to go through a long and painstaking process of running performance tests against each of the suspected resources, and then verifying whether performance is improved by increasing the capacity of that resource. In many cases, performance of the site will start deteriorating to an unacceptable level well before the major system resources, such as CPU and memory, are maximized. For example, Figure 5 illustrates a case where Response Time rises sharply to 45 seconds when CPU utilization has reached only 60 percent.

Figure 5: An example of Response Time versus utilization

As Figure 5 demonstrates, monitoring CPU or memory utilization alone may not always indicate the true capacity level of the server farm with acceptable performance.

Applications

While most traditional applications are designed to respond to a single user at a time, most Web applications are expected to support a wide range of concurrent users, from a dozen to a couple of thousand or more. As a result, performance testing has become a critical component in the process of deploying a Web application. It has proven to be most useful in (but not limited to) the following areas:
- Capacity planning
- Bug fixing

Capacity Planning

How do you know if your server configuration is sufficient to support two million visitors per day with an average response time of less than five seconds? If your company is projecting business growth of 200 percent over the next two months, how do you know whether you need to upgrade your server or add more servers to the Web farm? Can your server and application support a six-fold traffic increase during the Christmas shopping season? Capacity planning is about being prepared. You need to set the hardware and software requirements of your application so that you will have sufficient capacity to meet anticipated and unanticipated user load.
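The "two million visitors per day" question above reduces to simple arithmetic. The requests-per-visit figure and the peak factor below are assumptions for illustration (the text only notes that peak load is typically several times the daily average), not numbers from the document.

```python
# Back-of-envelope capacity arithmetic for the questions above.
visitors_per_day = 2_000_000
requests_per_visit = 10      # assumed average requests per visitor
peak_factor = 5              # assumed peak-to-average load ratio

avg_rps = visitors_per_day * requests_per_visit / 86_400   # seconds per day
peak_rps = avg_rps * peak_factor
print(f"average load: {avg_rps:.0f} req/s, planned peak: {peak_rps:.0f} req/s")
```

Even under these modest assumptions, a site averaging about 231 requests/second must be provisioned for well over a thousand requests/second at peak, which is why load tests are run at several times the average traffic level.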


Cne a""roach in ca"acity "lannin$ is to load-test your a""lication in a testin$ ?sta$in$@ ser!er farm. .y simulatin$ different load le!els on the farm usin$ a *e< a""lication "erformance-testin$ tool such as *AS; you can collect and analy=e the test results to <etter understand the "erformance characteristics of the a""lication. Performance charts such as those shown in >i$ures 1; 1; and 4 can then <e $enerated to show the e/"ected 6es"onse Time; throu$h"ut; and utili=ation at these load le!els. )n addition; you may also want to test the scala<ility of your a""lication with different hardware confi$urations. >or e/am"le; load testin$ your a""lication on ser!ers with one; two; and four CP0s res"ecti!ely would hel" to determine how well the a""lication scales with symmetric multi"rocessor ?SMP@ ser!ers. Li+ewise; you should load test your a""lication with different num<ers of clustered ser!ers to confirm that your a""lication scales well in a cluster en!ironment. Althou$h "erformance testin$ is as im"ortant as functional testin$; itNs often o!erloo+ed. Since the re,uirements to ensure the "erformance of the system is not as strai$htforward as the functionalities of the system; achie!in$ it correctly is more difficult. The effort of "erformance testin$ is addressed in two waysD Load testin$ Stress testin$

Load testing is the process of creating demand on a system or device and measuring its response. In mechanical systems it refers to the testing of a system to certify it under the appropriate regulations (LOLER in the UK: Lifting Operations and Lifting Equipment Regulations); load testing is usually carried out to a load of 1.5x the SWL (Safe Working Load), and periodic recertification is required. In software engineering it is a blanket term that is used in many different ways across the professional software testing community. Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services

concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as Web servers. However, other types of software systems can be load-tested also. For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing occurs with actual, rather than theoretical, results.

When the load placed on the system is raised beyond normal usage patterns in order to test the system's response at unusually high or peak loads, it is known as stress testing. The load is usually so great that error conditions are the expected result, although no clear boundary exists where an activity ceases to be a load test and becomes a stress test. There is little agreement on what the specific goals of load testing are; the term is often used synonymously with performance testing, reliability testing, and volume testing.

Load testing is a much-used industry term for the effort of performance testing. Here, load means the number of users or the traffic for the system. Load testing is defined as testing to determine whether the system is capable of handling the anticipated number of users. In load testing, virtual users are simulated to exhibit real user behavior as much as possible; even user think time, such as how long users pause before inputting data, is emulated. Load testing is carried out to justify whether the system performs well for the specified limit of load. For example, say an online shopping application anticipates 1000 concurrent user hits at the peak period, and the peak period is expected to last 12 hours; then the system is load tested with 1000 virtual users for 12 hours.

These kinds of tests are carried out in levels: first 1 user, then 50 users, 100 users, 250 users, 500 users, and so on, until the anticipated limit is reached. The testing effort is closed exactly at 1000 concurrent users. The objective of load testing is to check whether the system can perform well for the specified load. The system may be capable of accommodating more than 1000 concurrent users, but validating that is not within the scope of load testing; no
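The stepped load levels described above can be sketched with concurrent virtual users. `fake_transaction` and its 1 ms request-plus-think time are assumptions standing in for a real transaction against the system under test; a real run would continue the steps up to the anticipated 1000-user limit.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_transaction(user_id):
    """Stand-in for one virtual user's request plus think time (assumed 1 ms)."""
    time.sleep(0.001)
    return user_id

def run_load_step(users):
    """Run `users` concurrent virtual users; return elapsed wall-clock seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(fake_transaction, range(users)))
    assert len(results) == users      # every virtual user completed
    return time.perf_counter() - start

# Stepped levels as described above; a real test would continue
# through 250, 500, ... up to the anticipated 1000 users.
for level in [1, 50, 100]:
    print(f"{level:>3} virtual users finished in {run_load_step(level):.3f}s")
```

Recording elapsed time and requests completed at each step yields exactly the Response Time and throughput curves discussed earlier.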


attem"t is made to determine how many more concurrent users the system is ca"a<le of ser!icin$. Ta<le 1 illustrates the e/am"le s"ecified. ser Acceptance Test The final testin$ sta$es <y users of a new or chan$ed information system. )f successful; it si$nals the a""ro!al to im"lement the system li!e. Cosmetic and other small chan$es may still <e re,uired as a result of the test; <ut the system is considered sta<le and "rocessin$ data accordin$ to re,uirements. 0ser Acce"tance Testin$ ?0AT@ is a "rocess to o<tain confirmation <y a Su<9ect Matter :/"ert ?SM:@; "refera<ly the owner or client of the o<9ect under test; throu$h trial or re!iew; that the modification or addition meets mutually a$reed-u"on re,uirements. )n software de!elo"ment; 0AT is one of the final sta$es of a "ro9ect and often occurs <efore a client or customer acce"ts a new system. Alpha Testing Alpha Test is conducted at the de!elo"erNs site <y a customer. The software is used in a natural settin$ with de!elo"erH loo+in$ o!er the shoulderH of the user and recordin$ errors and usa$e "ro<lems. Al"ha tests are conducted in a controlled en!ironment. A software "rototy"e sta$e when the software is first a!aila<le for run. Jere the software has the core functionalities in it <ut com"lete functionality is not aimed at. )t would <e a<le to acce"t in"uts and $i!e out"uts. 0sually the most used functionalities ?"arts of code@ are de!elo"ed more. The test is conducted at the de!elo"erNs site only. )n a software de!elo"ment cycle; de"endin$ on the functionalities the num<er of al"ha "hases re,uired is laid down in the "ro9ect "lan itself. urin$ this; the testin$ is not a throu$h one; since only the "rototy"e of the software is a!aila<le. .asic installation X uninstallation tests; the com"leted core functionalities are tested. The functionality com"lete area of the Al"ha sta$e is $ot from the "ro9ect "lan document.


The aim is to identify any serious errors, to judge whether the intended functionalities are implemented, and to provide to the customer the feel of the software. A thorough understanding of the product is achieved by now. During this phase, the test plan and test cases for the beta phase (the next stage) are created. The errors reported are documented internally for the testers' and developers' reference; no issues are usually reported and recorded in any of the defect management/bug trackers.

Role of the tester
- Provide input while there is still time to make significant changes as the design evolves.
- Report errors to developers.

Beta Testing

A beta test is conducted at one or more customer sites by the end user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems encountered during beta testing and reports these to the developer at regular intervals. The software reaches the beta stage when most of the functionalities are operating. The software is tested in the customer's environment, giving the user the opportunity to exercise the software and find errors so that they can be fixed before the product release. Beta testing is detailed testing and needs to cover all the functionalities of the product and also the dependent functionality testing. It also involves UI testing

and documentation testing. Hence it is essential that this is planned well and the task accomplished. The test plan document has to be prepared before the testing phase is started; it clearly lays down the objectives, the scope of the test, the tasks to be performed, and the test matrix, which depicts the schedule of testing.

Beta Testing Objectives
- Evaluate software technical content
- Evaluate software ease of use
- Evaluate the user documentation draft
- Identify errors
- Report errors/findings

Role of a tester
- Understand the software requirements and the testing objectives.
- Carry out the test cases.

7 Test Ware Development

7.1 Test Strategy

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. The strategy provides a road map that describes the steps to be conducted as part of testing, when these steps are to be planned and undertaken, and how much effort, time, and resources will be required.

7.2 Test Case

A Test Case is a set of inputs and outputs of a program and execution conditions. Test cases consist of three main parts with subsections:

Information contains general information about the test case.
o Identifier is a unique identifier of the test case for further reference, for example, while describing a found defect.


o Test case owner/creator is the name of the tester or test designer who created the test or is responsible for its development.
o Version of the current test case definition.
o Name of the test case should be a human-oriented title which allows one to quickly understand the test case's purpose and scope.
o Identifier of the requirement which is covered by the test case. This could also be the identifier of a use case or functional specification item.
o Purpose contains a short description of the test purpose: what functionality it checks.
o Dependencies

Test case activity
o Testing environment/configuration contains information about the configuration of hardware or software which must be met while executing the test case.
o Initialization describes actions which must be performed before test case execution is started. For example, we should open some file.
o Finalization describes actions to be done after the test case is performed. For example, if the test case crashes the database, the tester should restore it before other test cases are performed.
o Actions: the steps to be done to complete the test.
o Input data description

Results
o Expected results contains a description of what the tester should see after all test steps have been completed.
o Actual results contains a brief description of what the tester saw after the test steps have been completed. This is often replaced with a


Pass/Fail. Quite often, if a test case fails, a reference to the defect involved should be listed in this column.

Good Test Case:
1. A good test case is one that has a high probability of finding errors.
2. A good test is not redundant: there is no point in conducting a test that has the same purpose as another test. Every test should have a different purpose.
3. A good test should be neither too simple nor too complex.

Designing Test Cases

There are various techniques with which you can design test cases. For example, the steps below give an overview of how to derive test cases using the basis path method. The basis path testing method can be applied to a procedural design or to source code. The following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw the corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.

Let us now see how to design test cases in a generic manner:
1. Understand the requirements document.
2. Break the requirements into smaller requirements (if it improves your testability).
3. For each requirement, decide what technique you should use to derive the test cases. For example, if you are testing a Login page, you need to write test cases based on error guessing and also negative cases for handling failures.


4. Have a Traceability Matrix as follows:

   Requirement No (in RD) | Requirement | Test Case No

What this Traceability Matrix provides you is the coverage of testing. Keep filling in the Traceability Matrix as you complete writing test cases for each requirement.

What is a Scenario?

A scenario is a hypothetical story, used to help a person think through a complex problem or system.

Characteristics of Good Scenarios

A scenario test has five key characteristics: it is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.

The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases, and technical specifications, and they should be developed in parallel with the software development effort. A test case describes a set of actions to be performed and the results that are expected. A test case should target specific functionality or aim to exercise a valid path through a use case; this should include invalid user actions and illegal inputs that are not necessarily listed in the use case. How a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk.
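A traceability matrix like the one above can be kept as a simple mapping from requirement number to covering test cases, which makes the coverage check mechanical. The requirement and test-case IDs below are invented for illustration.

```python
# Traceability matrix: requirement number (in the RD) -> covering test cases.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],   # e.g. login validation
    "REQ-002": ["TC-03"],            # e.g. product details display
    "REQ-003": [],                   # not yet covered by any test case
}

uncovered = [req for req, cases in traceability.items() if not cases]
coverage = 1 - len(uncovered) / len(traceability)
print(f"requirement coverage: {coverage:.0%}, uncovered: {uncovered}")
```

Filling in the right-hand side as test cases are written, and flagging empty entries, is exactly the "keep filling in the matrix" discipline the text describes.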


The test cases will have a generic format as below.
- Test case ID: The test case ID must be unique across the application.
- Test case description: The test case description must be very brief.
- Test prerequisite: The test prerequisite clearly describes what should be present in the system before the test can be executed.
- Test inputs: The test input is nothing but the test data that is prepared to be fed to the system.
- Test steps: The test steps are the step-by-step instructions on how to carry out the test.
- Expected results: The expected results state what the system must give as output, or how the system must react, based on the test steps.
- Actual results: The actual results state the outputs of the action for the given inputs, or how the system actually reacted for the given inputs.
- Pass/Fail: If the expected and actual results are the same, the test is Pass; otherwise Fail.

Test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and then processes them correctly. Suitable techniques for designing positive test cases are specification-derived tests, equivalence partitioning, and state-transition testing. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them. Suitable techniques for designing negative test cases are error guessing, boundary value analysis, internal boundary value testing, and state-transition testing. The test case details must be specified very clearly, so that a new person can go through the test cases step by step and be able to execute them. The test cases are explained with specific examples in the following section. For example, consider an online shopping application. At the user interface level, the client requests the Web server to display the product details by giving Email ID and Username. The Web server processes the request and gives the response.

For this application we will design the unit, integration, and system test cases.

Figure 6: Web-based application

Unit Test Cases (UTC)

These are very specific to a particular unit. The basic functionality of the unit is to be understood based on the requirements and the design documents. Generally, the design document provides a lot of information about the functionality of a unit, and it has to be referred to before the UTC is written, because it describes how the system must behave for given inputs. For example, in the online shopping application, if the user enters valid Email ID and Username values, let us assume the design document says that the system must display the product details and insert the Email ID and Username into a database table. If the user enters invalid values, the system will display an appropriate error message and will not store them in the database.

Figure 7: Snapshot of Login Screen


Test conditions for the fields in the Login screen:
- Email: It should be in this format (for example, [email protected]).
- Username: It should accept only alphabets, not greater than 6 characters. Numerics and special characters are not allowed.

Test prerequisite: The user should have access to the Customer Login screen.

Negative Test Cases (Project: Online Shopping, Version 1.1, Module: Catalog)

Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail
1 | Check inputting values in the Email field | Email = keerthi@rediffmail, Username = Xavier | Inputs should not be accepted; it should display the message "Enter valid Email" | |
2 | Check inputting values in the Email field | Email = [email protected], Username = John | Inputs should not be accepted; it should display the message "Enter valid Email" | |
3 | Check inputting values in the Username field | Email = [email protected], Username = Mark24 | Inputs should not be accepted; it should display the message "Enter correct Username" | |

Positive Test Cases

Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail
1 | Check inputting values in the Email field | Email = [email protected], Username = dave | Inputs should be accepted | |
2 | Check inputting values in the Email field | Email = [email protected], Username = john | Inputs should be accepted | |
3 | Check inputting values in the Username field | Email = [email protected], Username = mark | Inputs should be accepted | |

Integration Test Cases

Before designing the integration test cases, the testers should go through the integration test plan; it will give a complete idea of how to write integration test cases. The main aim of integration test cases is to test multiple modules together. By executing these test cases the user can find errors in the interfaces between the modules. For example, in online shopping there are Catalog and Administration modules. In the catalog section the customer can track the list of products and can buy the products

online. In the administration module the admin can enter the product name and information related to it.

Table 3: Integration Test Cases

Test # | Description | Test Inputs | Expected Results | Actual Results | Pass/Fail
1 | Check the Login screen | Enter values in Email and Username, e.g. Email = [email protected], Username = shilpa | Inputs should be accepted | |
  | Back-end verification | select email, username from Cus; | The Email and Username entered should be displayed at the SQL prompt | |
2 | Check the Product Information | Click the product information link | It should display the complete details of the product | |
3 | Check the admin screen | Enter values in the Product Id and Product Name fields, e.g. Product Id = 245, Product Name = Norton Antivirus | Inputs should be accepted | |
  | Back-end verification | select pid, pname from Product; | The Product Id and Product Name entered should be displayed at the SQL prompt | |
& " Test ,lan A test +lan is a systematic a""roach to testin$ a system such as a machine or software. The "lan ty"ically contains a detailed understandin$ of what the e!entual wor+flow will <e. Test plans in software development Cem ]aner; co-author of "esting Computer Software ?)S.B 0-421-1%(4&-0@; has su$$ested that test "lans are written for two !ery different "ur"oses. Sometimes the test "lan is a "roductI sometimes itAs a tool. )tAs too easy; <ut also too e/"ensi!e; to confuse these $oals. )n software testin$; a test "lan $i!es detailed testin$ information re$ardin$ an u"comin$ testin$ effort; includin$


Sco"e of testin$ Schedule Test eli!era<les

6elease Criteria 6is+s and Contin$encies

Test +lan tem+lateC 1EEE '!( format 1. Test Plan )dentifier ?TP)@ #. 6eferences 1. )ntroduction 4. Test )tems %. Software 6is+ )ssues &. >eatures to <e Tested 2. >eatures not to <e Tested (. A""roach -. )tem Pass5>ail Criteria 10. :ntry R :/it Criteria 11. Sus"ension Criteria and 6esum"tion 6e,uirements 1#. Test eli!era<les

11. 6emainin$ Test Tas+s 14. :n!ironmental Beeds 1%. Staffin$ and Trainin$ Beeds 1&. 6es"onsi<ilities

17. Schedule
18. Planning Risks and Contingencies
19. Approvals
20. Glossary

The test effort shows the expenses for tests still to come. There is a relation between test costs and failure costs (direct, indirect, and costs for fault correction). Some factors which influence test effort are: maturity of the software development process, quality and testability of the test object, test infrastructure, skills of staff members, quality goals, and test strategy.

8 Defect Management

Defects determine the effectiveness of the testing we do. If there are no defects, it directly implies that we do not have a job. There are two points worth considering here: either the developer is so strong that no defects arise, or the test engineer is weak. In many situations, the second proves correct, which implies that we lack the knack. In this section, let us understand defects.

8.1 What is a Defect?

For a test engineer, a defect is the following:
- Any deviation from specification
- Anything that causes user dissatisfaction
- Incorrect output
- Software that does not do what it is intended to do

Bug / Defect / Error:
- Software is said to have a bug if its features deviate from specifications.
- Software is said to have a defect if it has unwanted side effects.
- Software is said to have an error if it gives incorrect output.

Defect Tracking

In engineering, defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback from customers) and making new versions of the product that fix the defects. Defect tracking is important in software engineering, as complex software systems typically have tens or hundreds of thousands of defects; managing, evaluating, and prioritizing these defects is a difficult task. Defect tracking systems are computer database systems that store defects and help people to manage them.

A bug tracking system is a software application designed to help programmers keep track of reported software bugs in their work. It may be regarded as a sort of issue tracking system. A major component of a bug tracking system is a database that records facts about known bugs. Facts may include the time a bug was reported, its severity, the erroneous program behavior, and details on how to reproduce the bug, as well as the identity of the person who reported it and any programmers who may be working on fixing it.

8.2 Defect Taxonomies

Categories of Defects:

All software defects can be broadly categorized into the below-mentioned types:
- Errors of commission: something wrong is done
- Errors of omission: something is left out by accident
- Errors of clarity and ambiguity: different interpretations
- Errors of speed and capacity

However, the above is a broad categorization; below is a host of varied types of defects that can be identified in different software applications:
1. Conceptual bugs / Design bugs
2. Coding bugs


3. Integration bugs
4. User interface errors
5. Functionality errors
6. Communication errors
7. Command structure errors
8. Missing commands
9. Performance errors
10. Output errors
11. Error handling errors
12. Boundary-related errors
13. Calculation errors
14. Initial and later states
15. Control flow errors
16. Errors in handling data

17. Race condition errors
18. Load condition errors
19. Hardware errors
20. Source and version control errors
21. Documentation errors
22. Testing errors


9 Metrics for Testing


What is a Metric?

A "metric" is a measure used to quantify software, software development resources, and/or the software development process. A metric can quantify any of the following factors: schedule, work effort, product size, project status, and quality performance.

Measuring enables estimation: metrics make it possible to estimate future work. Consider the case of testing: deciding whether the product is fit for shipment or delivery depends on the rate at which defects are found and fixed, so "defects collected and fixed" is one kind of metric (www.processimpact.com).

It is beneficial to classify metrics according to their usage. As noted in the MISRA Report, IEEE 982.1 [4] identifies two classes:
i) Process – activities performed in the production of the software.
ii) Product – an output of the process, for example the software or its documentation.

Defects are analyzed to identify the major causes of defects and the phase that introduces most defects. This can be achieved by performing Pareto analysis of defect causes and defect introduction phases. The main requirement for any of these analyses is Software Defect Metrics.

A few of the Defect Metrics are:

Defect Density: (No. of Defects Reported by SQA + No. of Defects Reported by Peer Review) / Actual Size.
The size can be in KLOC, SLOC, or Function Points – whichever method the organization uses to measure the size of the software product. The SQA team is considered to be part of the software testing team.

Test Effectiveness: t / (t + Uat), where t = total no. of defects reported during testing and Uat = total no. of defects reported during User Acceptance Testing. User Acceptance Testing is generally carried out using the Acceptance Test Criteria according to the Acceptance Test Plan.

Defect Removal Efficiency: (Total No. of Defects Removed / Total No. of Defects Injected) × 100, at various stages of the SDLC.

Description
This metric indicates the effectiveness of defect identification and removal, stage by stage, for a given project.

Formula
Requirements: DRE = [(Requirement defects corrected during Requirements phase) / (Requirement defects injected during Requirements phase)] × 100
Design: DRE = [(Design defects corrected during Design phase) / (Defects identified during Requirements phase + Defects injected during Design phase)] × 100
Code: DRE = [(Code defects corrected during Coding phase) / (Defects identified during Requirements phase + Defects identified during Design phase + Defects injected during Coding phase)] × 100


Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] × 100

Metric Representation: Percentage
Calculated at: Stage completion or Project completion
Calculated from: Bug Reports and Peer Review Reports

Defect Distribution: percentage of total defects distributed across Requirements Analysis, Design Reviews, Code Reviews, Unit Tests, Integration Tests, System Tests, User Acceptance Tests, and Reviews by Project Leads and Project Managers.

Software Process Metrics are measures which provide information about the performance of the development process itself.
Purpose:
1. Provide an indicator of the ultimate quality of the software being produced.
2. Assist the organization in improving its development process by highlighting areas of inefficiency or error-prone areas of the process.

Software Product Metrics are measures of some attribute of the software product (for example, source code).
Purpose:
1. Used to assess the quality of the output.


What are the most general metrics?

Requirements Management
Metrics Collected:
1. Requirements by state – Accepted, Rejected, Postponed
2. No. of baselined requirements
3. Number of requirements modified after baselining
Derived Metrics:
1. Requirements Stability Index (RSI)
2. Requirements to Design Traceability

Project Management
Metrics Collected:
1. Planned no. of days
2. Actual no. of days
3. Estimated effort
4. Actual effort
5. Estimated cost
6. Actual cost
7. Estimated size
8. Actual size
Derived Metrics:
1. Schedule Variance
2. Effort Variance
3. Cost Variance
4. Size Variance


Testing & Review
Metrics Collected:
1. No. of defects found by Reviews
2. No. of defects found by Testing
3. No. of defects found by Client
4. Total no. of defects found
Derived Metrics:
1. Overall Review Effectiveness (ORE)
2. Overall Test Effectiveness (OTE)

Peer Reviews
Metrics Collected:
1. KLOC / FP per person hour (per language) for Preparation
2. KLOC / FP per person hour (per language) for Review Meeting
3. No. of pages / hour reviewed during Preparation
4. Average number of defects found by Reviewer during Preparation
5. No. of pages / hour reviewed during Review Meeting
6. Average number of defects found by Reviewer during Review Meeting
7. Review Team Size vs Defects
8. Review Speed vs Defects
9. Major defects found during Review Meeting
10. Defects vs Review Effort


Derived Metrics:
1. Review Effectiveness (Major)
2. Total number of defects found by reviews for a project

Other Metrics
Metrics Collected:
1. No. of Requirements Designed
2. No. of Requirements not Designed
3. No. of Design elements matching Requirements
4. No. of Design elements not matching Requirements
5. No. of Requirements Tested
6. No. of Requirements not Tested
7. No. of Test Cases with matching Requirements
8. No. of Test Cases without matching Requirements
9. No. of Defects by Severity
10. No. of Defects by stage of Origin, Detection, Removal
Derived Metrics:
1. Defect Density
2. No. of Requirements Designed vs not Designed
3. No. of Requirements Tested vs not Tested
4. Defect Removal Efficiency (DRE)


Some Metrics Explained

Schedule Variance (SV)
Description: This metric gives the variation of the actual schedule vs. the planned schedule. It is calculated for each project, stage wise.
Formula: SV = [(Actual no. of days – Planned no. of days) / Planned no. of days] × 100
Metric Representation: Percentage
Calculated at: Stage completion
Calculated from: Software Project Plan, for the planned number of days for completing each stage and the actual number of days taken to complete each stage

Defect Removal Efficiency (DRE)
Description: This metric indicates the effectiveness of defect identification and removal, stage by stage, for a given project.
Formula:
Requirements: DRE = [(Requirement defects corrected during Requirements phase) / (Requirement defects injected during Requirements phase)] × 100
Design: DRE = [(Design defects corrected during Design phase) / (Defects identified during Requirements phase + Defects injected during Design phase)] × 100
Code: DRE = [(Code defects corrected during Coding phase) / (Defects identified during Requirements phase + Defects identified during Design phase + Defects injected during Coding phase)] × 100
Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] × 100
Metric Representation: Percentage
Calculated at: Stage completion or Project completion
Calculated from: Bug Reports and Peer Review Reports

Overall Review Effectiveness (ORE)
Description: This metric indicates the effectiveness of the review process in identifying defects for a given project.
Formula: ORE = [(Number of defects found by reviews) / (Total number of defects found by reviews + Number of defects found during Testing + Number of defects found during post-delivery)] × 100
Metric Representation: Percentage
Calculated at: Monthly, Stage completion or Project completion



Calculated from: Peer Reviews, Formal Reviews, Test Reports, Customer-Identified Defects

Overall Test Effectiveness (OTE)
Description: This metric indicates the effectiveness of the testing process in identifying defects for a given project during the testing stage.
Formula: OTE = [(Number of defects found during Testing) / (Total number of defects found during Testing + Number of defects found during post-delivery)] × 100
Metric Representation: Percentage
Calculated at: Monthly, Build completion or Project completion
Calculated from: Test Reports, Customer-Identified Defects
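A minimal sketch of the ORE and OTE formulas above (function names and sample counts are illustrative):

```python
def overall_review_effectiveness(by_reviews, by_testing, post_delivery):
    """ORE = review defects / (review + testing + post-delivery defects) * 100."""
    return by_reviews * 100 / (by_reviews + by_testing + post_delivery)

def overall_test_effectiveness(by_testing, post_delivery):
    """OTE = testing defects / (testing + post-delivery defects) * 100."""
    return by_testing * 100 / (by_testing + post_delivery)

# e.g. 60 defects found by reviews, 30 in testing, 10 after delivery:
print(overall_review_effectiveness(60, 30, 10))  # 60.0
print(overall_test_effectiveness(30, 10))        # 75.0
```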


Effort Variance (EV)
Description: This metric gives the variation of actual effort vs. estimated effort. It is calculated for each project, stage wise.
Formula: EV = [(Actual person hours – Estimated person hours) / Estimated person hours] × 100
Metric Representation: Percentage
Calculated at: Stage completion, as identified in the SPP
Calculated from: Estimation sheets for estimated values in person hours for each activity within a given stage, and actual worked hours in person hours

Cost Variance (CV)
Description: This metric gives the variation of actual cost vs. estimated cost. It is calculated for each project, stage wise.
Formula: CV = [(Actual Cost – Estimated Cost) / Estimated Cost] × 100
Metric Representation: Percentage
Calculated at: Stage completion
Calculated from: Estimation sheets for estimated values in dollars or rupees for each activity within a given stage, and the actual cost incurred

Size Variance
Description: This metric gives the variation of actual size vs. estimated size. It is calculated for each project, stage wise.
Formula: Size Variance = [(Actual Size – Estimated Size) / Estimated Size] × 100
Metric Representation: Percentage
Calculated at: Stage completion, Project completion
Calculated from: Estimation sheets for estimated values in Function Points or KLOC, and the actual size

Productivity on Review Preparation & Technical Review



Description: This metric indicates the effort spent on preparation for review. Calculate it for each language used in the project.
Formula: For every language used (such as C, C++, Java, XML, etc.), calculate (KLOC or FP) per hour, per language.
Metric Representation: KLOC or FP per hour
Calculated at: Monthly, Build completion
Calculated from: Peer Review Report

Number of Defects Found per Review Meeting
Description: This metric indicates the number of defects found during the review meetings across the various stages of the project.
Formula: Number of defects per Review Meeting
Metric Representation: Defects / Review Meeting
Calculated at: Monthly, Completion of Review
Calculated from: Peer Review Report, Peer Review Defect List

Review Team Efficiency (Review Team Size vs Defects Trend)
Description: This metric indicates the review team size against the defects trend. It helps to determine the efficiency of the review team.
Formula: Ratio of Review Team Size to the Defects Trend
Metric Representation: Ratio
Calculated at: Monthly, Completion of Review
Calculated from: Peer Review Report, Peer Review Defect List

Review Effectiveness
Description: This metric indicates the effectiveness of the review process.
Formula: Review Effectiveness = [(Number of defects found by Reviews) / (Total number of defects found by Reviews + Number of defects found by Testing)] × 100
Metric Representation: Percentage
Calculated at: Completion of Review, or Completion of Testing stage
Calculated from: Peer Review Report, Peer Review Defect List, Bugs Reported by Testing

Total Number of Defects Found by Reviews
Description: This metric indicates the total number of defects identified by the review process. The defects are further categorized as High, Medium or Low.
Formula: Total number of defects identified in the project
Metric Representation: Defects per Stage
Calculated at: Completion of Reviews
Calculated from: Peer Review Report, Peer Review Defect List

Defects vs Review Effort – Review Yield
Description: This metric indicates the effort expended on reviews in each stage against the defects found.
Formula: Defects / Review Effort
Metric Representation: Defects / Review Effort
Calculated at: Completion of Reviews
Calculated from: Peer Review Report, Peer Review Defect List

Requirements Stability Index (RSI)
Description: This metric gives the stability factor of the requirements over a period of time, after the requirements have been mutually agreed and baselined between Ivesia Solutions and the client.
Formula: RSI = 100 × [(Number of baselined requirements) – (Number of changes in requirements after the requirements are baselined)] / (Number of baselined requirements)
Metric Representation: Percentage
Calculated at: Stage completion and Project completion
Calculated from: Change Requests, Software Requirements Specification

Change Requests by State
Description: This metric provides an analysis of the state of the requirements.
Formula: Number of accepted requirements; Number of rejected requirements; Number of postponed requirements
Metric Representation: Number
Calculated at: Stage completion
Calculated from: Change Requests, Software Requirements Specification

Requirements to Test Case Traceability
Description: This metric provides an analysis of the number of requirements tested vs. the number of requirements not tested.
Formula: Number of Requirements; Number of Requirements Tested; Number of Requirements not Tested
Metric Representation: Number
Calculated at: Stage completion
Calculated from: SRS, Detail Design, Test Case Specification

Test Cases to Requirements Traceability
Description: This metric provides an analysis of the number of test cases matching requirements vs. the number of test cases not matching requirements.
Formula: Number of Requirements; Number of Test Cases with matching Requirements; Number of Test Cases not matching Requirements
Metric Representation: Number
Calculated at: Stage completion
Calculated from: SRS, Test Case Specification

Number of Defects in Coding Found During Testing, by Severity
Description: This metric provides an analysis of the number of defects by severity.
Formula: Number of Defects; Number of defects of low priority; Number of defects of medium priority; Number of defects of high priority
Metric Representation: Number
Calculated at: Stage completion
Calculated from: Bug Report
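Tallying defects by severity, as this metric describes, can be sketched with a counter (the severity labels and data are hypothetical, not from a real bug report):

```python
from collections import Counter

# One severity entry per defect, as pulled from a hypothetical bug report:
severities = ["high", "low", "medium", "high", "low", "low", "medium", "high"]

by_severity = Counter(severities)
print(by_severity["high"], by_severity["medium"], by_severity["low"])  # 3 2 3
```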

Defects – Stage of Origin, Detection, Removal
Description: This metric provides an analysis of the number of defects by the stage of origin, detection and removal.
Formula: Number of Defects; Stage of origin; Stage of detection; Stage of removal
Metric Representation: Number
Calculated at: Stage completion
Calculated from: Bug Report

Defect Density
Description: This metric provides an analysis of the number of defects relative to the size of the work product.
Formula: Defect Density = [Total no. of Defects / Size (FP / KLOC)] × 100
Metric Representation: Percentage
Calculated at: Stage completion
Calculated from: Defects List, Bug Report
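A sketch of the Defect Density formula above, exactly as the document states it (the sample figures are invented):

```python
def defect_density(total_defects, size):
    """Defect Density = total defects / size * 100, where size is in
    Function Points or KLOC, following the formula above."""
    return total_defects * 100 / size

# e.g. 30 defects found in a 15 KLOC work product:
print(defect_density(30, 15))  # 200.0
```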

10 GENERAL CONCEPTS

Ad hoc Testing
There are two divergent views of ad hoc testing. To one camp it is software testing performed without planning and documentation: the tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation; the tester seeks to find bugs with any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.

Bug
A bug can be defined as a deviation of the actual result from the expected result – a persistent error in software or hardware. If the bug is in software, it can be corrected by changing the program. If the bug is in hardware, new circuits have to be designed.


Quality Software is reasonably bug-free, delivered on time and within budget, meets requirements, and is maintainable.

Localization is customizing software and documentation for a particular country. It includes the translation of menus and messages into the native spoken language, as well as changes in the user interface to accommodate different alphabets and cultures.

Test Scenario
Sets of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one. The terms "test scenario" and "test case" are often used synonymously.

Test Suite
A collection of test scenarios and/or test cases that are related or that may cooperate with each other.

Test Script
The instructions in a test program. It defines the actions and pass/fail criteria. For example, if the action is "enter a valid account number," the expected result is that the data are accepted; entering an invalid number should yield a particular error message. See test case.

Dynamic Testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response of the system to variables that change with time and are not constant.

Configuration Management (CM)
A discipline applying technical and administrative direction and surveillance to: (1) identify and document the functional and physical characteristics of a configuration item; (2) control changes to those characteristics; and (3) record and report change processing and implementation status.


Priority: the order in which bugs should be fixed – which bug we have to solve first.
Severity: how much a bug affects the application.
Big-Bang Testing: testing each module independently and then combining the modules to form the program.
Quality Assurance: Quality Assurance measures the quality of the process used to create a quality product, and is involved in the entire development process. The usual definition of Software Quality Assurance goes something like: the function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented.
Quality Control: Quality Control measures the quality of a product.
Positive Testing: test cases designed to show that the unit under test does what it is supposed to do.
Negative Testing: a test whose primary purpose is falsification, i.e. tests designed to break the software.
Exhaustive Testing: executing the program with all possible combinations of values for program variables. It is feasible only for small and simple programs.
Gray Box Testing: a combination of black box and white box testing.
Data Integrity: the quality of correctness, completeness, wholeness, soundness and compliance with the intention of the creators of the data. It is achieved by preventing accidental or deliberate but unauthorized insertion, modification or destruction of data in a database.
Quality Assurance Analyst: a person who is responsible for maintaining software quality within an organization. Such individuals develop and use stringent testing methods and may also be involved with ISO 9000 and the SEI models.


Test Harness: also called an Automated Test Framework (ATF), used in software testing. A test harness is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository. Test harnesses should include the following capabilities:
A standard way to specify setup (i.e., creating an artificial runtime environment) and cleanup.
A method for selecting individual tests to run, or all tests.
A means of analyzing output for expected (or unexpected) results.
A standardized form of failure reporting.

Risk-Based Testing (RBT): a type of software testing that prioritizes the features and functions to be tested based on their priority/importance and the likelihood or impact of failure. In theory, since there is an infinite number of possible tests, any set of tests must be a subset of all possible tests. Test techniques such as boundary value analysis and state transition testing aim to find the areas most likely to be defective, so by using such techniques a software test engineer is already selecting tests based on risk.

Static Testing (also known as "dry run" testing) is a form of software testing where the software isn't actually executed. Syntax checking and manually reading the code to find errors are methods of static testing. This type of testing is mostly used by the developer (who designed or coded the module), and is usually the first type of testing done on any system. Static testing is generally not taken as detailed testing; it checks mainly for the sanity of the code or algorithm and assesses whether the program is ready for more detailed testing, for which methods like code review, inspection and walkthrough are used. Static testing also applies to white box testing techniques.

Use-Case Analysis

An object-oriented method for designing information systems by breaking down requirements into user functions. Each use case is a transaction or sequence of events performed by the user. Use cases are studied to determine what objects are required to accomplish them and how those objects interact with other objects.

Traceability Matrix
The Traceability Matrix is one of the documents prepared by QA. To make sure all the requirements mentioned in the requirements document are covered by testing, we prepare the traceability matrix. This document contains the following columns: Req #; a brief description of the requirement; Test script (to know in which test script it is covered); Test case.

Index
A database index is a data structure that improves the speed of operations on a table. Indexes can be created using one or more columns. The disk space required to store the index is typically less than the storage of the table. In a relational database, an index is a copy of part of a table.

Primary Key
In database design, a primary key is a value that can be used to identify a unique row in a table. Attributes are associated with it. Examples of primary keys are Social Security numbers (associated with a specific person) or ISBNs (associated with a specific book). In the relational model of data, a primary key is a candidate key chosen as the main method of uniquely identifying a tuple in a relation. Practical telephone books and dictionaries cannot use names or words or Dewey Decimal System numbers as candidate keys because they do not uniquely identify telephone numbers or words. A primary key (as well as a unique key) can be referenced by a foreign key.
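A traceability matrix with the columns described above can be sketched as a small table in code; the requirement IDs and test case names are hypothetical. Scanning it for requirements with no covering test case is the usual payoff:

```python
# Hypothetical requirement-to-test-case mapping, using the columns described above.
traceability = [
    {"req": "REQ-1", "desc": "User login",
     "test_script": "TS-01", "test_cases": ["TC-1", "TC-2"]},
    {"req": "REQ-2", "desc": "Password reset",
     "test_script": "TS-01", "test_cases": ["TC-3"]},
    {"req": "REQ-3", "desc": "Audit log",
     "test_script": None, "test_cases": []},
]

# Requirements with no covering test case are coverage gaps:
untested = [row["req"] for row in traceability if not row["test_cases"]]
print(untested)  # ['REQ-3']
```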
What are the documents delivered by the QA department?



Test plan
Test design specification
Test case specification
Test procedure specification
Test logs
Test incident reports
Test summary reports

How can it be known when to stop testing?
This can be difficult to determine. Common factors in deciding when to stop are:
All the high-priority bugs are fixed.
The rate at which bugs are found is too small.
The testing budget is exhausted.
The project duration is completed.
The risk in the project is under an acceptable limit.
Deadlines are achieved.
Test cases are completed with a certain percentage passed.
Coverage of functionality reaches a specified point.
The defect rate falls below a certain level.
The beta or alpha testing period ends.

What if there is not enough time for thorough testing?



Use risk analysis to determine where testing should be focused. Figure out:
Which functionality is most important to the project.
Which functionality is most visible to the user.
Which functionality has the largest safety impact.
Which functionality has the largest financial impact on users.

Why does software have bugs?

Miscommunication or no communication – For high quality, you must first know the specifics of what an application should or shouldn't do (the application's requirements).

Software complexity – The complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

Programming errors – Programmers, like anyone else, can make mistakes. It is important to fully test all requirements and to have a defect tracking solution that allows programmers to fix defects that are found by the quality assurance team.

Changing requirements (whether documented or undocumented) – The customer may not understand the effects of changes, or may understand and request them anyway: redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, and so on. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. The enthusiasm of the engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

Time pressures – Scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

Root cause analysis (RCA) is a class of problem-solving methods aimed at identifying the root causes of problems or events. The practice of RCA is predicated on the belief that problems are best solved by attempting to correct or eliminate root causes, as opposed to merely addressing the immediately obvious symptoms. By directing corrective measures at root causes, it is hoped that the likelihood of problem recurrence will be minimized. However, it is recognized that complete prevention of recurrence by a single intervention is not always possible. Thus, RCA is often considered to be an iterative process, and is frequently viewed as a tool of continuous improvement.

Root cause analysis is not a single, sharply-defined methodology; there are many different tools, processes, and philosophies of RCA in existence. However, most of these can be classed into five very broadly defined "schools" that are named here by their basic fields of origin: safety-based, production-based, process-based, failure-based, and systems-based.

Safety-based RCA descends from the fields of accident analysis and occupational safety and health.
Production-based RCA has its origins in the field of quality control for industrial manufacturing.
Process-based RCA is basically a follow-on to production-based RCA, but with a scope that has been expanded to include business processes.
Failure-based RCA is rooted in the practice of failure analysis as employed in engineering and maintenance.
Systems-based RCA has emerged as an amalgamation of the preceding schools, along with ideas taken from fields such as change management, risk management, and systems analysis.

Despite the seeming disparity in purpose and definition among the various schools of root cause analysis, there are some general principles that could be considered universal. Similarly, it is possible to define a general process for performing RCA.

General principles of root cause analysis
1. Aiming corrective measures at root causes is more effective than merely treating the symptoms of a problem.
2. To be effective, RCA must be performed systematically, and conclusions must be backed up by evidence.
3. There is usually more than one root cause for any given problem.

General process for performing root cause analysis
1. Define the problem.
2. Gather data/evidence.
3. Identify issues that contributed to the problem.
4. Find root causes.
5. Develop solution recommendations.
6. Implement the solutions.

Do automate if:
1. The test is intended for repeated execution, in order to break the application.
2. The test will be run on each new build to check the application's basic functionality.
3. The test exercises the application with different sets of data.

Do not automate if:
1. The test will be executed only once.
2. The test requires immediate execution.

It is commonly believed that the earlier a defect is found, the cheaper it is to fix. The table below shows the relative cost to fix a defect, by the phase in which it was introduced and the phase in which it was detected:

Time Introduced    Time Detected
                   Requirements   Architecture   Construction   System Test   Post-Release
Requirements       1              3              5-10           10            10-100
Architecture       -              1              10             15            25-100
Construction       -              -              1              10            10-25
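The cost-to-fix table can also be held as a simple lookup; the "-" cells (a defect cannot be detected before it is introduced) are simply omitted:

```python
# Relative cost to fix a defect: phase introduced -> phase detected -> cost,
# transcribed from the table above.
cost_to_fix = {
    "Requirements": {"Requirements": "1", "Architecture": "3",
                     "Construction": "5-10", "System Test": "10",
                     "Post-Release": "10-100"},
    "Architecture": {"Architecture": "1", "Construction": "10",
                     "System Test": "15", "Post-Release": "25-100"},
    "Construction": {"Construction": "1", "System Test": "10",
                     "Post-Release": "10-25"},
}

# A requirements defect that escapes to post-release costs up to 100x more:
print(cost_to_fix["Requirements"]["Post-Release"])  # 10-100
```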

Within the SDLC, four levels of software testing are done at the various SDLC phases:

Unit Testing: each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented.
Integration Testing: progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
System Testing: the software is integrated into the overall product and tested to show that all requirements are met.

A further level of testing is also done, in accordance with requirements:


Acceptance Testing: the level upon which acceptance of the complete software is based. The clients often do this.

Regression Testing: the repetition of earlier successful tests to ensure that changes made to the software have not introduced new bugs or side effects.

In recent years the term grey box testing has come into common usage. The typical grey box tester is permitted to set up or manipulate the testing environment, such as seeding a database, and can view the state of the product after his actions, such as performing an SQL query on the database to be certain of the values of columns. The term is used almost exclusively of client-server testers or others who use a database as a repository of information, but it can also apply to a tester who has to manipulate XML files (a DTD or an actual XML file) or configuration files directly. It can also be used of testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results. For example, testing a data warehouse implementation involves loading the target database with information and verifying the correctness of data population and the loading of data into the correct tables.

Test levels
Unit testing tests the minimal software components, sub-components, or modules; it is performed by the programmers.
Integration testing exposes defects in the interfaces and in the interaction between integrated components (modules).
Functional testing tests the product according to its specified functionality.
System testing tests an integrated system to verify and validate that it meets its requirements.
Acceptance testing can be conducted by the client. It allows the end user, customer, or client to decide whether or not to accept the product. Acceptance testing may be performed after testing and before the implementation phase.
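A grey box check of this kind can be sketched with an in-memory database; the orders schema and the apply_discount function are hypothetical:

```python
import sqlite3

def apply_discount(conn, order_id, percent):
    """Application code under test: updates the order total in place."""
    conn.execute(
        "UPDATE orders SET total = total * (100 - ?) / 100.0 WHERE id = ?",
        (percent, order_id),
    )
    conn.commit()

# The grey box tester seeds the database directly...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 200.0)")

# ...drives the application through its normal interface...
apply_discount(conn, order_id=1, percent=25)

# ...and then inspects internal state with an SQL query, to be certain
# of the column values rather than trusting the user interface alone.
(total,) = conn.execute("SELECT total FROM orders WHERE id = 1").fetchone()
print(total)  # 150.0
```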


Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

It should be noted that although both alpha and beta are referred to as testing, they are in fact use immersion. The rigors that are applied are often unsystematic, and many of the basic tenets of the testing process are not used. The alpha and beta periods provide insight into environmental and utilization conditions that can impact the software.

A sample testing cycle
Although testing varies between organizations, there is a cycle to testing:

1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests work.
2. Test Planning: Test Strategy, Test Plan(s), and Test Bed creation.
3. Test Development: Test Procedures, Test Scenarios, Test Cases, and Test Scripts to use in testing the software.
4. Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and on whether or not the software tested is ready for release.
6. Retesting the Defects: Defects reported during execution are retested once they have been fixed.

Software Release Life Cycle


A software release refers to the distribution, whether public or private, of an initial or a new and upgraded version of a computer software product. Each time a software program or system is changed, the programmers and company doing the work decide how to distribute the program or system, or the changes to that program or system. Software patches are one method of distributing the changes, as are downloads and compact discs.

Software release stages
The software release life cycle is composed of different stages that describe the stability of a piece of software and the amount of development it requires before final release. Each major version of a product usually goes through a stage when new features are added, or the alpha stage; a stage when it is being actively debugged, or the beta stage; and finally a stage when all important bugs have been removed, or the stable stage. Intermediate stages may also be recognized. The stages may be formally announced and regulated by the project's developers, but sometimes the terms are used informally to describe the state of a product. Conventionally, code names are often used by many companies for versions prior to the release of the product, though the actual product and features are rarely secret.

Pre-alpha
Sometimes a build known as pre-alpha is issued, before the release of an alpha or beta. In contrast to alpha and beta versions, the pre-alpha is usually not "feature complete". At this stage designers are still determining exactly what functionalities the product should and should not have.

Alpha
The alpha version of a product still awaits full debugging or full implementation of all its functionality, but satisfies a majority of the software requirements. It often lacks features promised in the final release but demonstrates the feasibility and basic structure of the software. As the first major stage in the release life cycle, it is named after the Greek letter alpha, the first letter in the Greek alphabet. The alpha build of the software is usually the first build delivered to the software testers. In the first phase of alpha testing, developers test the software using white box techniques. Additional inspection is then performed using black box or grey box techniques. This is usually done by another dedicated testing team, sometimes concurrently. Moving to black box testing is often known as the second stage of alpha testing.

Beta
A beta version or beta release usually represents the first version of a computer program that implements all features in the initial requirements analysis. It is likely to be useful for internal demonstrations and previews to select customers, but unstable and not yet ready for release. Some developers refer to this stage as a preview, as a technical preview (TP), or as an early access. As the second major stage in the release life cycle, following the alpha stage, it is named after the Greek letter beta, the second letter in the Greek alphabet.


Often this stage begins when the developers announce a feature freeze on the product, indicating that no more feature requirements will be accepted for this version of the product. Only software issues, or bugs and unimplemented features, will be addressed. Beta versions stand at an intermediate step in the full development cycle. Developers release either a closed beta or an open beta; closed beta versions are released to a select group of individuals for a user test, while open betas are released to a larger community group, usually the general public. The testers report any bugs that they find, and sometimes minor features they would like to see in the final version. An example of a major public beta test was when Microsoft started releasing regular Windows Vista Community Technology Previews (CTPs) to beta testers starting in January 2005. The first of these was build 5219. Subsequent CTPs introduced most of the planned features, as well as a number of changes to the user interface, based in large part on feedback from beta testers. Windows Vista was deemed feature complete with the release of build 5308 CTP, released on February 22, 2006, and much of the remaining work between that build and the final release of the product focused on stability, performance, application and driver compatibility, and documentation. When a beta becomes available to the general public it is often widely used by the technologically savvy and those familiar with previous versions, as though it were the finished product. Usually developers of freeware or open-source betas release them to the general public, while proprietary betas go to a relatively small group of testers. Recipients of highly proprietary betas may have to sign a non-disclosure agreement. A release is called feature complete when the product team agrees that the functional requirements of the system are met and no new features will be put into the release, but significant software bugs may still exist.

Companies with a formal software process tend to enter the beta period with a list of known bugs that must be fixed to exit the beta period, and some companies make this list available to customers and testers. As the Internet has allowed for rapid and inexpensive distribution of software, companies have begun to take a more flexible approach to use of the word "beta". Netscape Communications was infamous for releasing alpha-level versions of its Netscape web browser as public beta releases. In February 2005, ZDNet published an article about the recent phenomenon of beta versions often staying for years and being used as if they were at production level [1]. It noted that Gmail and Google News, for example, had been in beta for a long period of time and were not expected to drop the beta status despite the fact that they were widely used; however, Google News did leave beta in January 2006. This technique may also allow a developer to delay offering full support and/or responsibility for remaining issues. In the context of Web 2.0, people even talk of perpetual betas to signify that some software is meant to stay in the beta state. The term beta test applied to software follows from an early IBM hardware development convention dating back to punched card tabulating and sorting machines. Hardware first went through an alpha test for preliminary functionality and manufacturing feasibility, then a beta test to verify that it actually correctly performed the functions it was supposed to, and then a c test to verify safety. With the advent of programmable computers and the first shareable software programs, IBM used the same terminology for testing software. Beta tests were conducted by people or groups other than the developers. As other companies began developing software for their own use, and for distribution to others, the terminology stuck and is now part of our common vocabulary.

Release candidate
The term release candidate refers to a final product, ready to release unless fatal bugs emerge. In this stage, the product features all designed functionalities and no known showstopper-class bugs. At this phase the product is usually code complete. Microsoft Corporation often uses the term release candidate. During the 1990s, Apple Computer used the term "golden master" for its release candidates, and the final golden master was the general availability release. Other terms include gamma (and occasionally also delta, and perhaps even more Greek letters) for versions that are substantially complete but still under test, and omega for final testing of versions that are believed to be bug-free and may go into production at any time. Gamma, delta, and omega are, respectively, the third, fourth, and last letters of the Greek alphabet. Some users disparagingly refer to release candidates and even final "point oh" releases as "gamma test" software, suggesting that the developer has chosen to use its customers to test software that is not truly ready for general


release. Often, beta testers, if privately selected, will be billed for using the release candidate as though it were a finished product. A release is called code complete when the development team agrees that no entirely new source code will be added to this release. There may still be source code changes to fix defects, changes to documentation and data files, and changes to the code for test cases or utilities. New code may be added in a future release.

Gold/general availability release
The gold or general availability release version is the final version of a particular product. It is typically almost identical to the final release candidate, with only last-minute bugs fixed. A gold release is considered to be very stable and relatively bug-free, with a quality suitable for wide distribution and use by end users. In commercial software releases, this version may also be signed (used to allow end users to verify that the code has not been modified since the release). The expression that a software product "has gone gold" means that the code has been completed and "is being mass-produced and will be for sale soon." Other terms for the version include gold master, gold release, or gold build. The term gold anecdotally refers to the use of a "gold master disc", which was commonly used to send the final version to manufacturers, who use it to create the mass-produced retail copies. It may in this context be a hold-over from music production. In some cases, however, the master disc is still actually made of gold, for both aesthetic appeal and resistance to corrosion. Microsoft and others use the term "release to manufacturing" (RTM) to refer to this version (for commercial products, like Windows XP, as in "Build 2600 is the Windows XP RTM release"), and "release to Web" (RTW) for freely downloadable products.

Box Copy
A box copy is the final product, printed on a disc that is included in the actual release, complete with disc graphic art. This term is used mostly by reviewers to differentiate from gold master discs. A box copy does not necessarily come enclosed in the actual boxed product - it refers to the disc itself.


Stable/unstable
In open source programming, version numbers or the terms stable and unstable commonly distinguish the stage of development. The term stable refers to a version of software that is substantially identical to a version that has been through enough real-world testing to reasonably assume there are no showstopper problems, or at least that any problems are known and documented. On the other hand, the term unstable does not necessarily mean that there are problems - rather, that enhancements or changes have been made to the software that have not undergone rigorous testing and that more changes are expected to be imminent. Users of such software are advised to use the stable version if it meets their needs, and to only use the unstable version if its new functionality is of interest exceeding the risk that something might simply not work right.

11 Capability Maturity Model


Capability Maturity Model: A process developed by the SEI in 1986 to help improve, over time, the application of an organization's supporting software technologies. The process is broken into five levels of sequential development: Initial, Repeatable, Defined, Managed and Optimizing.

Maturity model
The Capability Maturity Model (CMM) is a way to develop and refine an organization's processes. The first CMM was for the purpose of developing and refining software development processes. A maturity model is a structured collection of elements that describe characteristics of effective processes. A maturity model provides:
a place to start
the benefit of a community's prior experiences
a common language and a shared vision
a framework for prioritizing actions
a way to define what improvement means for your organization


A maturity model can be used as a benchmark for assessing different organizations for equivalent comparison. It describes the maturity of the company based upon the projects the company is dealing with and its clients.

Structure of CMM
Maturity Levels: A layered framework providing a progression to the discipline needed to engage in continuous improvement. (It is important to state here that an organization develops the ability to assess the impact of a new practice, technology, or tool on its activity. Hence it is not a matter of adopting these; rather, it is a matter of determining how innovative efforts influence existing practices. This really empowers projects, teams, and organizations by giving them the foundation to support reasoned choice.)

Key Process Areas: A key process area (KPA) identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important.

Goals: The goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.

Common Features: Common features include practices that implement and institutionalize a key process area. The five types of common features are: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.

Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the key process areas.

Levels of the CMM
(See chapter 2 of the March 2002 edition of CMMI from SEI, page 11.) There are five levels of the CMM. According to the SEI,


"Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."

Level 1 - Initial
At maturity level 1, processes are usually ad hoc, and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects.

Level 2 - Repeatable
At maturity level 2, software development successes are repeatable. The processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.

Level 3 - Defined
The organization's set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes from the organization's set of standard processes according to tailoring guidelines. The organization's management establishes process objectives based on the organization's set of standard processes and ensures that these objectives are appropriately addressed.

Level 4 - Managed
Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt

the "rocess to "articular "ro9ects without measura<le losses of ,uality or de!iations from s"ecifications. Cr$ani=ations at this le!el set ,uantitati!e ,uality $oals for <oth software "rocess and software maintenance. 8e*el $ B O+timiAing Maturity le!el % focuses on continually im"ro!in$ "rocess "erformance throu$h <oth incremental and inno!ati!e technolo$ical im"ro!ements. Fuantitati!e "rocessim"ro!ement o<9ecti!es for the or$ani=ation are esta<lished; continually re!ised to reflect chan$in$ <usiness o<9ecti!es; and used as criteria in mana$in$ "rocess im"ro!ement. The effects of de"loyed "rocess im"ro!ements are measured and e!aluated a$ainst the ,uantitati!e "rocess-im"ro!ement o<9ecti!es. .oth the defined "rocesses and the or$ani=ationNs set of standard "rocesses are tar$ets of measura<le im"ro!ement acti!ities. $i& $igma is a methodolo$y to mana$e "rocess !ariations that uses data and statistical analysis to measure and im"ro!e a com"anyAs o"erational "erformance. )t wor+s <y identifyin$ and eliminatin$ defects in manufacturin$ and ser!ice-related "rocesses. The ma/imum "ermissi<le defect is 1.4 "er one million o""ortunities. Jowe!er; Si/ Si$ma is manufacturin$-oriented and needs further rese on its rele!ance to software de!elo"ment. Ca+a6ilit. Maturit. Model 1ntegration #apability Maturity Model Integration %#MMI& is a "rocess im"ro!ement a""roach that "ro!ides or$ani=ations with the essential elements of effecti!e "rocesses.
Z1[

The latest !ersion of CMM) !er 1.# was released in Au$ust #00&. There e!elo"ment; CMM)

are 1 constellations of CMM) in the new !ersion; namelyD- CMM) Ser!ices and CMM) Ac,uisition. CMM) for

e!elo"ment 'er 1.# consists of ## "rocess areas with Ca"a<ility or

Maturity le!els. CMM) is created <y the Software :n$ineerin$ )nstitute ?S:)@ and is a!aila<le for download from the S:). CMM) should <e ada"ted to each indi!idual com"any; therefore com"anies are not Scertified.S A com"any is a""raised ?e.$. with an a""raisal method li+e SCAMP)@ at a certain le!el of CMM). The results of such an a""raisal can <e "u<lished if released <y the a""raised or$ani=ation.Z#[

Process Areas
The CMMI contains 25 process areas:
Causal Analysis and Resolution
Configuration Management
Decision Analysis and Resolution
Integrated Project Management
Integrated Supplier Management
Integrated Teaming
Measurement and Analysis
Organizational Environment for Integration
Organizational Innovation and Deployment
Organizational Process Definition
Organizational Process Focus
Organizational Process Performance
Organizational Training
Product Integration
Project Monitoring and Control
Project Planning
Process and Product Quality Assurance
Quantitative Project Management
Requirements Development
Requirements Management
Risk Management
Supplier Agreement Management
Technical Solution
Validation
Verification

History
The CMMI is the successor of the CMM. The CMM was developed from 1987 until 1997. In 2002 version 1.1 of the CMMI was released; v1.2 followed in August 2006. The goal of the CMMI project is to improve the usability of maturity models for software engineering and other disciplines, by integrating many different models into one framework. It was created by members of industry, government and the SEI. The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industrial Association (NDIA) Systems Engineering Committee. [1]

Software Engineering Institute

The Software Engineering Institute (SEI) is a federally funded research and development center sponsored by the U.S. Department of Defense and operated by Carnegie Mellon University, with offices in Pittsburgh, Pennsylvania; Frankfurt, Germany; Redstone Arsenal, Alabama; and Arlington, Virginia. The SEI publishes books on software engineering for industry, government and military applications and practices. It is most famous for the software Capability Maturity Model (now CMMI), which identifies essential elements of effective system and software engineering processes and can be used to rate the level of an organization's capability for producing quality software systems. The SEI is also the home of CERT, the federally funded computer security organization. Formerly known as the Computer Emergency Response Team, CERT's work involves handling computer security incidents and vulnerabilities, publishing security alerts, researching long-term changes in networked systems, and developing information and training to help improve security at Internet sites across the country.


The SEI has often been the site of protests against the military funding of research, as well as other issues.

IEEE 829

IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:

Test Plan: a management planning document that shows:
How the testing will be done
Who will do it
What will be tested
How long it will take
What the test coverage will be, i.e. what quality level is required

Test Design Specification: detailing test conditions and the expected results, as well as test pass criteria.

Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification.

Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed.

Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next.

Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed.


Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why the test has failed. This document is deliberately named an incident report, and not a fault report, because a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of the incident upon testing.

Test Summary Report: a management report providing any important information uncovered by the tests accomplished, including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met the acceptance criteria defined by the project stakeholders.

ISO 9000

ISO 9000 is a family of certification standards developed by the International Organization for Standardization (ISO) that serves as a basis for quality standards for global manufacturers.


12 Client/Server Architecture

An architecture in which the user's PC (the client) is the requesting machine and the server is the supplying machine, both of which are connected via a local area network (LAN) or wide area network (WAN).

Client: The client is the user's machine, which contains the user interface (Windows, Mac, etc.) and can perform some or all of the application processing.

File Servers: File servers, which range in size from PCs to mainframes, store data and programs and share those files with the clients. In this case, the server functions as a remote disk drive for the clients.

Fat Client: A user's computer that contains its own applications, which are run on the machine. New programs are installed on the local hard disk. This is the typical way people use their computers.

Fat Server: A server in a client/server environment that performs most or all of the application processing, with little or none performed in the client. The counterpart to a fat server is a thin client.

Two-tier client/server: A two-way interaction in a client/server environment, in which the user interface is stored in the client and the data are stored in the server. The application logic can be in either the client or the server.

Three-tier client/server: A three-way interaction in a client/server environment, in which the user interface is stored in the client, the bulk of the business application logic is stored in one or more servers, and the data are stored in a database server.

True Client/Server: To be a true client/server environment, both client and server must share in the business processing. For example, a database server processes requests from the client to look up data or update data in its database.
In this case, the server performs a search at its end to respond to the query received from a client. To illustrate this concept, review the examples below.

Non-Client/Server

This e/am"le is not StrueS client5ser!er <ecause the file ser!er functions li+e a remote dis+ dri!e for loadin$ the data<ase mana$ement system ? .MS@ software into the client and readin$ the data<ase as if it were local. All 100;000 records are transmitted to the client for com"arin$; and the client does all the "rocessin$.


Two-tier Client/Server

This is "true" client/server, because the server participates in the business processing. A query generated in the client is sent to the database management system (DBMS) on the server, which responds by searching at the server side and returning only the results of the query. If 50 records matched the criteria out of our 100,000-record example, only those 50 records are transmitted back to the client; two thousand times less data over the network.
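The contrast between shipping every record and letting the DBMS filter can be sketched with Python's sqlite3 module standing in for the server-side DBMS (the table and the one-in-2000 "WEST" distribution are invented so that exactly 50 of the 100,000 rows match):

```python
import sqlite3

# The DBMS plays the server's role; sqlite3 is an in-process stand-in
# for a networked database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
conn.executemany(
    "INSERT INTO customers (id, region) VALUES (?, ?)",
    [(i, "WEST" if i % 2000 == 0 else "EAST") for i in range(1, 100_001)],
)

# Non-client/server style: pull every record and filter in the client.
all_rows = conn.execute("SELECT id, region FROM customers").fetchall()
client_filtered = [r for r in all_rows if r[1] == "WEST"]

# Two-tier style: the WHERE clause runs on the server side, so only
# the matching records cross the "network".
server_filtered = conn.execute(
    "SELECT id, region FROM customers WHERE region = 'WEST'"
).fetchall()

print(len(all_rows))         # 100000 rows shipped to the client
print(len(server_filtered))  # 50 rows shipped to the client
```

Both approaches yield the same 50 records; what differs is where the filtering happens and how much data travels.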

Three-tier Client/Server

)n this case; the "rocessin$ is di!ided <etween two or more ser!ers; one used for a""lication "rocessin$ and another for data<ase "rocessin$. This is !ery common in lar$e enter"rises. See a""lication ser!er and data<ase ser!er.

On the server side, the Web uses a multi-tier architecture with interlinked Web servers, application servers, database servers and caching servers. On the client side, user machines commonly execute scripts embedded in countless Web pages.

They also execute Java applets, Java programs and rich client applications, all of which means that both client and server are cooperating in tandem.

Application Server: Before the Web, the term referred to a computer in a client/server environment that performed the business logic (the data processing). In a two-tier client/server environment, which is most common, the user's machine performs the business logic as well as the user interface, and the server provides the database processing. In a three-tier environment, a separate computer (application server) performs the business logic, although some part may still be handled by the user's machine.

Web Server: A computer that delivers Web pages to browsers and other files to applications via the HTTP protocol. It includes the hardware, operating system, Web server software, TCP/IP protocols and site content (Web pages and other files). If the Web server is used internally and not by the public, it may be called an "intranet server."

HTTP Server: "Web server" may refer to just the software and not the entire computer system. In such cases, it refers to the HTTP server (IIS, Apache, etc.) that manages requests from the browser and delivers HTML documents and files in response. It also executes server-side scripts (CGI scripts, JSPs, ASPs, etc.) that provide functions such as database searching and e-commerce.

One Computer or Thousands: A single computer system that provides all the Internet services for a department or a small company would include the HTTP server (Web pages and files), FTP server (file downloads), NNTP server (newsgroups) and SMTP server (mail service). This system with all its services could be called a Web server. In ISPs and large companies, each of these services could be in a separate computer or in multiple computers. A datacenter for a large public Web site could contain hundreds or thousands of Web servers.
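As a minimal sketch of that request/response exchange, the standard library's http.server module can play the Web server role (the handler and page content are invented for the demo, and port 0 simply asks the OS for any free port):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser sends an HTTP request; the server answers with HTML.
        body = b"<html><body>Hello from a tiny Web server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the browser: issue an HTTP GET and read the HTML response.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    page = resp.read().decode()

server.shutdown()
print(page)
```

A production HTTP server such as IIS or Apache adds logging, concurrency, security and server-side scripting on top of this same basic exchange.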


Web Servers Are Built Into Everything

Web servers are not only used to deliver Web pages. Web server software is built into numerous hardware devices and functions as the control panel for displaying and editing internal settings. Any network device, such as a router, access point or print server, may have an internal Web server (HTTP server), which is accessed by its IP address just like a Web site.

Web Server Fundamentals



Web browsers communicate with Web servers via the TCP/IP protocol. The browser sends HTTP requests to the server, which responds with HTML pages and possibly additional programs in the form of ActiveX controls or Java applets.
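That request/response exchange is plain text carried over TCP/IP. A small sketch of both halves of the conversation (the host name below is only illustrative):

```python
# Sketch of the HTTP exchange: the browser sends a plain-text request over
# TCP/IP, and the server replies with a status line, headers and an HTML body.
def build_get_request(host, path="/"):
    """Format an HTTP/1.1 GET request as the raw bytes a browser would send."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path, host).encode("ascii")

def parse_status_code(raw_response):
    """Pull the numeric status code out of a raw HTTP response."""
    status_line = raw_response.split(b"\r\n", 1)[0]  # e.g. b"HTTP/1.1 200 OK"
    return int(status_line.split(b" ")[1])
```

Sending the bytes from `build_get_request` down a TCP socket to port 80 of a Web server and feeding the reply to `parse_status_code` reproduces, in miniature, what every browser does on each page load.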

Web Server Environment

This diagram shows all the server-side processes that can take place in a Web server and an application server. There is overlap between a Web server and an application server, as both can perform similar tasks. The Web server and application server can be in the same machine or in separate computers.


A Built-in Web Server

This home "a$e is not comin$ from the *e<; <ut from the *e< ser!er <uilt into TallyAs (10& color laser "rinter. Any *e< <rowser can access the confi$uration "anel <y )P address when *e< ser!er software "ro!ides the user interface. The 1-#.1&(.1.#%0 is the )P address of the "rinter.


Database Server: A computer in a LAN dedicated to database storage and retrieval. The database server is a key component in a client/server environment. It holds the database management system (DBMS) and the databases. Upon requests from the client machines, it searches the database for selected records and passes them back over the network. A database server and file server may be one and the same, because a file server often provides database services. However, the term implies that the system is dedicated for database use only and not a central storage facility for applications and files.
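The search-and-return cycle a database server performs can be sketched with Python's built-in sqlite3 module standing in for a full DBMS; the table and data are invented for illustration:

```python
# Sketch of a database server's core cycle: hold the data, search it on
# request, and pass matching records back. sqlite3 stands in for a DBMS.
import sqlite3

def create_db():
    """Build an in-memory stand-in for the server's database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO customers (name) VALUES (?)",
                     [("Ada",), ("Grace",), ("Edsger",)])
    return conn

def select_records(conn, name_prefix):
    """Search the database for matching records and return them to the client."""
    cur = conn.execute("SELECT id, name FROM customers WHERE name LIKE ?",
                       (name_prefix + "%",))
    return cur.fetchall()
```

In a real deployment the client's request and the returned records travel over the network; here both sides run in one process to keep the sketch self-contained.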

Three-Tier Client/Server

An a""lication ser!er in a three-tier client5ser!er en!ironment "ro!ides middle tier "rocessin$ <etween the userAs machine and the data<ase mana$ement system ? .MS@.


Since the advent of the Web, the term "application server" most often refers to software in an intranet/Internet environment that hosts a variety of language systems used to program database queries and/or general business processing. These scripts and services, such as JavaScript and Java Server Pages (JSPs), typically access a database to retrieve up-to-date data that is presented to users via their browsers or client applications. The application server may reside in the same computer as the Web server (HTTP server) or be in a separate computer. In large sites, multiple computers are used for both application servers and Web servers (HTTP servers). Examples of Web application servers include BEA WebLogic Server and IBM's WebSphere Application Server. See Web server.


A""lication Ser!ers R *e< Ser!ers

There is o!erla" <etween an a""lication ser!er and a *e< ser!er; as <oth can "erform similar tas+s. The *e< ser!er ?JTTP ser!er@ can in!o+e a !ariety of scri"ts and ser!ices to ,uery data<ases and "erform <usiness "rocessin$; and a""lication ser!ers often come with their own JTTP ser!er which deli!ers *e< "a$es to the <rowser.

