Manual Testing Complete


Manual Testing

Quality: Quality means satisfying all the client requirements of a project/product and delivering the
application on time without any defects.
Quality Assurance: Quality Assurance is the process of verifying all the standards and processes of a company
so that the right product is given to the client.
Quality Control: Quality Control is the process of verifying the product itself against the client requirements
to make sure a defect-free product is given to the client.
Software: Software is a set of programs, logic & related data that gives instructions to the system about
what to do & how to do it.
(Or)
A set of executable programs is called software. We have 2 types of software:
1. Product
2. Project
1. Product: A product is developed for a segment of customers. There is no end to the development
& testing activities for a product, because the product is released into the market version by version.
E.g.: Operating systems, MS Office, Photoshop, processors etc.
2. Project: A project is developed only for a specific customer. The development & testing activities of a
project do end, because once we satisfy the client requirements we stop the development & testing
activities.
E.g.: Manufacturing application, hospital management application, etc.
In each & every project/product development we have 2 different teams:
1. Development Team
2. Testing Team
1. Development Team: The responsibility of the Development Team is to develop the application (project/
product) according to the client requirements.
2. Testing Team: The responsibility of the Testing Team is to test the developed application (project/product)
using different testing types & techniques according to the client requirements.
Testing: Testing is verification & validation to ensure that a defect-free application/product is delivered to
the client.
(Or)
Software Testing: Testing is a process of executing a program with the intent of finding errors.
(Or)
Performing testing on a software application or product is called software testing.
We have 2 different types of testing under software testing:
1. Manual Testing
2. Automation Testing

1. Manual Testing: Performing testing on the application/product with human interaction is called
Manual Testing.
2. Automation Testing: Performing testing on the application/product with the help of third-party
tools like QTP, Selenium, LoadRunner etc. is called Automation Testing.
In both manual testing & automation testing we perform the same testing on the
application; only the way we perform the testing differs between manual
testing & automation testing.
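As an illustration of the difference, the same login check a manual tester performs by hand can be scripted with an automation tool. Below is a minimal sketch using Selenium WebDriver in Python; the URL, element ids ("username", "password", "login") and the expected page title are hypothetical, not from any real application.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")              # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("s3cret")
        driver.find_element(By.ID, "login").click()
        # The same pass/fail judgment a manual tester makes by eye:
        assert "Dashboard" in driver.title, "Login did not reach the dashboard"
    finally:
        driver.quit()

The steps and the verification are identical to the manual procedure; only the way of performing them differs.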
Why is testing required in software? (Or) What is meant by quality software?
1. To maintain the quality of the application.
2. To identify defects and fix them before the application is released, so the application reaches the
clients without any defects.
3. To meet the client requirements (functionality).
4. To meet the client expectations (performance, usability, compatibility, security ...).
5. To release on time.
What are the testing objectives?
1. To find the difference between customer expected values & actual values.
2. To find the errors which went unidentified by the development team while developing the
application.
3. To check whether the application is working according to the company standards or not.
Client: A client is the person who provides the requirements to the company for developing their business
application.
Company: A company is an organization which develops the application according to the client
requirements.
End User: An end user is the person who uses the application in the final stage.
E.g.: Infosys has developed an online application for SBI Bank. Here SBI Bank is the
client of Infosys, and the end users are the customers of SBI Bank.
Difference between Defect, Bug & Error
Defect: While executing the test cases, if we find any issue, that issue is called a defect.
Bug: Once the developer accepts our defect, it is called a bug.
Error: An error may be a program error or a syntax error.
Software Bidding
A proposal to develop the software is called software bidding.
Software Development Life Cycle (SDLC)
SDLC is the process we follow to complete the software project/product development
activities; it includes both development & testing activities.

1. 0e%uire"ents:
;equirements is the first phase of S/16. !nce the project has $een confirmed $etween client
& company. 6lient will provide the requirements to the company. 6lient always provides the requirements in
their $usiness prospective.
.n this phase :A :usiness Analyst" will $e involving to collect the requirements from the
client. 3or collect the requirements from the client, :A will following $elow approaches.
(. Questionnaire
&. <)=<nowledge )ransfer
2. %al*through
4. .nspection
(. Questionnaire: :y using this approach :A will collect the requirements from the client $y as*ing the
questions to the clients.
&. 1T21nowledge Transfer: .n this approach client will provide <)=Sessions to the :A for understand
the requirements.
2. #al3t$roug$: .n this approach :A will go through the requirements documents which has provided
$y the client and understand the requirements.
4. 4ns!ection: .n this approach :A will collect the requirements from the clients $y inspecting the client
$usiness location directly.
2. Analysis: In the Analysis phase, the BA analyzes the requirements and converts the requirement
documents into an understandable format called the Use case/BRS doc.
BRS doc: The BRS doc is divided into 2 docs:
1. SRS
2. FRS
SRS doc: The SRS doc contains details about the software & hardware requirements.
FRS doc: The FRS doc contains details about the functionality of the project.
Use case doc: A use case doc is in Word format. One use case doc contains one flow of requirements.
3. Design: In the Design phase, the system architect designs the architecture of the
application.
There are 2 types of designs:
1. HLD (High Level Design): It defines the overall architecture of the application, including all the
modules in the application.
2. LLD (Low Level Design): It defines the architecture of the individual modules, including all the
sub-modules & screens of the application.
Note: Most projects use UML for designing the architecture of the application.
4. Coding: In the Coding phase, the development team writes the code for the functionality
of the individual modules. After all the individual modules are completed, the development team
integrates all the modules into a single application.
5. Testing: In the Testing phase, the testing team performs testing on the application
based on the client requirements. While testing the application, the testing team executes the test cases
using different types of testing & techniques.
6. Release/Maintenance/Production: In this phase, technical people deploy the
application into the production environment.
Types of Testing
We have 2 types of testing:
1. Functional Testing
2. Non-Functional Testing
1. Functional Testing: Testing the application against the business requirements. Functional testing is done
using the functional specifications provided by the client or the design specifications (like use cases)
provided by the design team.
Unit Testing
In unit testing, the development team tests the individual modules of the
project using a set of white box testing techniques. It is also called Unit/Module/Component Testing.
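As a minimal sketch of what a unit test looks like, assuming a hypothetical add() function as the unit under test (Python's unittest module stands in for whatever framework the development team uses):

    import unittest

    def add(a, b):                      # hypothetical unit under test
        return a + b

    class AddTests(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()

Each test exercises the module in isolation, which is what distinguishes unit testing from integration testing.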
Integration Testing (Top-Down, Bottom-Up Testing)
In integration testing, the development team & testing team are both involved in testing
the application. During integration testing we verify whether the data flow between all the modules in the
application is working or not, because every individual module may work fine, yet after integrating all the
modules into a single application, that application may or may not work.
Approaches in Integration Testing
1. Top-Down Approach
2. Bottom-Up Approach
3. Sandwich Approach
4. Big Bang Approach
1. Top-Down Approach
If all the main modules are developed but some of the sub-modules are not, the
programmers create temporary programs called stubs, which act as the sub-modules (see the sketch after
these approaches).
2. Bottom-Up Approach
If all the sub-modules are developed but some of the main modules are not, the
programmers create temporary programs called drivers, which act as the main modules.
3. Sandwich Approach
If some of the main modules & some of the sub-modules are not developed, the
programmers create temporary programs called drivers & stubs, which act as the main modules
& sub-modules.
4. Big Bang Approach
Developing all the modules first and then integrating them into a single
application is called the Big Bang approach.
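To make stubs concrete, here is a minimal sketch in Python: a hypothetical OrderModule (a developed main module) depends on a payment sub-module that is not yet built, so a stub with a canned response stands in for it during top-down integration testing.

    class PaymentServiceStub:
        """Temporary stand-in for the undeveloped payment sub-module."""
        def charge(self, amount):
            return {"status": "success", "amount": amount}   # canned response

    class OrderModule:
        """Developed main module; normally wired to the real payment module."""
        def __init__(self, payment_service):
            self.payment = payment_service

        def place_order(self, amount):
            result = self.payment.charge(amount)
            return result["status"] == "success"

    order = OrderModule(PaymentServiceStub())
    assert order.place_order(100.0)     # main-module flow tested via the stub

A driver is the mirror image: a small temporary program that calls a developed sub-module the way the missing main module would.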
Types of Testing
Sanity Testing
After receiving the initial build, test engineers validate whether the major functionalities of the application
are working or not. If the major functionalities of the application are working fine, we can perform
further testing of the application. If the major functionalities are not working fine, we cannot
move to further testing, so we reject the build.
Smoke Testing
Validating the major functionality of the application by the development team in the development
environment is called smoke testing.
Note: Definition-wise, sanity & smoke testing are different, but in practice they are the same.
Usability Testing/User Interface Testing/GUI Testing
Verifying the user-friendliness of the application in terms of colors, logos, fonts, alignments etc. is
called usability testing.
Ease of use (understandable for users to operate)
Look & feel (pleasant or attractive screens)
Speed of interface (fewer events to complete the task)
Usability Test Scenarios:
Web page content should be correct without any spelling or grammatical errors.
All fonts should be as per the requirements.
All the text should be properly aligned.
All the error messages should be correct without any spelling or grammatical errors, and the error
message should match the field label.
Tooltip text should be there for every field.
All the fields should be properly aligned.
Enough space should be provided between field labels, columns, rows, and error messages.
All the buttons should have a standard format and size.
A home link should be there on every single page.
Disabled fields should be grayed out.
Check for broken links and images.
A confirmation message should be displayed for any kind of update and delete operation.
Check the site at different resolutions (640 x 480, 800 x 600 etc.).
Check that the end user can run the system without frustration.
Check that the Tab key works properly.
The scroll bar should appear only if required.
If there is an error message on submit, the information filled in by the user should still be there.
The title should display on each web page.
All fields (textbox, dropdown, radio button etc.) and buttons should be accessible by keyboard
shortcuts, and the user should be able to perform all operations by using the keyboard.
Check that the dropdown data is not truncated due to the field size, and also check whether the data is
hardcoded or managed via an administrator.
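Several of these scenarios can be scripted. As one example, a minimal sketch of the "check for broken links" scenario in Python (the start URL is hypothetical; the requests and beautifulsoup4 libraries are assumed to be installed):

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    page = "https://example.com/"        # hypothetical page under test
    html = requests.get(page, timeout=10).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        url = urljoin(page, a["href"])
        status = requests.head(url, timeout=10, allow_redirects=True).status_code
        if status >= 400:                # 4xx/5xx responses indicate broken links
            print("Broken link:", url, status)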
Functional Testing
Validating the overall functionality of the application, including the major functionality, with respect to
the client's business requirements is called functional testing.
Functionality or requirements testing has the following coverage:
Control Flow or Behavioral Coverage (object properties checking).
Input Domain Coverage (correctness of size and type of every I/O object).
Error Handling Coverage (preventing negative navigation).
Calculations Coverage (correctness of o/p values).
Backend Coverage (data validation & data integrity of database tables).
Service Levels (order of functionality or services).
Successful Functionality (combination of all the above).
Behavioral Testing/GUI Testing/Control Flow Testing
In control flow testing, we validate each & every object in the application, checking whether the screens
are responding correctly or not while operating on those objects in the screens.
Input Domain Testing:
In input domain testing, we use the boundary value analysis & equivalence class partitioning
techniques to validate the size & type of each & every input object in the application.
Boundary Value Analysis:
Boundary values are used for testing the size and range of an object.

Equivalence Class Partitions:
Equivalence classes are used for testing the type of the object.
Error Handling Testing
In error handling testing, we validate each & every screen in the application by giving invalid
data to the objects, to check whether we get error messages when the user enters invalid data.
Calculation Coverage Testing
In calculation testing, we validate the calculation part of the application by giving valid
data to the objects, to check whether we get the correct output values when the user enters valid data.
Functional Test Scenarios:
Test that all the mandatory fields are validated.
Test that the system does not display an error message for optional fields.
Test that numeric fields do not accept alphabets and that a proper error message displays.
Test for negative numbers, if they are allowed for numeric fields.
Test the max length of every field to ensure the data is not truncated.
Test that the pop-up message ("This field is limited to 500 characters") displays if the data reaches
the maximum size of the field.
Test that a confirmation message displays for update and delete operations.
Test that amount values display in currency format.
Test the timeout functionality.
Test the functionality of the available buttons.
Test that the Privacy Policy & FAQ are clearly defined and available for users.
Test that if any functionality fails, the user gets redirected to the custom error page.
Test that all the uploaded documents open properly.
Test that the user is able to download the uploaded files.
Test the email functionality of the system.
Test that the JavaScript works properly in different browsers (IE, Firefox, Chrome, Safari and Opera).
Test what happens if a user deletes cookies while on the site.
Test what happens if a user deletes cookies after visiting a site.
Test that all the data inside combo/list boxes is arranged in chronological order.
Recovery Testing
In this testing we verify how much time the application takes to come back from an abnormal
state to the normal state.
Examples of recovery testing:
1. While an application is running, suddenly restart the computer, and afterwards check the validity of
the application's data integrity.
2. While an application is receiving data from a network, unplug the connecting cable. After some time,
plug the cable back in and analyze the application's ability to continue receiving data from the point
at which the network connection disappeared.
3. Restart the system while a browser has a definite number of sessions. Afterwards, check that
the browser is able to recover all of them.
Co"!ati*ility Testing
.n 6ompati$ility )esting, we are verifying whether the application is wor*ing in different $rowsers,
different types of !S, different types of system software & ,tc.
3orward compati$ility ==G application is ready to run $ut !perating system is not
supporting.

:ac*ward compati$ility ==G !perating system is supporting $ut the application has some
internal coding pro$lems to run on !perating system.
Co"!ati*ility Test Scenarios:
)est the we$site in different $rowsers .,, 3irefo#, 6hrome, Safari and !pera" and ensure the we$site
is displaying properly.
)est the -)+1 version $eing used is compati$le with appropriate $rowser versions.
)est the images display correctly in different $rowsers.
)est the fonts are usa$le in different $rowsers.
)est the java script code is usa$le in different $rowsers.
)est the Animated H.3@s across different $rowsers.
Tool for Co"!ati*ility Testing:
Spoon.net' Spoon.net provides access to thousands of applications :rowsers" without any installs.
)his tool helps you to test your application on different $rowsers on one single machine.
Configuration Testing
Validating the application on different system configurations, like RAM, processor and
HDD etc., is called configuration testing.
Note: Compatibility testing is suggested for projects.
Configuration testing is suggested for products.
Certification Testing:
Certification testing is also related to compatibility testing; however, here the product is certified as fit
to use. Certification is applicable to hardware, operating systems or browsers, e.g. hardware/laptops are
certified for Windows 7 etc. (Or)
We certify whether the product is compatible or not with the appropriate
software/hardware devices.
Re-testing
Re-testing the application to check whether the bugs are fixed or not. Re-testing is done based on the failed
test cases of the previous build. (Or)
Re-execution of a test with multiple test data to validate a function, e.g. to validate multiplication,
test engineers use different combinations of input in terms of min, max, -ve, +ve, zero, int, float, etc.
Regression Testing:
During this testing, the testing team performs regression testing on the modified build
(application) to verify whether the fixed defect impacts other functionality of the application or not.
Regression testing is done based on the passed test cases of the previous build.
Any dependent modules may also show side effects.
When should we do regression testing?
Regression testing is typically carried out in the following cases:
1) When new functionality is introduced, regression testing is conducted to find out
whether the existing functionality is broken or not.
2) Whenever there are bug fixes, we need to do regression testing to verify that the other functionality is not
affected by the bug fixes.
3) After code refactoring, regression testing needs to be carried out to make sure that no functionality has
regressed due to the code refactoring.
4) After merging branches or code, regression testing needs to be done, as during the merging of branches or
code there are more chances of functionality breakage.
Regression testing plays an important role in software testing.
Perfor"ance Testing
)esting the application how much load is applied on server to e#ecute the current application in terms
of 1oad, Stress & 0olume testing.
During !erfor"ance testing we $ae to use 6 tec$ni%ues.
/oad Testing: 0alidating the performance of the application with the client e#pected users is called 1oad
)esting. )esting of an application with various ?o. of concurrent users to verify the response time of the
application $y increasing the ?o. of users.
;un S5) under customer e#pected configuration and customer e#pected load ?o. of users" to calculate
Espeed in processingK is called as 1oad testing.
Stress Testing' 0alidating the performance of the application $y increasing the client e#pected users to the
ma#imum level & identifies the $rea*point of the application. :y giving different sets of load continuously for
some time. %e are going to find the application is sta$ility and where it is getting crashed will $e analy9ed in
the stress testing.
;un S5) under customer e#pected configuration and more than customer e#pected configuration and
relia$ility is called as stress testing.
Soa3 Testing;Endurance Testing
;un S5) under customer e#pected configuration and customer e#pected load continuously to identify
the memory lea*ages. .t means how much time the application is wor*ing even after applying reasona$le load
on the application is called longevity//ura$ility is called ,ndurance testing/Soa* )esting.
>olu"e Testing: )esting of an application with various volumes of data to verify where the application is
$rea*ing.
General test scenarios for performance testing:
To determine the performance, stability and scalability of an application under different load conditions.
To determine if the current architecture can support the application at peak user levels.
To determine which configuration sizing provides the best performance level.
To identify application and infrastructure bottlenecks.
To determine if the new version of the software adversely impacted response time.
To evaluate the product and/or hardware to determine if it can handle the projected load volumes.
How do we do performance testing: by manual testing or by automation?
Practically it is not possible to do performance testing manually because of some drawbacks, like:
A larger number of resources would be required.
Simultaneous actions are not possible.
Proper system monitoring is not available.
It is not easy to perform repetitive tasks.

Hence, to overcome the above problems, we should use a performance testing tool. Below is a list of some
popular testing tools:
Apache JMeter
LoadRunner
Borland Silk Performer
Rational Performance Tester
WAPT
NeoLoad
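To illustrate the idea behind these tools, here is a minimal load-test sketch in Python (the URL and user count are hypothetical): it fires concurrent requests and reports response times. Real tools like JMeter or LoadRunner add ramp-up, think time, monitoring and reporting on top of this basic pattern.

    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"         # hypothetical system under test
    USERS = 20                           # simulated concurrent users

    def one_request(_):
        start = time.time()
        status = requests.get(URL, timeout=30).status_code
        return status, time.time() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_request, range(USERS)))

    times = [elapsed for _, elapsed in results]
    print("avg %.2fs  max %.2fs" % (sum(times) / len(times), max(times)))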
Security Testing
Validating the security of the application in terms of authentication & authorization.
Authentication: Verifying whether the system accepts a valid/right user or not.
Authorization: Verifying whether the system provides the right information to the right users or not.
Test Scenarios for Security Testing:
Verify that web pages which contain important data, like passwords, credit card numbers, secret
answers for security questions etc., are submitted via HTTPS (SSL).
Verify that important information like passwords, credit card numbers etc. displays in encrypted
format.
Verify that password rules are implemented on all authentication pages, like Registration, Forgot
password and Change password.
Verify that if the password is changed, the user is not able to log in with the old password.
Verify that the error messages do not display any important information.
Verify that if the user has logged out of the system, or the user session has expired, the user is not
able to navigate the site.
Verify access to the secured and non-secured web pages directly without login.
Verify that the "View Source code" option is disabled and not visible to the user.
Verify that the user account gets locked out if the user enters the wrong password several times.
Verify that the cookies do not store passwords.
Verify that if any functionality is not working, the system does not display any application, server, or
database information. Instead, it should display the custom error page.
Verify against SQL injection attacks.
Verify the user roles and their rights. For example, a requestor should not be able to access the
admin page.
Verify that the important operations are written to log files, and that the information is traceable.
Verify that the session values are in an encrypted format in the address bar.
Verify that the cookie information is stored in encrypted format.
Verify the application against brute force attacks.
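As a sketch of the SQL injection scenario above, the snippet below (Python + sqlite3, with a hypothetical users table) shows why a tester tries payloads like ' OR '1'='1: a query built by string concatenation matches every row, while a parameterized query treats the payload as a literal value.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

    evil = "' OR '1'='1"                 # classic injection payload

    # Vulnerable: the payload turns the WHERE clause into a tautology.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % evil).fetchall()
    print("concatenated query matched:", len(rows))      # 1 row leaks out

    # Safe: the payload is bound as a plain value, so nothing matches.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
    print("parameterized query matched:", len(rows))     # 0 rows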
Database Testing
Validating the database of the application is called database testing. Whatever we perform in the front-end
application should reflect in the back-end database, & whatever we perform in the back end
should reflect in the front-end application.
To perform database testing, the tester should be aware of the below-mentioned points:
The tester should understand the functional requirements, business logic, application flow and
database design thoroughly.
The tester should figure out the tables, triggers, stored procedures, views and cursors used for the
application.
The tester should understand the logic of the triggers, stored procedures, views and cursors created.
The tester should figure out the tables which get affected when insert, update and delete (DML)
operations are performed through the web or desktop applications.
With the help of the above-mentioned points, the tester can easily write the test scenarios for database
testing.
Test Scenarios for Database Testing:
Verify the database name: the database name should match the specifications.
Verify the tables, columns, column types and defaults: all of them should match the specifications.
Verify whether each column allows a null or not.
Verify the primary and foreign keys of each table.
Verify the stored procedures:
Test whether each stored procedure is installed or not.
Verify the stored procedure name.
Verify the parameter names, types and number of parameters.
Test whether the parameters are required or not.
Test the stored procedure by deleting some parameters.
Test that when the output is zero, zero records are affected.
Test the stored procedure by writing simple SQL queries.
Test whether the stored procedure returns the values.
Test the stored procedure with sample input data.
Verify the behavior of each flag in the table.
Verify that the data gets properly saved into the database after each page submission.
Verify the data when the DML (update, delete and insert) operations are performed.
Check the length of every field: the field length in the back end and front end must be the same.
Verify the database names of QA, UAT and production. The names should be unique.
Verify the encrypted data in the database.
Verify the database size. Also test the response time of each query executed.
Verify that the data displayed on the front end is the same as in the back end.
Verify the data validity by inserting invalid data into the database.
Verify the triggers.
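Two of these scenarios expressed as code, as a minimal sketch (Python + sqlite3; the orders/customers tables and their columns are hypothetical): a NOT NULL column check and a referential-integrity check for orphan rows.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (customer_id INTEGER PRIMARY KEY);
        CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                             customer_id INTEGER NOT NULL);
    """)

    # Check: the column must reject nulls, as the specification demands.
    try:
        conn.execute("INSERT INTO orders VALUES (1001, NULL)")
        print("FAIL: NULL accepted in a NOT NULL column")
    except sqlite3.IntegrityError:
        print("PASS: NULL rejected")

    # Check: no order may reference a missing customer (orphan rows).
    conn.execute("INSERT INTO customers VALUES (7)")
    conn.execute("INSERT INTO orders VALUES (1002, 7)")
    orphans = conn.execute("""
        SELECT o.order_id FROM orders o
        LEFT JOIN customers c ON o.customer_id = c.customer_id
        WHERE c.customer_id IS NULL""").fetchall()
    print("PASS: no orphans" if not orphans else "FAIL: %s" % orphans)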
Test Data
The data or values which we use to test the application are called test data. We use test data
in input domain testing, re-testing & regression testing.
Positive Testing
Performing testing on the application with +ve test data is called positive testing.
Negative Testing
Performing testing on the application with -ve test data is called negative testing.
E.g.: Performing testing on a login screen with a valid UID & PWD is +ve testing. If we perform the testing
on the same application with an invalid UID & invalid PWD, an invalid UID & valid PWD, a valid UID &
invalid PWD, etc., it is called -ve testing.
Alpha Testing
Performing the testing on the application directly by the client in the developer's environment is
called alpha testing.
(Or)
We invite selected customers/clients to our location and ask them to take a look at the software, how
it works etc., and get feedback from them; this is called alpha testing. Alpha testing also happens only
ONCE, not many times.
The main purpose of alpha testing is to get feedback from the clients or customers or users about how they
feel about the software. This feedback gets analyzed by the Product Manager, Delivery Manager and team to
change certain things before the release.
Beta Testing
Performing the testing on the application directly by client-like people is called beta testing.
(Or)
In beta testing we distribute the software to the selected users or customers or clients and ask for feedback
on it. Alpha testing is already done before beta testing.
Installation Testing
During this test, the testing team validates whether the application build, along with its supporting
software, installs at the customer's site on similarly configured systems. During this test, the testing team
observes the below factors:
Setup program execution to start the installation
Easy interface
Amount of disk space occupied after installation
Parallel Testing/Comparative Testing
The testing team compares our application with various versions (or) similar applications to identify the
weaknesses & strengths of the application.
Acceptance Testing
After completing system testing, project management concentrates on acceptance testing to
collect feedback from real customers & model customers. In acceptance testing, developers & testers
are also involved, to convince the customers. There are two forms of acceptance testing: alpha testing &
beta testing.
Release Testing
After completing acceptance testing & the resulting modifications, project management concentrates on
the software release. Here the project manager forms the release team with a few developers, a few testers,
a few h/w engineers and one delivery manager as head. This team goes to the customer's site and starts the
s/w installation at the customer's site. The release team observes the following factors during the release:
Complete installation
Overall installation
Input device handling
Co-existence with the OS
Co-existence with other s/w to share resources
After completing the above observations at the customer's site, the release team provides training to the
customer-site people, and then the release team comes back to the organization.
Adhoc Testing
Adhoc testing is an informal testing type with an aim to break the system.
This testing is usually an unplanned activity.
It doesn't follow any test design techniques to create test cases. In fact, it does not create test cases
at all!
It is primarily performed if the testers' knowledge of the system under test is very high.
Testers randomly test the application without any test cases or any business requirement doc.
Adhoc testing can be achieved with the testing technique called error guessing.
Error guessing can be done by people having enough experience with the system to "guess" the
most likely source of errors.

Types of adhoc testing
1. Buddy Testing
2. Exploratory Testing
3. Pair Testing
4. Monkey Testing
Buddy Testing
Due to a lack of time to complete the application, testers join with developers to continue
development & testing in parallel from the early stages of development. (Or)
Two buddies mutually work on identifying defects in the same module. Mostly one buddy will be from the
development team and the other person will be from the testing team. Buddy testing helps the testers develop
better test cases, and the development team can also make design changes early. This testing usually happens
after unit testing is complete.
Exploratory Testing
Due to a lack of documentation, testers prepare scenarios and cases for their responsible modules depending
on past experience, discussions with others, operating the SUT screens, etc.
Pair Testing
Two testers are assigned modules; they share ideas and work on the same machine to find defects. One person
executes the tests and the other person takes notes on the findings. The roles of the persons can be a tester
and a scribe during testing. (Or)
Due to a lack of skills, a junior tester joins senior testers to share knowledge during testing.
Buddy testing is a combination of unit and system testing done together by developers and testers, but pair
testing is done only by testers with different knowledge levels (experienced and non-experienced, to share
their ideas and views).
Monkey Testing
Randomly testing the product or application without test cases, with a goal of breaking the system.
Advantages of Adhoc Testing:
Adhoc testing saves a lot of time, as it doesn't require elaborate test planning, documentation and test
case design.
It checks for the completeness of testing and finds more defects than planned testing.
Disadvantages of Adhoc Testing:
This testing requires no documentation/planning/process to be followed. Since this testing aims at
finding defects through a random approach, without any documentation, defects will not be mapped to
test cases. Hence, sometimes, it is very difficult to reproduce the defects, as there are no test steps or
requirements mapped to them.
Do"ain Testing:
/omain testing is a software testing technique, !$jective of domain testing is to select test cases of critical
functionality of the software and e#ecute them. /omain testing does not intend to run all the e#isting test
cases.
End-to-end Testing:
End-to-end testing is performed by the testing team; the focus of end-to-end testing is to test end-to-end
flows, e.g. right from order creation till reporting, or order creation till item return, etc. End-to-end
testing is usually focused on mimicking real-life scenarios and usage. End-to-end testing involves testing
the information flow across applications.
Parallel Testing/Competitive Testing
Performing the testing on the application by comparing it with previous versions of the application
or with other competitive products in the market, to find the weaknesses & strengths of the application.
Agile Testing
Due to sudden changes in requirements, the testing team changes the corresponding scenarios and cases,
and then they perform retesting and regression testing on the modified software.
User Acceptance Testing (UAT):
User acceptance testing is a must for any project; it is performed by the clients/end users of the software.
User acceptance testing allows SMEs (subject matter experts) from the client to test the software with their
actual business or real-world scenarios and to check if the software meets their business requirements.
Testing "et$odologies
)here are 2 different types of testing methodologies.
(. :lac* $o# )esting
&. %hite $o# )esting
2. Hrey $o# )esting
1. .lac3 *o? Testing
Performing testing on the application without having the structural *nowledge or" coding *nowledge
is called :lac* $o# testing. :lac* $o# testing is done $y the )esting team, $ecause testing team people doesn@t
required coding *nowledge $ut application *nowledge is required for testing the application.
2. #$ite *o? testing
Performing testing on the application with having the structural or source code *nowledge is called
white $o# testing. %hite $o# testing is done $y the development team, $ecause the developers should have
structural *nowledge is required.
6. =rey *o? testing
Performing testing on the application with structural as well as application *nowledge is called grey
$o# testing. Hrey $o# testing is performed $y person who is having *nowledge on $oth structural *nowledge
as well as application testing.
Software Testing Life Cycle (STLC)
STLC is the process which we follow to complete the software testing activities in a project. It is
part of the SDLC: the SDLC is the combination of both development & testing activities, but the STLC covers
only the testing activities.
Test Initiation
In the test initiation phase, the QA manager prepares the test strategy document. This
document is also known as the test methodology or test approach. The QA manager forms the team for testing
the application.
Test Plan
After getting the test strategy document from the QA manager, the test lead designs the
test plan document along with the senior members of the project, based on the SRS, project plan & test
strategy documents.
Test Plan docu"ent
)est Plan document is in the format of word document. .t is the route map document for testing of the
application. .t defines what to test, how to test, who to test & when to test.
Contents of the Test Plan document
1. Author: It defines who designed this document (the test lead along with senior members).
2. Reviewed by: It defines who reviewed this document (the QA Manager).
3. Description: A brief description of the project.
4. Test Items: A list of all the modules in the project.
5. Features to be tested: Lists the testable features in the current release; we are going to
test only those features.
6. Features not to be tested: Lists the non-testable features in the current release; we are not
going to test those features.
7. Entry Criteria: It defines when to start performing the testing. We start the testing after
the development of the application is complete and the application is available in the testing
environment.
8. Exit Criteria: It defines when to close the testing activity: after all the
test cases have been executed at least once and all the defects are closed.
9. Suspension Criteria: It defines when to suspend or stop the testing activity. We stop the testing
based on the below factors:
Environment issues
Application crashes
Exception errors
Major defect detected in the build (blocker)
10. Resumption Criteria: It defines when to continue the testing activity. We continue testing based
on the below factors:
Blocker defect got fixed
Application crashes got cleared
Exception errors are fixed
11. Test Deliverables: It defines the list of documents to be prepared by the testers in the testing
environment.
E.g.: test scenarios, test cases, automation test scripts, defect reports & status reports.
12. Testing types to be used: It defines all the testing types we are using to perform the testing
in the project.
E.g.: sanity testing, smoke testing, adhoc testing, exploratory testing, functional testing, user
interface testing etc.
13. Tools to be used: It defines all the automation tools we are using in the project.
E.g.: QTP, Selenium, LoadRunner, etc.
14. Environment needs: It defines the software used for developing the application & the
minimum hardware requirements for this application.
15. Roles & Responsibilities: It defines the allocation of work to every member of the team, module-wise.
Test Case Design
In this phase, the testing team designs the test cases based on the client requirements.
Once we get the requirements in the form of the use case doc & BRS doc, we need to start designing
the test cases.
Test case: A test case is a sequential, elaborated, executable form of the requirements.
To design the test cases we follow different testing techniques:
1. BVA (Boundary Value Analysis)
2. ECP (Equivalence Class Partitioning)
3. Error guessing
1. BVA (Boundary Value Analysis): It defines the range of the data which we are using to perform
the testing (see the code sketch after these techniques).
Use case: 1. Validate UID (User Id)
BRS: 1. UID should accept only 4-15 characters
BVA:
Min = 4 = Pass
Max = 15 = Pass
Min+1 => 4+1 = 5 => Pass
Max+1 => 15+1 = 16 => Fail
Min-1 => 4-1 = 3 => Fail
Max-1 => 15-1 = 14 => Pass
Use case: 2. Validate PWD (Password)
BRS: 2. PWD should accept only 4-7 characters
BVA:
Min = 4 = Pass
Max = 7 = Pass
Min+1 => 4+1 = 5 => Pass
Max+1 => 7+1 = 8 => Fail
Min-1 => 4-1 = 3 => Fail
Max-1 => 7-1 = 6 => Pass
E.g.: Prepare the BVA for a local mobile number which starts with '9':
Max = 9999999999 => Pass
Min = 9000000000 => Pass
Max+1 => 10000000000 => Fail
Min+1 => 9000000001 => Pass
Max-1 => 9999999998 => Pass
Min-1 => 8999999999 => Fail
2. ECP (Equivalence Class Partitioning): It defines whether the data which we are using
to perform the testing on the application is valid or invalid.
There are 2 types of equivalence class partitions:
1. Valid
2. Invalid
E.g.: To validate the User Name, which contains only alphabets ("Naresh"):
Valid: a-z (small letters), A-Z (capital letters)
Invalid: Numbers (0-9), special characters (e.g. @, #, $, %, &, *)
E.g.: Prepare the ECP for the mobile number "8099163726":
Valid: 0-9
Invalid: a-z, A-Z, space, special characters
E.g.: Prepare the ECP for the PAN card number "PPR9468K":
Valid: 0-9, A-Z
Invalid: a-z, space, special characters
3. Error Guessing: Error guessing is experience-based testing. Based on previous experience, the
tester guesses the errors in the application and designs the test cases.
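The BVA and ECP techniques above translate directly into code. A minimal sketch (Python; the UID rule of 4-15 alphanumeric characters is the hypothetical requirement from the BVA table):

    def valid_uid(uid):                  # hypothetical validation rule under test
        return 4 <= len(uid) <= 15 and uid.isalnum()

    MIN, MAX = 4, 15
    bva_cases = [                            # (length, expected result) pairs
        (MIN, True), (MAX, True),            # Min / Max     -> Pass
        (MIN + 1, True), (MAX - 1, True),    # Min+1 / Max-1 -> Pass
        (MIN - 1, False), (MAX + 1, False),  # Min-1 / Max+1 -> Fail
    ]
    for length, expected in bva_cases:
        assert valid_uid("a" * length) is expected, length

    ecp_cases = [                        # one sample per equivalence class
        ("naresh", True),                # valid class: small letters
        ("NARESH", True),                # valid class: capital letters
        ("nar@#$", False),               # invalid class: special characters
    ]
    for value, expected in ecp_cases:
        assert valid_uid(value) is expected, value
    print("all BVA/ECP checks passed")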
Test Case Execution:
In this phase, the testing team executes the test cases. Once the application has been
deployed into the test environment, the testing team performs testing on the application by executing the
test cases.
During execution we compare the expected result of the test case with the actual result in the
application. If the expected result is the same as the actual result, we mark the status of the step as Pass,
and if the expected result does not match the actual result, we mark the status of the step as Fail.
If we identify any mismatch between the expected & actual result, we need to report that mismatch as a
defect to the development team.
Defects
In this phase, both the testing team & development team are involved: reporting the defects, fixing the
defects, retesting the defects & closing the defects etc.
Defect Life Cycle/Bug Life Cycle
Defect: A mismatch between the expected result & actual result is called a defect.
New: When the tester finds a new defect and posts it for the first time, the status is New.
Assigned: After the tester has posted the defect, the test lead validates whether the defect is correct or
not. If the defect is correct, then the lead assigns the bug to the corresponding development lead, and the
status changes from New to Assigned.
Open: After the test lead has assigned the defect, the developer starts analyzing and working on the defect
fix. The developer then gives the status as Open.
Fixed: When the developer makes the necessary code changes and fixes the defect, the developer gives the
status as 'Fixed'.
Verified: After the bug is fixed by the developer, the tester tests the bug again. If the bug is no longer
present in the software, the tester approves that the bug is fixed and changes the status to "Verified".
Reopen: If the bug still exists even after the bug was fixed by the developer, the tester changes the status
to "Reopened". The bug goes through the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists
in the application, the tester changes the status of the bug to "Closed". This state means that the bug is
fixed, tested and approved.
Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's
status is changed to "Duplicate".
Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug
is changed to "Rejected".
Deferred: A bug changed to the Deferred state is expected to be fixed in the next releases. The
reasons for changing a bug to this state involve many factors: the priority of the bug may be low,
there may be a lack of time before the release, or the bug may not have a major effect on the software.
Not a bug: The state is given as "Not a bug" if there is no change in the functionality of the application.
For example: if the customer asks for some change in the look and feel of the application, like a change of
color of some text, then it is not a bug but just a change in the looks of the application.
Regression Testing:
During this testing, the testing team performs regression testing on the modified
application to verify whether the fixed defect impacts other functionality of the application.
Most projects use automation tools like QTP, Selenium, Silk Test etc. to perform the
regression testing on the application.
Test Closure
Whenever all the test cases have been executed and all the defects are closed, the QA manager
signs off the testing activity, and the technical team deploys the application into the production
environment.
Adhoc testing strategies
By the software testing principles, exhaustive testing is impossible; for this reason organizations
follow optimal test strategies for software. Due to certain risks, some organizations follow adhoc
testing.
a) Monkey Testing:
Due to a lack of time, the testing team conducts testing on some modules of the SUT only, instead of all
modules.
b) Buddy Testing:
Due to a lack of time, testers join with developers to continue development and testing in parallel from the
early stages of development.
c) Exploratory Testing:
Due to a lack of documentation, testers prepare scenarios and cases for their responsible modules depending
on past experience, discussions with others, internet browsing, operating the SUT screens, conferences with
the customer-site people, etc.
d) Pair Testing:
Due to a lack of skills, junior testers join senior testers to share their knowledge during testing.
e) Agile Testing:
Due to sudden changes in requirements, the testing team changes the corresponding scenarios and cases, and
then they perform retesting and regression testing on the modified software.
f) Debugging:
To increase or improve the skills of the testers, developers can release software with known bugs. If the
testers find those bugs, the testing team is good; otherwise the team needs some training.
Responsibilities of a Manual Tester
1. Understanding all the requirements of a project or product.
2. Understanding the testing requirements related to the current project/product.
3. Assisting the test lead during test planning.
4. Writing test scenarios for responsible modules and responsible testing topics by using "black box
techniques".
5. Following IEEE 829 standards while documenting test cases with the required test data and test
environment.
6. Involvement in test case review meetings along with the test lead.
7. Involvement in smoke testing on the initial software build.
8. Executing test cases using test data on the SUT in the test environment to detect defects.
9. Involvement in defect reporting and tracking.
10. Conducting retesting, sanity testing and regression testing on every modified SUT to close bugs.
11. Being strong in SQL to connect to the SUT database while testing.
12. Involvement in the final regression testing on the final SUT during test closure.
13. Joining the developers in acceptance testing to collect feedback on the software from the real
customers/model customers.
14. Assisting the test lead in RTM preparation before signoff from the project/product.
When is a tester said to be a good tester?
A good tester is a tester who finds more defects & reports defects which are accepted by the developers.
0e%uire"ent Tracea*ility Matri?
)his document is used for trac*ing the requirements for chec*ing the scenarios and test cases coverage.
/efault, this matri# is having columns requirement id, scenario id and test case id. ;equirement id will $e
updated $y the test lead and he will share this document to the entire testing team for updating the scenario
ids and test case ids. !nce entire team is updated with their scenario ids and test case ids, test lead is going to
verify every requirement is covered for writing the test case or not. .f at all he finds any of the requirements is
not covered which means scenario ids and test case ids will $e empty for corresponding requirements, then he
will assign those requirements to some$ody in our team for writing scenarios and test cases.
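A minimal illustrative layout of the matrix, with hypothetical ids (one row per requirement; empty cells reveal uncovered requirements):

    Requirement Id    Scenario Id    Test Case Id
    REQ_001           SC_001         TC_001, TC_002
    REQ_002           SC_002         TC_003
    REQ_003           (empty)        (empty)   <-- not covered; assign to a tester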
Difference between Severity & Priority with examples
Severity & priority are assigned to a particular bug to indicate the importance of the bug.
Severity: Severity describes how much the defect impacts the functionality of the application.
Types of Severity
1. Blocker
2. Critical
3. Major
4. Minor
1) Blocker: The application is not working/major functionality is completely broken. The tester cannot do
further testing; the tester is blocked.
2) Critical: Some part of the functionality is broken, the tester cannot test some part of the
functionality, and there is no workaround.
3) Major: In this type, the defects are logical defects which do not block any functionality. The major type
usually contains functional and major UI defects.
4) Minor: It mostly contains UI defects and minor usability defects; defects which do no harm to the
application under test.
Priority: This term indicates the importance of the defect and when it should get addressed or fixed.
1) High: It has high business value; the end user cannot work unless the defect gets fixed. In this case the
priority should be High, meaning an immediate fix of the defect.
2) Medium: The end user can work using a workaround, but there is some functionality the end user cannot
use, and that functionality is not regularly used by the user.
3) Low: No or very little impact on the end user.
Difference between Severity and Priority:
0e%uire"ent 4d Scenario 4d Test Case 4d
(" Severity should $e defined $y QA whereas Priority should $e /efined $y /ev/ /elivery manager
&" Severity driven $y functionality whereas priority driven $y the $usiness value.
2" Severity descri$e the how the defect is impacting the functionality of the product or software under test and
Priority indicates the importance of the defect and when it should gets addressed or fi#ed.
E?a"!les
7ig$ Priority and 7ig$ Seerity:
+ajor functionality failure li*e log in is not wor*ing, crashes in $asic wor*flow of the software are the $est
e#ample of -igh priority and -igh Severity
(" Application crashed while opening
&" %e$site home page failed to load.
7ig$ Priority and /ow Seerity:
(" Spelling mista*e on menu names, client@s names or any important name which is getting highlighted to the
end user.
)here is very common mista*es people were doing while giving the e#amples, they give e#ample of logo and
logo misspelled this is wrong e#ample. it comes under high priority and high severity. 1ogo and company
name is identity of the company or organi9ation then how it should $e low severityD
)ester should $e judgmental while assigning the Severity to the defect
/ow Priority and $ig$ Seerity:
(" 6rashed in application if end users do some weird steps which are not usual or invalid steps.
)his is all a$out Severity and Priority, let me *now if anyone has questions on it.
How do we test cookies in a web application?
A cookie is an encrypted text file used to store small pieces of information on the user's or client's
machine. It is generated by the web server and sent to the internet browser.
There are 6 ways to test the cookies:
1. Typically, a website should work as expected even when cookies are disabled. To test this case, disable
the cookies in the internet browser settings and try to access the website or web application. In this
case the application should work as expected.
2. Configure the internet browser to prompt for cookie acceptance or rejection. Try several times,
accepting and rejecting the cookies, and observe the behavior.
3. Edit the cookie file in any text editor and modify some parameters; due to the modification of the
cookie, it gets corrupted. Now access the website and see its behavior.
4. Remove all the cookies from their defined location (usually the Temp folder) and check the behavior of
the pages.
5. Check the cookie behavior on different browsers.
6. Open the cookies in a text editor and make sure that all sensitive information, like user passwords and
session ids, is encrypted properly.
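Two of these checks sketched in Python (the login URL, form fields and credentials are hypothetical; the requests library is assumed, and an empty cookie jar loosely simulates disabled cookies):

    import requests

    URL = "https://example.com/login"    # hypothetical application URL
    PASSWORD = "s3cret"

    # Check 1: the site should still respond when no cookies are sent back.
    resp = requests.get(URL, cookies={})
    print("works without cookies:", resp.status_code == 200)

    # Check 6: after login, no cookie value should hold the plain-text password.
    session = requests.Session()
    session.post(URL, data={"user": "testuser", "password": PASSWORD})
    for cookie in session.cookies:
        assert PASSWORD not in (cookie.value or ""), cookie.name
    print("no cookie stores the plain-text password")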
What is a test case? What are the contents of a test case?
A test case is a set of steps which gives a certain output with defined input conditions. Typically a test
case contains test steps, and at the end of the test steps the tester needs to compare the actual result with
the expected result.
What are the items of a test case template?
The following parameters should be considered while writing a test case:
1. Test Case Number: this number should be unique.
2. Pre-Condition: Pre-conditions are necessary to make sure that we can get the desired output. The pre-
condition should be all the data necessary to start test execution. Sometimes no pre-condition is needed.
3. Description: The description, or test steps, is a very important parameter of test case writing. Here the
tester writes the exact test steps to get the desired output.
4. Expected Result: The expected output at the end of the test steps should be written here.
5. Actual Result: The actual result the tester gets after executing the test case should go here. It helps
when we execute the same test case again.
6. Status (Pass/Fail): this is the decision-making status of whether the test case passed, failed or was
blocked after test execution.
7. Remarks: if the tester wants to specify some specific thing, this item is made for such comments
or remarks.
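A small filled-in example against this template (all ids, steps and data are hypothetical):

    Test Case Number: TC_LOGIN_001
    Pre-Condition:    A registered user "testuser" with password "s3cret" exists.
    Description:      1. Open the login page. 2. Enter the valid UID & PWD.
                      3. Click the Login button.
    Expected Result:  The user lands on the home page with a welcome message.
    Actual Result:    (recorded after execution)
    Status:           Pass/Fail (after comparing actual with expected)
    Remarks:          Run on the latest build in the test environment.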
