
01

Types of Software Testing

There are two main types of software testing:
Functional Testing
Non-functional Testing
For each of these, we have many testing types that we will be discussing further in this presentation.
02
Functional Testing
Functional testing is a stage in the software delivery lifecycle in which quality engineers verify whether the application under test's features behave as per their requirements.
Examples of functional testing:
Unit Testing
Integration Testing
Smoke Testing
Sanity Testing
Regression Testing
End-to-End Testing
Acceptance Testing
White Box Testing
Black Box Testing
Interface Testing
03
Non-Functional Testing
When performing non-functional testing, testers evaluate other attributes of system behavior, such as the system's performance, reliability, and stability.
Examples of non-functional testing:
Performance Testing
Security Testing
Usability Testing
Installation Testing
Reliability Testing
04
Unit Testing
Unit testing is a type of software testing which is done on an individual unit or component to test its correctness.
Typically, unit testing is done by the developer during the application development phase. Each unit in unit testing can be viewed as a method, function, procedure, or object.
Developers often use test automation tools such as NUnit, xUnit, JUnit, Chai, and Jest for test execution.
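As a minimal sketch of what such a test looks like in practice, here is a JUnit 5 example. The PriceCalculator class and its applyDiscount method are invented for illustration and are not part of the original slides.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical unit under test: a simple price calculator.
class PriceCalculator {
    // Applies a percentage discount to a base price.
    double applyDiscount(double price, double discountPercent) {
        return price - (price * discountPercent / 100.0);
    }
}

// Unit test: exercises one method in isolation and checks its result.
class PriceCalculatorTest {

    @Test
    void applyDiscount_reducesPriceByGivenPercentage() {
        PriceCalculator calculator = new PriceCalculator();
        // 10% off 200.0 should yield 180.0 (with a small floating-point tolerance).
        assertEquals(180.0, calculator.applyDiscount(200.0, 10.0), 0.001);
    }
}
```

The point is that the test exercises a single method in isolation, with no database, network, or other modules involved.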
05
Integration Testing
Integration testing is a type of software testing where two or more modules of an application are logically grouped together and tested as a whole. The focus of this type of testing is to find defects in the interfaces, communication, and data flow among modules. A top-down or bottom-up approach is used while integrating modules into the whole system.
This type of testing is done on integrating modules of a system or between systems. For example, a user is buying a flight ticket from an airline website. Users can see flight details and payment information while buying a ticket, but flight details and payment processing are two different systems. Integration testing should be done while integrating the airline website and the payment processing system.
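A rough sketch of such a test, assuming hypothetical FlightDetailsService, PaymentGateway, and BookingService classes (all names and behaviors are invented for illustration):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical modules: in a real suite these would be the actual application classes.
class FlightDetailsService {
    double fareFor(String flightNumber) {
        // Illustrative fixed fare lookup.
        return "AI101".equals(flightNumber) ? 250.0 : 300.0;
    }
}

class PaymentGateway {
    boolean charge(double amount) {
        // Illustrative rule: any positive amount is accepted.
        return amount > 0;
    }
}

// The module under integration: it wires the two components together.
class BookingService {
    private final FlightDetailsService flights = new FlightDetailsService();
    private final PaymentGateway payments = new PaymentGateway();

    boolean bookTicket(String flightNumber) {
        double fare = flights.fareFor(flightNumber);
        return payments.charge(fare); // data flows from one module into the other
    }
}

// Integration test: verifies the data flow across module boundaries, not a single unit.
class BookingIntegrationTest {
    @Test
    void bookingChargesThePaymentSystemWithTheLookedUpFare() {
        assertTrue(new BookingService().bookTicket("AI101"));
    }
}
```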
06
Regression Testing
Any new change or feature added to the software can wreck its existing functionalities. Regression testing is performed every time alterations are made to check for the software's stability and functionalities. Due to its work-intensive nature, regression testing is often automated.
Example: A food delivery app added a function to help users add multiple promotions on top of each other. A regression test needs to be done to make sure the checkout and payment process is not affected.
07
Sanity Testing
Similar to regression testing, sanity testing is conducted for a new build with minor bug fixes or new code added. If rejected in the sanity testing phase, the build will not proceed to further testing. While regression testing checks the entire system after alterations, sanity testing targets only the specific areas affected by the new code or bug fixes.
Example: On an e-commerce webpage, users could not add a particular product to their cart even when stock was available. After the issue was fixed, sanity testing is performed to ensure that the "add to cart" function is indeed working.
08
Smoke Testing
When a new build is completed, it is handed to the QAs for smoke testing. In this phase, only the most critical and core functionalities are tested to ensure that they yield the intended results. As an early-stage acceptance test, smoke testing adds a verification layer to determine whether the new build can proceed to the next stage or needs rework.
Example: A utility company built an app with a function to report outages in customers' homes. This function reports the address and other relevant information, as well as notifies the homeowner when a dispatcher is on the way to help. Smoke testing will validate this feature on a fundamental level to ensure that when an outage is reported, the correct information is sent so a dispatcher can be there on time.
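In automated form, a smoke test is often nothing more than a quick check that the most critical path is alive. A minimal sketch, assuming a hypothetical health endpoint for the outage-reporting service (the URL is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal smoke test: only checks that the most critical endpoint is up and answering,
// not that every detail of the outage-reporting flow is correct.
public class OutageReportingSmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical health endpoint of the outage-reporting service.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/outages/health"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 200) {
            System.out.println("Smoke test passed: outage-reporting service is reachable.");
        } else {
            System.out.println("Smoke test failed: got HTTP " + response.statusCode());
        }
    }
}
```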
09
End-to-End Testing
It involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
For example, a tester is testing a pet insurance website. End-to-end testing involves testing of buying an insurance policy, LPM, tag, adding another pet, updating credit card information on users' accounts, updating user address information, and receiving order confirmation emails and policy documents.
10
Security Testing
It is a type of testing performed by a specialized team that checks whether any hacking method can penetrate the system.
Security testing is done to check how secure the software, application, or website is from internal and/or external threats. This testing covers how well the software is protected from malicious programs and viruses, and how secure and strong the authorization and authentication processes are.
It also checks how the software behaves under a hacker's attack or malicious programs, and how the software maintains data security after such an attack.
11
Performance Testing
Performance testing is testing of an application's stability and response time by applying load.
Here, stability means the ability of the application to withstand load, and response time is how quickly the application is available to users. Performance testing is done with the help of tools; Loader.io, JMeter, LoadRunner, etc. are good tools available in the market.
12
Types of Performance Testing

Load Testing
Load testing is testing of an application's stability and response time by applying a load that is equal to or less than the designed number of users for the application.
For example, if your application handles 100 users at a time with a response time of 3 seconds, then load testing can be done by applying a load of at most 100 users. The goal is to verify that the application responds within 3 seconds for all of those users.

Stress Testing
Stress testing is testing of an application's stability and response time by applying a load that is more than the designed number of users for the application.
For example, if your application handles 1000 users at a time with a response time of 4 seconds, then stress testing can be done by applying a load of more than 1000 users. Test the application with 1100, 1200, and 1300 users and note the response time. The goal is to verify the stability of the application under stress.

Endurance Testing
Endurance testing is testing of an application's stability and response time by applying load continuously for a longer period, to verify that the application keeps working fine.
For example, car companies run soak tests to verify that users can drive cars continuously for hours without any problem.
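To make the load-testing example concrete, here is a minimal sketch that fires 100 concurrent requests at a placeholder URL and checks the slowest response against the 3-second goal. Real load tests would normally use a dedicated tool such as JMeter or LoadRunner, as noted above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Very small load-test sketch: applies the designed load and reports the slowest response time.
public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 100;          // the designed load from the slide's example
        long maxAcceptableMillis = 3000;    // 3-second response-time goal

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))   // placeholder URL
                .GET()
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        List<Callable<Long>> tasks = new ArrayList<>();
        for (int i = 0; i < concurrentUsers; i++) {
            tasks.add(() -> {
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                return (System.nanoTime() - start) / 1_000_000; // elapsed milliseconds
            });
        }

        long slowest = 0;
        for (Future<Long> result : pool.invokeAll(tasks)) {
            slowest = Math.max(slowest, result.get());
        }
        pool.shutdown();

        System.out.println("Slowest response: " + slowest + " ms");
        System.out.println(slowest <= maxAcceptableMillis ? "Load test passed" : "Load test failed");
    }
}
```

Raising concurrentUsers above the designed limit turns the same sketch into a stress test; running it in a loop for hours approximates an endurance (soak) test.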
13
Usability Testing
Usability testing is testing an application from the user's perspective to check the look and feel and user-friendliness.
For example, there is a mobile app for stock trading, and a tester is performing usability testing. Testers can check scenarios such as whether the mobile app is easy to operate with one hand, whether the scroll bar is vertical, whether the background color of the app is black, and whether the price of a stock is displayed in red or green.
The main idea of usability testing for this kind of app is that as soon as the user opens the app, the user should get a glance at the market.
14
UAT (User Acceptance Testing)
User acceptance testing (UAT), also called application testing or end-user testing, is a phase of software development in which the software is tested in the real world by its intended audience.
User acceptance testing validates the testing done at the end of the development cycle. It is typically completed after unit testing, quality assurance, system testing, and integration testing. The software may undergo other testing phases and be completely functional but might still not meet its requirements if it is not well received by its intended users.

Who performs UAT?

End users normally perform user acceptance testing. They are the most effective group to test software in this form because they know exactly how the software will be used on a daily basis and what changes need to be made to make it suitable for this day-to-day use. Internal functional experts also play a role in UAT, as they help shape UAT cycles and test management, as well as interpret the results.
15
Alpha Testing
Alpha testing is a type of acceptance testing performed to identify all possible issues and bugs before releasing the final product to the end users. Alpha testing is carried out by testers who are internal employees of the organization. The main goal is to identify the tasks that a typical user might perform and test them.
To put it as simply as possible, this kind of testing is called alpha only because it is done early on, near the end of the development of the software, and before beta testing. The main focus of alpha testing is to simulate real users by using black box and white box techniques.
16
Beta Testing
Beta testing is performed by "real users" of the software application in a "real environment" and can be considered a form of external User Acceptance Testing. It is the final test before shipping a product to the customers. Direct feedback from customers is a major advantage of beta testing. This testing helps to test the product in the customer's environment.
A beta version of the software is released to a limited number of end users of the product to obtain feedback on product quality. Beta testing reduces product failure risks and provides increased quality of the product through customer validation.
17
A/B Testing
A/B testing is an experimental method in which two versions of anything are contrasted to see which is "better" or more effective.
This is often done in marketing when two different types of content, whether it be email copy, a display ad, a call-to-action (CTA) on a web page, or any other marketing asset, are being compared. This is usually done before launching any product in the market so that the company can get better results.
It also helps in comparing the performance of two or more variants of an email and then selecting the best among them based on the results given by the audience.
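As a sketch of how the two variants might be compared once the results are in, here is a simple two-proportion z-test; all counts are invented for illustration, and the 1.96 threshold corresponds to roughly 95% confidence.

```java
// Minimal sketch of comparing two variants from an A/B test with a two-proportion z-test.
// All numbers below are invented for illustration.
public class AbTestComparison {
    public static void main(String[] args) {
        // Variant A: 1000 recipients, 120 clicked; Variant B: 1000 recipients, 150 clicked.
        long visitorsA = 1000, conversionsA = 120;
        long visitorsB = 1000, conversionsB = 150;

        double rateA = (double) conversionsA / visitorsA;
        double rateB = (double) conversionsB / visitorsB;

        // Pooled conversion rate under the assumption that both variants perform equally.
        double pooled = (double) (conversionsA + conversionsB) / (visitorsA + visitorsB);
        double standardError = Math.sqrt(pooled * (1 - pooled) * (1.0 / visitorsA + 1.0 / visitorsB));
        double z = (rateB - rateA) / standardError;

        System.out.printf("Variant A rate: %.3f, Variant B rate: %.3f, z = %.2f%n", rateA, rateB, z);
        // |z| > 1.96 corresponds to roughly 95% confidence that the difference is real.
        System.out.println(Math.abs(z) > 1.96 ? "Difference looks significant" : "Difference may be noise");
    }
}
```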
18
Ad Hoc / Monkey Testing
Ad hoc testing is a kind of testing where testers who know the software well test it without a strict plan. It is also called Random Testing or Monkey Testing.
Testers might use some existing test cases or choose them randomly to test the software.
The term "Monkey Testing" comes from the idea that testers are essentially "monkeying around" with the software, mimicking a playful and exploratory approach to uncover hidden problems.
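A tiny automated flavor of this idea is to throw random inputs at a function and check that a basic invariant never breaks. The sketch below assumes a made-up formatPrice method; it is only meant to show the random, unplanned nature of the inputs.

```java
import java.util.Random;

// Tiny monkey-testing sketch: feed random inputs to a function and check that a basic
// invariant always holds. The formatPrice method is a made-up example target.
public class MonkeyTest {

    // Hypothetical function under test: formats a price in cents as a dollar string.
    static String formatPrice(long cents) {
        long abs = Math.abs(cents);
        return (cents < 0 ? "-$" : "$") + (abs / 100) + "." + String.format("%02d", abs % 100);
    }

    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 10_000; i++) {
            long cents = random.nextLong() % 1_000_000; // random, unplanned input
            String formatted = formatPrice(cents);
            // Invariant: the output must always contain a dollar sign and a decimal point.
            if (!formatted.contains("$") || !formatted.contains(".")) {
                System.out.println("Unexpected output for " + cents + ": " + formatted);
                return;
            }
        }
        System.out.println("10,000 random inputs handled without breaking the invariant.");
    }
}
```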
