MANUAL TESTING

( By V. Mallikarjuna Reddy, [email protected] )


(Manual Testing Trainer: V. Mallikarjuna Reddy, Email-id: [email protected])


Software Testing
Software Testing is an activity/process to find the bugs in the Software. The main objective is to
provide Quality software to the Customer.

Definition of Quality
● Bug free
● Meets the customer requirements
● User friendly

Requirement
It is the functionality and expectations from the software, as defined by the customer.

Advantages of Testing
● Quality product
● Client satisfaction
● More business
● Bug-free product
● Good reputation in the market
n,

Tester Qualities/Skills
➔ Attitude of breaking the software
➔ Negative thinking
➔ Good analytical skills
➔ Patience
➔ Clear and detailed communication


Organization Hierarchy (technical team only; HR and other non-technical roles are not included):



Manufacturing Company vs Software Company:

Manufacturing company: First manufacturing, then testing (Testers), then launch in the market.

Software company: First s/w coding -> Testing (Testers) -> Live/Production.



Project
A software application developed for the organization's own purpose and not for selling.
Ex: Amazon/Facebook/Flipkart etc.

Product
A software application developed for selling purposes. Ex: JIRA, Google Docs/Drive

Testing Methodologies

★ Black box testing
★ White box testing
★ Grey box testing

Black box testing: The engineers who perform black box testing are called Black box testers. They test the functionality and UI of the application. They do not know how the code is written or the logic inside the code.


White box / Glass box / Clear box testing: Testing the application code (i.e., the internal code and logic) is called white box testing. Usually developers are involved in this.

Grey box testing: A testing method that combines the Black box and White box methods. Usually test engineers who have knowledge of the structural parts are involved in grey box testing.


Software Development Life Cycle (SDLC)

It is the process used to develop software. It contains seven phases:

1. Requirements Gathering 2. Requirements Analysis 3. Design 4. Coding 5. Testing 6. Deployment 7. Maintenance
1. Requirements Gathering: Business Analysts gather the requirements from the customer and prepare the Business Requirement Specification (BRS) document. The BA's responsibility is to gather as much information as possible from the customers. The BRS is prepared by the BA and submitted to the Project Managers.

2. Requirements Analysis: The PMs analyze the Business Requirements (BRS). Requirements are analyzed to check whether they are feasible to develop. Technology selection, the resource plan and the HW/SW plan are also created. Finally, a Functional Requirement Specification (FRS) document is created. The FRS is also called the System Requirement Specification (SRS).

3. Design: Before starting the actual coding, it is very important to understand what we are going to create and what it should look like. The requirement specifications are studied in this phase and the system design is prepared. The system design defines the overall system architecture and describes the desired features, including screen layouts, HLDs and LLDs.

4. Coding: Developers/programmers write the code based on the design documents and follow the coding standards.

5. Testing: Testers review the BRS and the FRS/SRS and create test cases. They start testing once the application is developed by the developers.

6. Deployment: The application is deployed into Live (i.e., Production) when the testing is completed. The public can start using the application once it is Live.

7. Maintenance: Maintenance starts after the application is launched into Production. Maintenance means fixing the bugs found in Live and making any enhancements. Maintenance continues as long as the project is Live.



Overview on Sample Projects: Eg: Amazon



Client - Server over the Internet:



SDLC Models

There are various models for software development. Each model contains a series of phases to build the software successfully.

Waterfall Model: In this model, software development progresses sequentially from one phase to another. This model is also referred to as the Linear Sequential model. Each phase must be completed before the next phase can begin, and the output of each stage becomes the input for the next stage. The sequential execution of all the phases of the SDLC is known as the Waterfall model. Testing is carried out once the code has been fully developed.

When is this model suitable?
● Requirements are clearly defined
● Product definition is stable
● Technology is understood and not dynamic

Advantages of the Waterfall model:
● Simple and easy to follow
● More detailed documents for understanding the requirements
● The documentation is frozen at every phase
● Less investment, because each team starts working only when its phase starts, i.e. there is no need to have all the teams from the start of the project



Disadvantages of the Waterfall model:
● The cost of fixing bugs is high, because the tester's role starts only from the Testing phase
● Requirement changes are not accepted in the middle of the process
● Not suitable for long and complex projects

Testing starts only after the implementation is done. In a large project, where the systems are complex, it is easy to miss key details in the requirements phase itself. In such cases, an entirely wrong product may be delivered to the client. The earlier in the life cycle a defect is detected, the cheaper it is to fix. As the saying goes, "A stitch in time saves nine."
V model / Verification and Validation (V&V):

The V-model is a modified version of the Waterfall model: for each development phase in the waterfall there is a corresponding testing phase. There are Verification phases on one side and Validation phases on the other side of the 'V'.

Verification: Also known as SQA (Software Quality Assurance). QA checks that the software is being developed as per the guidelines and specifications. This is also called 'Static Testing'.

Validation: Also known as SQC (Software Quality Control). Test engineers check that the software has been developed as per the requirements. This is also called 'Dynamic Testing', meaning validating the software through actual test execution.

The test team participates from the Requirements phase onwards, reviewing the documents and creating the test documentation.

Business Requirements: Also called the BRS, CRS (Customer Requirement Specification) or URS (User Requirement Specification). It contains the business requirements, i.e. those coming from the customer. These are not well understood by the technical team (i.e., Devs/Designers/Testers).

System Requirements: Called the SRS (System Requirements Specification) and prepared by the Project/Product owners. These are easy for the technical team to understand.

HLD: High Level Design. HLDs and LLDs are prepared by the Designers. An HLD contains the high level modules.

LLD: Low Level Design. This contains very detailed designs: each module is broken into submodules with more details.

Unit Testing: Developers do this testing. They check the modules individually; each individually testable piece is called a unit.



Integration testing: Developers start integrating the modules (i.e., linking one module to another). This is called integration testing. It tests whether the modules work properly when all of them are integrated.

System Testing: This is conducted by the Testers, to ensure the whole system works well after integration testing.

UAT: User Acceptance Testing. Client-side testers do this testing before going to Production.

Static testing techniques: Reviews, Walkthroughs, Inspections

Reviews - Checking whether the documents contain correct and complete details. These are not conducted by the owner (i.e. the author) of the document; usually a third person (colleagues/Leads/customer) does the review. Eg: Requirements review, Code review, Test plan review, Test cases review.

Walkthrough - The owner (i.e. the author) of the document explains or discusses it with team members/peers/lead.

Inspection - This is similar to a Review. Here all the documents are cross-checked to ensure everyone in the team is following the same requirements and has a common understanding.

Dynamic Testing: Unit testing, Integration testing, System testing, UAT.


Advantages of the V Model
● The test team is involved in each phase
● The cost of fixing bugs is low, because testing starts at an early stage
● Requirement changes are possible at any phase

Disadvantages of the V-model
● If any changes happen midway, not only the requirements documents but also the test documentation needs to be updated
● Investment is higher, because all the teams work from the first phase



Agile Development

It is an iterative and incremental development process. Agile means 'moving fast'.

Advantages of the Agile model:
● Customer satisfaction through rapid delivery of working software
● Changing requirements are welcomed
● Working s/w is delivered frequently (weeks rather than months)
● Daily conversations among Clients, BAs, POs, Devs and Testers
● Working s/w is more useful than just presenting docs to clients in meetings
● Customer collaboration - requirements can't be fully collected at the beginning, therefore continuous customer/stakeholder involvement is very important

Disadvantages of the Agile model:
● Less documentation, because the requirements can change just in time
● Difficult for new starters because of the limited documentation

Agile methodologies:
Agile Unified Process (AUP), Scrum, Dynamic Systems Development Method (DSDM), Extreme Programming (XP), Feature Driven Development (FDD), Lean software development


Scrum

Scrum is one of the Agile methodologies. Scrum is focused on delivering business value all the time.

Scrum Roles:
There are 3 roles: Product Owner, Scrum Master & the Development Team

Product Owner: The Product Owner represents the stakeholders and is the voice of the customer. Accountable for ensuring that the team delivers value to the business. The Product Owner maintains the product backlog, i.e. adding or removing the user stories (requirements).



Scrum Master: Helps the team to follow the Scrum process. Responsible for all the meetings and keeps the team on track. The Scrum Master helps everyone, makes the team more productive, and is accountable for removing impediments/roadblocks.

Team: The team is responsible for delivering working s/w at the end of each Sprint. The team means the Design, Dev & QA people.

Sprint:
A Sprint is a fixed time period, also known as an Iteration. It can be 1 to 4 weeks.

User Story:
A feature/requirement is called a user story in the Scrum model.
The structure of a story is: "As a <user type> I want to <do some action> so that <desired result>"
Eg: As a user I want the statement enquiry so that I can view the monthly statement

Meetings/Ceremonies/Scrum Events:

Sprint Planning Meeting - Each sprint begins with a "Sprint planning meeting" (SPM), held at the beginning of the sprint cycle to discuss the items from the product backlog.

The team selects what work is to be done in the sprint. The Product Owner describes the user stories to the development team, and the team estimates the stories based on complexity. Teams use the Fibonacci series: 1, 1, 2, 3, 5, 8. If any story takes more than 8, the team breaks it into smaller stories.
Daily Scrum - The status meeting that happens each day during the sprint is called the Daily Scrum, also called the daily standup.

● All members of the Development Team come prepared with updates for the meeting.
● The meeting starts precisely on time, even if some Development Team members are missing.
● The meeting should happen at the same location and same time every day.
● The meeting length is set (time-boxed) to 15 minutes.
● Other interested parties can attend, but normally only the core roles speak.



During the meeting, each team member answers three questions:
● What have you done since yesterday?
● What are you planning to do today?
● Any impediments/ blocks which stop your work?

Any impediment/stumbling block identified in this meeting is documented by the Scrum Master and resolved outside of the meeting. No detailed discussions should happen in this meeting.

Sprint Review Meeting - At the end of the sprint, the team reviews the output of the sprint: the work that was completed and not completed. The completed work is presented (demoed) to the stakeholders. Incomplete work cannot be demonstrated.

Sprint Retrospective - Held to make continuous process improvements. Two main questions are asked in the sprint retrospective: What (i.e., process, relationships among people/other teams, and the tools) went well during the sprint? What could be improved in the next sprint? The SM records action points for further sprints.
Artifacts:

Product backlog: The product backlog is the single source of all requirements. It is a prioritized list of high-level requirements.

Sprint backlog: A prioritized list of requirements to be completed during the sprint. This list is chosen from the Product backlog.

Increment: The functionality that is completed at the end of the sprint.

Sprint Board:

Sprint 1 (Start Date:          End Date:          )

To Do list:        User story6, 7, 8; Bug 3
Dev In-Progress:   User story5; Bug 2
Dev Completed:     User story4
QA In-Progress:    User story3
QA Completed:      User story2; Bug 1
Done (Prod Owner): User story1



Meetings Duration:
Sprint Planning: 4 hours (2-week sprint), 8 hours (4-week sprint)
Sprint Review: 2 hours (2-week sprint), 4 hours (1-month sprint)
Sprint Retrospective: 1.5 hours (2-week sprint), 3 hours (1-month sprint)
Product Backlog Refinement: about 1 hour; shouldn't consume more than 10% of engineer capacity
Daily Scrum: 15 minutes per day, i.e. about 2.5 hours over a 2-week sprint



Testing Life Cycle

It explains the various phases of software testing.

Requirements Analysis: The testing team starts understanding the requirements from the requirements and design documents. (Testing team: QA Lead, Sr Test Engineer, Test Engineer)

Test Planning: After reviewing the requirements, a test plan is created. The test plan describes the process for conducting the software testing. It is prepared by the Test Lead or Sr Test Engineers.
Test Design: Writing the test cases and test scenarios is called test design. Test Engineers or Sr Test Engineers are responsible for writing the test cases, creating the test data and maintaining the Traceability Matrix.

Test Execution: Test execution starts when the TCs are ready and the code is ready. Test engineers compare the actual result with the expected result.

Bug Reporting: The bugs are reported to the developers after analyzing the test results.

Bug Verification: Bugs are retested when they are fixed by the development team, and a quick regression test is done.

Test Closure or Testing Sign-Off: Testing is closed by creating the Test Summary report. This report is prepared by the Team Lead and contains an overall summary of the testing: the build number, total no. of TCs executed, passed and failed, and the no. of bugs reported and fixed.



Test Case Design Techniques

These techniques help us avoid exhaustive testing (testing with all possible combinations/input values) and design more effective test cases. They also help us achieve maximum test coverage with minimal combinations. Main objectives: better test coverage and reduced duplicate test execution. They are:

a. Boundary Value Analysis (BVA)
b. Equivalence Class Partitioning (ECP)
c. Decision Table
d. State Transition
e. Error Guessing
Boundary Value Analysis:
BVA is useful for designing test cases for a range of input. Developers often make mistakes while implementing conditions such as <, <=, >, >=, and BVA is useful for finding defects in these kinds of conditions.

Eg: Suppose the field 'Age' accepts values from 1 to 50 years. We could write 50 TCs to test all the values, but that is not an effective test.

Instead, the values can be chosen as:
(Min - 1), Min, (Min + 1), (Max - 1), Max, (Max + 1)
Eg: 0, 1, 2, 49, 50, 51

Finally, we can test using 6 combinations instead of 50.
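The six boundary values can be generated mechanically. A minimal Python sketch, where the `boundaries()` helper and the sample `accepts_age()` check are illustrative stand-ins, not part of any real application:

```python
def boundaries(min_val, max_val):
    """Return the six BVA inputs for an inclusive [min_val, max_val] range."""
    return [min_val - 1, min_val, min_val + 1, max_val - 1, max_val, max_val + 1]

def accepts_age(age):
    """Stand-in for the system under test: the 'Age' field accepts 1 to 50."""
    return 1 <= age <= 50

values = boundaries(1, 50)
print(values)  # [0, 1, 2, 49, 50, 51]
# The two values just outside the range must be rejected, the rest accepted.
print([accepts_age(v) for v in values])  # [False, True, True, True, True, False]
```

If a developer had written `1 < age` instead of `1 <= age`, the check at the boundary value 1 would fail, which is exactly the kind of off-by-one defect BVA is designed to catch.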

Equivalence Class Partitioning:

This can be used when the input is a combination of different types of data. In this technique, the input data is divided into equivalence classes/groups. Only one value is taken from each group, because the rest of the values in that group produce the same output.

Eg: Suppose the field 'City' accepts only alphabets.

Here the data can be divided into these classes. Valid: (A-Z), (a-z). Invalid: (0-9), special characters (@!* etc.)
Valid set: A-Z, a-z. Eg: HYDERABAD, bangalore, ChennAi
Invalid set: special chars, numbers, spaces. Eg: 1234, 12hyderabad, chennai+, bang alore

If a number field accepts only 1 to 200, we can choose data with a single digit, two digits and three digits. Eg: 2, 10, 200
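The City-field classes above can be sketched in a few lines of Python; the `classify_city()` helper is an illustrative stand-in for the field's validation:

```python
import re

def classify_city(value):
    """ECP stand-in for the 'City' field: valid only if the input is alphabets (A-Z, a-z)."""
    return "valid" if re.fullmatch(r"[A-Za-z]+", value) else "invalid"

# One representative per equivalence class is enough; every other value
# in the same class produces the same outcome.
for sample in ["HYDERABAD", "bangalore", "ChennAi",               # valid classes
               "1234", "12hyderabad", "chennai+", "bang alore"]:  # invalid classes
    print(sample, "->", classify_city(sample))
```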

Error Guessing:

This technique depends on the experience of the testers, who can think of the problematic areas.

Eg:
Submit a form without filling the mandatory fields
Type only white spaces and submit in the Facebook comments
Decision Table Testing:

This technique is useful when there are multiple fields. If there are two fields then the combinations are 4, i.e. 2^2. If there are ten fields then the combinations are 1024, i.e. 2^10. So we can choose a rich subset to test the minimum combinations and save time.
Eg:

Input       Combination 1   Combination 2   Combination 3   Combination 4
User Name   Valid           Valid           Invalid         Invalid
Password    Valid           Invalid         Valid           Invalid
Result      Home page       Error           Error           Error
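The four login combinations can be driven from a small table in code. A Python sketch, where `login()` is a hypothetical stand-in for the real application's behavior:

```python
def login(username_valid, password_valid):
    """Stand-in for the application: only valid username + valid password succeeds."""
    return "Home page" if username_valid and password_valid else "Error"

decision_table = [
    # (User Name valid?, Password valid?) -> Expected Result
    ((True,  True),  "Home page"),  # Combination 1
    ((True,  False), "Error"),      # Combination 2
    ((False, True),  "Error"),      # Combination 3
    ((False, False), "Error"),      # Combination 4
]

# Execute every combination and compare actual vs expected.
for (user_ok, pwd_ok), expected in decision_table:
    actual = login(user_ok, pwd_ok)
    print(user_ok, pwd_ok, "->", actual, "PASS" if actual == expected else "FAIL")
```

Keeping the combinations in a data table like this makes it easy to see at a glance that every row of the decision table has a test.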

State Transition:

This technique is used to test the different states/statuses in the application. We need to check all the states of the application by executing the different conditions.

Eg:
1. We can design test cases to test the grades of a student: Distinction, First, Second and Fail.

2. E-commerce purchase or online food delivery: Ordered -> Shipped -> Out For Delivery -> Delivered

3. Refund initiated -> Refund in progress -> Settled
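The order-tracking flow in example 2 can be expressed as a transition map and tested against it. A Python sketch; the allowed transitions are assumed from the example, and a real system may allow more (e.g. cancellations):

```python
# Allowed next states for the order-tracking flow (assumed from the example).
allowed = {
    "Ordered": {"Shipped"},
    "Shipped": {"Out For Delivery"},
    "Out For Delivery": {"Delivered"},
    "Delivered": set(),  # terminal state
}

def is_valid_transition(current, nxt):
    """True if the application may legally move from `current` to `nxt`."""
    return nxt in allowed.get(current, set())

print(is_valid_transition("Ordered", "Shipped"))    # True: a valid step
print(is_valid_transition("Ordered", "Delivered"))  # False: skips states, must be rejected
```

State-transition test cases then cover both every valid path and the invalid jumps that the application must refuse.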



Testing Levels

There are four testing levels:

1. Unit Testing 2. Integration Testing 3. System Testing 4. Acceptance Testing

1. Unit testing

This testing is carried out by the Developers to check whether the code/program is working properly. The smallest program or function in the code is known as a Unit. Since this is done by the developers, it comes under White box testing. Unit testing is also referred to as Component/Module level testing.
2. Integration testing

The process of combining one module with another is called Integration testing. It checks the data communication/data flow (request data and response data) among the modules. This is carried out by the Developers at the code level; Testers do integration testing at the application level. (Amazon: Registration, login, delivery address, products list, add to cart, payment, order tracking)

Integration testing is divided into two parts:

a. Incremental integration: Each unit is integrated one at a time. There are two types:

1. Top Down Integration: The process of incrementally adding the modules from top to bottom. Stubs are needed to simulate the bottom level modules (i.e., child modules). Each module added is a child of the previous module.

Stub: In top down integration, if a bottom level module is not ready then it is replaced by a Stub.

2. Bottom Up Integration: The process of incrementally adding the modules from bottom to top. Drivers are needed to simulate the top level units. Each module added is a parent of the previous module.

Driver: In bottom up integration, if a top level unit is not ready then it is replaced by a Driver.

b. Non-incremental integration testing (or Big Bang): Integrating all the modules at once and then testing is called Big Bang. This method can be used to save time in the integration process.



3. System testing

Performed after integration testing, to ensure that the entire system (the collection of all the modules which make up the system) is working as per the FRS/SRS. It tests whether all the features/modules are working properly.

System testing includes: Functional testing, User Interface (GUI) testing, Usability testing and Non-functional testing.
Functional Testing: Functional testing refers to testing the features of the software as mentioned in the FRS document: providing inputs/data, data saving, calculations (total amount, tax, discounts, total marks), error validations, receipts.

GUI testing: Checking the front-end, i.e. the elements on the screens or forms. Eg: text boxes, buttons, hyperlinks, images, animations, radio buttons, check boxes, calendars, video or audio buttons etc.

Usability testing: Testing as an end user to check the content and help texts on the application: whether they are meaningful or difficult to understand. The application should be self-explanatory, i.e. simple to understand.

Non-functional Testing: Validating non-functional requirements such as Performance and Security is called non-functional system testing. Eg: checking the speed (response) of the application.

Performance Testing: Load testing (gradually increasing the load, i.e. adding users accessing the application), Stress testing (a sudden increase/decrease of the load) and Volume testing (how much data is handled by the application).

Security Testing: Authentication (testing whether the user exists in the system or not) and Authorization (the user exists in the system, but does the user have the permissions/privileges to access certain modules in the application?).



4. User Acceptance testing
This is conducted after the testers complete the testing and provide the sign-off. UAT is conducted to determine whether the application is ready for use. UAT is performed by the client-side people. The application is deployed into Production when the UAT is successfully passed.

UAT is of 2 types:

Alpha testing: This is conducted before the beta testing, in the staging environment, which is similar to the Production environment. The people involved are Testers, Developers, and a limited number of people inside the organization.

Beta testing: This is conducted after the alpha testing, also on the staging environment. The people involved are end users (real users), i.e. people outside the organization; they can test from their homes or anywhere. This is done before releasing the product into the market.

Test Environments

Dev Environment: This env is used by the developers for developing and testing the code written by them. Eg: www.gmaildev.com

Test Environment: This is used by the test engineers for test execution; testers use it to execute their test cases. Eg: www.gmailtest.com

Staging Environment: This is also called the Release environment. It is similar to the Production environment, so it is also called a Production-like environment. Client-side people use this environment to test the application. The software is tested in the Staging env before being deployed into Production. Eg: www.gmailrelease.com

Production Environment: When the testing is completed on the Staging env, the software is deployed into the Production env. This is also called the LIVE environment. The public can use the software when it is in the Production environment.



Test Cases

Test case
A test case is a sequence of steps/actions to be performed on the application, to compare the Actual result with the Expected result.

We can divide the TCs into these types:

1. GUI test cases 2. Functional test cases 3. Non-functional test cases

1. GUI Test cases:
a. Check for the availability of the GUI elements, eg: buttons, text boxes, check boxes, radio buttons, hyperlinks etc.
b. Check the alignment and look & feel of the UI elements/objects (i.e., whether they are placed properly on the web pages)
c. Check the spellings and grammar
d. Check the consistency of the elements (eg: one page says Sign In and another page says Log In)

2. Functional Test cases: Check the functionality of the application: testing the forms by entering data, clicking the hyperlinks, buttons etc. These can be divided into two types:
a. Positive test cases: Check the positive flow of the application by giving valid inputs.
b. Negative test cases: Check the negative behavior by giving invalid inputs and checking the error messages.

3. Non-Functional TCs: These test cases are designed to test the performance of the application. This is done by performance testers. Eg: Load testing and Stress testing

Test case Template

Columns: TestCase ID | Requirement ID | Priority | Test Case Title | Test Data & Pre-Condition | Steps (Test Description) | Expected Result | Actual Result (i.e., Test Result: Pass/Fail) | Bug ID | Comments

Note: See the example test cases in the separate xls file.

Test case ID: The serial number of the test case.



Requirement ID: The requirement number for which the test case is created.

TC Priority: Describes how important it is to execute the test case, eg: P1, P2, P3 (High, Medium & Low). P1: covers the main/critical functionality. P2: major functionality (field level validations). P3: minor.

Test Data: The input data needed to execute the test cases.
Pre-condition: The setup (or preparation) that should already exist in order to execute the test cases.

Test steps: The steps/actions to be performed on the application.

Expected Result: While creating the TCs, testers write the expected result/behavior as mentioned in the requirements.

Actual Result (Test Result): While executing the TCs, testers note down the actual result/behavior observed in the application. Testers compare the ER & AR and note down the result: either Pass or Fail.
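The ER/AR comparison described above can be sketched as a tiny record check; the field names loosely follow the template, and the `execute()` helper is illustrative, not a real tool:

```python
def execute(test_case, actual_result):
    """Record the actual result and mark the TC Pass/Fail by comparing ER and AR."""
    test_case["Actual Result"] = actual_result
    test_case["Result"] = "Pass" if actual_result == test_case["Expected Result"] else "Fail"
    return test_case

tc = {
    "TestCase ID": "TC_001",
    "Title": "Login with valid credentials",
    "Expected Result": "Home page is displayed",
}

# dict(tc) copies the record so each run starts from a clean test case.
print(execute(dict(tc), "Home page is displayed")["Result"])      # Pass
print(execute(dict(tc), "Error message is displayed")["Result"])  # Fail
```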
Bug Life cycle

When a bug is found, it is assigned different statuses. The bug moves through various statuses until it is closed.

Bug/Defect/Issue: If the Expected result and the Actual result are not the same, we call it a Bug. (Or) If the application deviates from the requirement, it is a Bug. (Or) If the application doesn't work as expected, it's a bug.

Note: A human being (a developer) can make an error (mistake) in the code. These mistakes produce defects (faults, bugs) in the application. Due to these bugs, the system fails, which is a Failure/Incident.

Error & Mistake are the same.
Bug/Defect/Issue/Fault are the same.
Failure & Incident are the same.



Bug statuses:

▪ New: This is the initial state for any newly reported issue. In this state a ‘Triage' can be
conducted (depends on projects). Product owner/TL sets the Priority.

▪ In progress: The bug will be moved to In Progress when a developer is accepted and
assigned the bug to him/her.

▪ Fixed(or Verify): When the bug is resolved then it will be changed to Fixed (or Verify).
Developer will assign back to the Reporter or the testing team.

om
▪ Closed: When the bug is working fine after the re-testing then it will be moved to Closed
status.

l.c
ai
▪ Deferred(Unsupported): The bug will be moved to ‘Deferred(Or Unsupported)’ if the

gm
resolution is not needed at present. These can be resolved in the future builds and then
closed.

@
▪ Rejected: Developers can reject a bug for these reasons: Duplicate (another
tester already created a similar bug), Not Reproducible (the developer is unable to
reproduce the bug), or WAI ('Working As Intended': the developer says the
behavior is correct; in this case the tester needs to modify the test cases).
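The status flow above can be sketched as a small state machine. This is an illustrative sketch only: the status names follow the list above, but the allowed transitions are assumptions, since each project/tool configures its own workflow.

```python
# Illustrative sketch of the bug life cycle described above.
# The transition table is an assumption; real trackers (JIRA etc.)
# let each project configure its own workflow.

ALLOWED_TRANSITIONS = {
    "New":         ["In Progress", "Rejected", "Deferred"],
    "In Progress": ["Fixed"],
    "Fixed":       ["Closed", "In Progress"],  # reopened if re-testing fails
    "Deferred":    ["In Progress"],            # picked up in a future build
    "Rejected":    [],
    "Closed":      [],
}

def move_bug(current_status, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in ALLOWED_TRANSITIONS[current_status]:
        raise ValueError(f"Cannot move bug from {current_status} to {new_status}")
    return new_status

status = "New"
status = move_bug(status, "In Progress")  # developer accepts the bug
status = move_bug(status, "Fixed")        # developer resolves it
status = move_bug(status, "Closed")       # tester re-tests and closes
print(status)  # prints "Closed"
```

Note that a bug cannot jump straight from New to Closed in this sketch; it must pass through the developer and the re-test, which matches the cycle described above.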


Bug Severity: It describes the impact of the bug on the application, i.e., how badly the
bug affects the end users and the application.

Severity can be categorized as: Critical, High, Medium, Low (or) S1, S2, S3, etc.

▪ High: Defects that cause a system crash or loss of critical data. There is no
workaround available. The product can't be released live. It's a showstopper; testing
can't proceed further.

Ex: Unsuccessful installation, Can't sign in/register, data entered into the forms
not saved.

▪ Medium: Defects that affect major functionality. There is a workaround, but it is not
satisfactory. Significant impact on the users. The product can't be released live.

Ex: a) When the user logs into a website, the home page is not displayed and
another page is displayed instead. Workaround: users can go to the Home page by clicking the
'Home page' link on the other pages.

b) The Log out button is missing on other pages and is only available on the Home page, so users
need to come to the Home page to log out of the website.

▪ Low: Defects with very low impact that can be ignored. They may be cosmetic or
grammatical.

Ex: Spelling mistakes, text alignment, color of the pages or fields, etc.

Bug Priority: It indicates the importance or urgency of resolving the defect. Though the
priority may be initially set by a Tester, it is usually finalized by the Project/Product Manager
in triage meetings.

Priority can be categorized as: High, Medium, Low or P1, P2, P3, etc.



High: High-priority bugs need to be fixed immediately. Developers stop working on other
tasks and work on these bugs first. The fix needs to be ready before the end of the
current sprint or iteration.

Medium: These bugs need to be fixed once there are no High-priority bugs left. The fix
needs to be ready before the end of the current sprint or iteration.

Low: These bugs may or may not be fixed at all. These are just nice to have.



Bugs with different Severity & Priority levels

Low Severity - High Priority


For example, if the name 'Windows' is written as 'Vindows', its severity is low since it does
not affect the functionality of the application. But it needs to be fixed at High priority since
the client does not want to ship the application with an incorrect logo.
High Severity - Low Priority
If you sign in and sign out of Facebook 50 times in a day, the application does not allow you
to sign in again that day. This affects the user a lot but is not urgent, since very few people
do this. Another example: I can add 2000 products to the cart, but on adding the 2001st
product, the cart removes all previously added products.

High Severity - High Priority

Unable to log in to the website. When you try to open a site, it crashes, or when you try to
open a game, the app crashes.

Low Severity - Low Priority
Suppose there is a spelling mistake on the website or some typo errors.
Common fields in a Bug report

Bug ID: Generated automatically or entered manually, depending on the tool.

Bug Summary/Title: A short title for the bug, describing the actual behavior of the system.

Environment: The environment used, in terms of OS, browser, URL, device, etc.

Steps to Reproduce the Bug (or) Description: Detailed steps to reproduce the bug, along with the reproducibility frequency.

Actual Result: The actual behavior seen on the system.

Expected Result: The expected behavior from the system.

Attachments: Screenshots and log files of the bug, to help developers understand the defect.

Severity: The severity of the bug.

Priority: The priority of the bug. Depending on the project type, it is set at the time of bug creation or at the time of triage.

Assignee: Name of the assignee (developer/designer/PM, etc.) from whom you expect the resolution.
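The fields above can be captured as a simple record. This is a minimal sketch; every value in the example is hypothetical, and real bug trackers (JIRA, Bugzilla, etc.) define their own schemas with many more fields.

```python
from dataclasses import dataclass, field

# Minimal sketch of a bug report with the common fields listed above.
# All example values below are hypothetical.

@dataclass
class BugReport:
    bug_id: str
    summary: str
    environment: str
    steps_to_reproduce: list
    actual_result: str
    expected_result: str
    severity: str              # e.g. High / Medium / Low
    priority: str              # e.g. P1 / P2 / P3
    assignee: str
    attachments: list = field(default_factory=list)  # screenshots, log files

bug = BugReport(
    bug_id="BUG-101",
    summary="Login button unresponsive on second click",
    environment="Windows 10, Chrome 120, https://test.example.com",
    steps_to_reproduce=["Open login page", "Enter valid credentials", "Click Login twice"],
    actual_result="Nothing happens on the second click",
    expected_result="User is logged in and the home page is shown",
    severity="Medium",
    priority="P2",
    assignee="developer.name",
)
print(bug.bug_id, bug.severity)
```

Writing the summary as the actual (wrong) behavior, as the record does above, matches the Bug Summary/Title guidance in the field list.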



Test Plan

A Test Plan provides a detailed approach to manage and execute the testing of a software
project effectively.

Note: A test plan is prepared by the Team Lead or Project Manager

TEST PLAN Contents:

Objective
Revision History

Scope

1. Features to be Tested

2. Features not to be Tested
Environment

1. Hardware
2. Software
Entry & Exit Criteria
Assumptions

Test Strategy

1. Testing Types

2. Risk management
3. Configuration Management

4. Defect Management

Staffing

Communication Plan

1. Meetings
2. Reporting

Schedule

Test Design

Test Deliverables
References
Glossary



Objective: It describes the main purpose of writing this test plan.

Eg: The main objective is to test the funds transfer between the savings accounts.

Revision History: It gives the history of the test plan, i.e., who created and modified which
versions, and on what dates.

Author   Date         Version     Reviewed by                  Notes

Mann     14-Jun-2011  Draft 1.0                                Initial version

Christ   18-Jun-2011  Draft 1.1   Team Lead/QA Manager name    Updated as per comments received

Mann     20-Jul-2011  Issue 1.0                                Issue version

Scope: It talks about features to test and not to test
1. Features to be Tested: The list of features/functionalities that need to be tested by the
testing team. Eg: Funds transfer from account A to B, account balance of A & B.
2. Features not to be Tested: The list of features which are not tested by the testing team.
Eg: A/c balance messages on mobile phones, emails about transactions, etc.

Environment: This describes the list of Hardware and Software needed to test the application.
Eg:

Machine - Windows Server Enterprise; OS: Windows 2007, Processor: Intel xeon, Memory: 4 GB,

Hard Disk: 150 GB, Database: SQL server 2008, Browser: IE Version 10, FF 32.0

Entry & Exit Criteria: This specifies when to Start the Test execution and when to Stop the test
execution.

1. Entry criteria:

Eg: Test Environment should be ready.


White box testing should be completed
Test cases and Test data should be ready
2. Exit criteria:
Eg: All the test cases are executed
There are no High/Medium bugs in the Open status
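The exit criteria above can be checked mechanically at the end of a cycle. The sketch below is an assumption-laden illustration: the test-case and bug records are hypothetical, and it encodes exactly the two example criteria given (all test cases executed, no High/Medium bugs left open).

```python
# Sketch of the exit-criteria check from the examples above:
# all test cases executed, and no High/Medium bugs in Open status.
# The sample data is hypothetical.

def exit_criteria_met(test_cases, bugs):
    # A test case counts as executed once it has a Pass or Fail result.
    all_executed = all(tc["status"] in ("Pass", "Fail") for tc in test_cases)
    open_high_medium = [
        b for b in bugs
        if b["status"] == "Open" and b["severity"] in ("High", "Medium")
    ]
    return all_executed and not open_high_medium

test_cases = [{"id": "TC1", "status": "Pass"}, {"id": "TC2", "status": "Fail"}]
bugs = [
    {"id": "BUG-1", "severity": "High", "status": "Closed"},
    {"id": "BUG-2", "severity": "Low",  "status": "Open"},  # Low bugs don't block exit
]
print(exit_criteria_met(test_cases, bugs))  # prints "True"
```

Note that an open Low-severity bug does not block the exit, which matches the example criterion "no High/Medium bugs in the Open status".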



Assumptions: List the assumptions that have been made during the preparation of this plan.
Discuss them with the Lead, PM, or Clients for confirmation. Based on the confirmation, these
can be added to the Scope section.
Eg: The test accounts/Test data will be provided by the Client.
The testing is only on IE and FF browsers and not on Chrome & Safari browsers
(Note: Some assumptions are safe, but sometimes assumptions cost a lot of money to the
company.)

Test Strategy: Test strategy deals about the below:

1. Testing Types
2. Risk Management

3. Configuration Management
4. Defect Management

1. Testing Types: This section describes the types of testing to be performed by the Testing
team.
Eg:

Unit testing and Integration testing will be conducted by the Developers
Smoke testing, System testing, Regression testing will be conducted by the Testing team
UAT(Alpha & Beta) will be conducted by the Client.

2. Risk Management: (Prevention is better than cure.) It deals with the risks that may occur
during testing. Specify the mitigation plan and contingency plan for each risk.

Risk: An unexpected condition/situation that may or may not occur.
Mitigation: How to avoid/stop the risk, i.e., the preventive measures to be taken before the
risk occurs.
Contingency: How to minimize the impact when the risk occurs, i.e., the plan to implement
when the risk occurs.

Eg: 1. Risk: Delay of the builds from the dev team.
Mitigation: Review the dev progress daily and chase the devs to deliver the build on the
specified date.
Contingency: Take additional resources, or work more hours/night shifts, to meet the
delivery date.



2. Risk: Changes to the original Requirements or Designs, or unplanned/urgent leaves.
Mitigation: None, because it is unavoidable.
Contingency: The Test & Dev schedule will move further by some days, or additional
resources will be added, or the team will work overtime.

3. Configuration Management: It talks about managing the test deliverables carefully
throughout the project. It also specifies the tools (VSS, Perforce, Confluence, Google Drive)
which will be used for keeping the deliverables securely.
Eg: Visual SourceSafe will be used to keep the test deliverables.

4. Defect Management: It talks about managing the defects that occur during test
execution. It defines the bug statuses and the defect Severity and Priority levels. It also
mentions the tools (JIRA, Bugzilla, Quality Center) which will be used for managing the bugs.

Eg: Bug statuses will be New, In Progress, Verify, etc.
Severity & Priority levels like High, Medium & Low.

Staffing details: It contains the list of people involved in the project.



Name     Role              Email ID

John     Project Manager   john@

Suresh   Test Lead

Trisha   Developer

Padma    Test Engineer

Communication Plan: This describes the frequency of the communication and the tools (Skype,
Google Hangouts, Gmail, TeamViewer, etc.) used for communication.
1. Meetings:
Eg: Dev and Test teams daily meeting, Weekly meeting with Client via Skype to discuss the
Testing status.
2. Reports:
Eg: Daily status reports to the client, Weekly status report to the Client

Schedule: It talks about the timelines for each team.



Eg: Dev Start Date: mm-dd-yy & Ends on mm-dd-yy
Testing Start Date: mm-dd-yy & Ends on mm-dd-yy

Test Design (Test Cases): The test case template and the test cases will be created in this section.

Test Deliverables: It tells about the items which will be delivered to the client from the testing
team.

Eg: Test Plan, Test cases, Test Results, Traceability Matrix, Bug report and Test Summary Report.

References: This contains the list of documents (eg: FRS, design docs) which are referred to
while creating the test plan and the test cases.

Document Name                   Author   Version

FundTransfer-Requirements.doc   John     1.0

Designs-FundTransfer.pdf        Sunil    1.1

Glossary: Defines the terms and acronyms used in this document, to eliminate confusion in
testing and communications.

Eg:

www - World Wide Web


TC - Test case

ATM - Automated Teller Machine



UAT - User Acceptance Testing


FB - Facebook



Requirement Traceability Matrix (RTM): This gives the test coverage, i.e., whether test cases
are written for all the requirements or not. The test engineer maintains this matrix while
creating the test cases. It is a mapping between TCs and Requirements.

Req ID   TC ID

R1       TC1

R2       TC2, TC3

Rn       TCn
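An RTM like the one above lends itself to a quick coverage check: any requirement with no mapped test case is a coverage gap. A small sketch, with hypothetical requirement and TC IDs:

```python
# Sketch: find requirements with no test cases mapped in the RTM,
# and compute a simple coverage percentage. IDs are hypothetical.

rtm = {
    "R1": ["TC1"],
    "R2": ["TC2", "TC3"],
    "R3": [],          # no test case yet -> coverage gap
}

uncovered = [req for req, tcs in rtm.items() if not tcs]
coverage = (len(rtm) - len(uncovered)) / len(rtm) * 100

print(uncovered)           # prints "['R3']"
print(f"{coverage:.0f}%")  # prints "67%"
```

In practice this check is what the RTM is for: before test execution starts, every requirement row should have at least one TC ID next to it.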

Daily Status Report: This includes the tasks that are completed on that particular day. Each
test engineer sends his/her report to the Team Lead. The Team Lead collects the reports from
the team members and sends them to the PM or Client daily or weekly.

Sample report:

Tasks for Today:
1. Test case creation is completed for the Funds transfer functionality
2. Test execution completed for the Registration and Login pages (Total: 20, Executed: 20,
Pass: 15, Fail: 5)
3. Number of bugs posted: 08



Tasks for Tomorrow:



1. We will start execution for Funds transfer functionality



Test Summary Report: It is an overall testing status report. It is prepared by the Team
Lead when the testing is completed.



Sample:

We completed the test execution of the Funds transfer application.


Build No: Build 15 is tested, Test url: testaxisbank.com
Total Test Engineers: 5
No of Days: 4 (Start Date & End Date)
Total TCs - 100
Total Executed - 100
Total Passed - 90
Total Failed - 10 (P1 = 0, P2 = 1, P3= 9)



Total Bugs Created - 50 (High severity=10, Medium=30, Low =10)
Total Bugs Closed - 45
Bugs Deferred - 5
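The counts in a summary report like the one above can be derived rather than hand-tallied, which avoids arithmetic slips. A minimal sketch using the sample numbers from this report:

```python
# Sketch: derive the headline numbers of a Test Summary Report
# from the sample figures in the report above.

total_tcs = 100
passed = 90
failed = total_tcs - passed

bugs_created = {"High": 10, "Medium": 30, "Low": 10}
bugs_closed = 45
bugs_deferred = 5

total_bugs = sum(bugs_created.values())
pass_rate = passed / total_tcs * 100

print(f"Executed: {total_tcs}, Passed: {passed}, Failed: {failed}")
print(f"Pass rate: {pass_rate:.0f}%")  # prints "Pass rate: 90%"
print(f"Bugs: {total_bugs} created, {bugs_closed} closed, {bugs_deferred} deferred")
```

Cross-checks like `total_bugs == bugs_closed + bugs_deferred` are also worth running before the report goes to the client: here 45 closed + 5 deferred accounts for all 50 bugs.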

Different types of Terminology and Testing definitions

Build: An executable application. The build will be created by the Developers and given to the
Test team.

Smoke Testing(Build Verification Testing):

It's conducted on every new build to determine whether the build is stable or unstable. If the
basic test cases pass, the test team continues further testing; otherwise, it rejects the build.
The tester touches all the main areas of the application without going too deep.

Regression Testing:
Regression testing is performed when the code is changed. It is performed to make sure that
the existing functionality still works fine and is not broken due to the changes in the code. The
code can change when new functionality is added or when bugs are fixed.

Sanity Testing: It's conducted on a stable build. If we don't have time to do a full regression,
we identify a few important functionalities and test them; this is called Sanity testing. It's a
part of Regression testing.

Retesting:

Retesting is performed when a bug is fixed. We execute the steps which were mentioned in
‘Steps to reproduce’. This is called Re-testing.

Usability Testing:

Usability means checking whether the application is user-friendly or not. Make sure that the
button names, label names, help messages, etc. are meaningful.

Eg: If the drop-down field 'Country Name' has country names jumbled (not in
alphabetical order), it is very difficult to locate a country name, so it's not usable.

It would be good if the Password field showed some help text while creating the password;
some applications show the help text only after the password is submitted.



Accessibility Testing:
This test is to verify that the application can be used by physically challenged users (eg:
blind, deaf, etc.). Tools: VoiceOver, TalkBack; they speak the names/content on the screen.

Compatibility testing:
Testing the application on different environments. There are two types:
a> Browser compatibility (Cross-Browser testing): Testing the application on different
browsers like FF, IE, Chrome, Safari, Opera, etc. Also, testing on different versions
of a browser, eg: IE8, IE9, IE10, etc.
b> OS compatibility: Testing on different OSs like Windows 7, 98, XP, Vista, Unix, Linux,
Mac, etc.

Again, the combinations can be like:
Windows machine with IE, FF & Chrome
Mac machine with Safari, Chrome
Linux machine with IE, Opera, etc.
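A full OS-by-browser matrix can be generated mechanically and then trimmed to the supported combinations. A sketch: the OS and browser names echo the examples above, and the `unsupported` set is an assumption for illustration (e.g. Safari is not available on Linux).

```python
from itertools import product

# Sketch: generate an OS x browser compatibility matrix, then drop
# combinations that are not supported. The unsupported set is an
# assumption for illustration.

oses = ["Windows", "Mac", "Linux"]
browsers = ["IE", "FF", "Chrome", "Safari", "Opera"]

unsupported = {("Linux", "Safari"), ("Mac", "IE")}

matrix = [
    (os_name, browser)
    for os_name, browser in product(oses, browsers)
    if (os_name, browser) not in unsupported
]

print(len(matrix))  # prints "13" (15 raw combinations minus 2 unsupported)
```

Generating the matrix this way makes it easy to see how quickly the combinations grow, which is why real projects usually test only a prioritized subset like the one listed above.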

Exploratory testing:
a.

Domain experts perform this testing, without knowing the functionality/requirements
beforehand. They explore and understand the application and then do the testing. It is as if
the domain experts review the entire application.

Security testing:

This is to check whether the application is secured or not.



Eg: Check whether users can log in only with valid details.

Submit some code in the fields on the forms (code will be given by the developers).

Check for session/cookie expiry.

Check whether critical info (passwords, etc.) is encrypted or not; also test browser
back/forward and accessing the direct URLs.

Localization Testing:
Testing the application in different languages (Chinese, Urdu, Hindi, UK English, etc.) is known
as Localization testing. In this, the application is tested for currency symbols, date & time
formats, and text alignment.
Eg: Google website can be tested in multiple languages



End-to-End testing:
Testing the entire system from one end to the other end is called E-to-E testing.

Eg: Test the Registration into Bookmyshow.com -> Test the Login into the site -> Searching the
movies -> Test the seat booking -> Making payment -> Test the accounts for the balance -> Test
the ticket details on Phone / Email -> Check the printed details on the Ticket -> Logout -> Go to
the Theater and show the ticket

Ad-hoc testing:
Test engineers can perform the testing in their own way. Testers know the functionality of the

l.c
application. Test cases will not be designed for this testing. This can be performed when the test
engineers have some spare time after completion of testing.

Monkey testing: Testing an application by doing abnormal actions, or in a zigzag way, to
find defects is called Monkey/Zigzag testing. We can find defects like crashing or hanging.
Eg: Clicking the 'Submit' button quickly 2 to 3 times
Scroll the page quickly from top to bottom and vice versa

Installation/uninstallation testing: To verify that the builds install and uninstall successfully.
It also covers upgrading the build from a lower to a higher version, or from a higher to a lower version.

Frequently Asked Questions in Interviews



How do you test a condition (transaction) which occurs on a future date?



https://artoftesting.com/test-cases-water-bottle

Improving from the Interviews.



Naukri and Resume preparation.



What do you cover/consider when writing the test cases: 1) GUI test cases (i.e., cross-check
that the application looks as per the designs: all links, buttons, radio buttons, check boxes,
dropdowns, colors, images, animations). 2) Positive-flow test cases. 3) Negative test cases
(i.e., using test case design techniques).



Sample Interview Questions

1) Tell me about yourself?


2) Can you give a Brief overview on your current Project?
3) In your current project, what are your daily activities? Or what are your roles and
responsibilities?
4) What is STLC(Software Testing Life Cycle)?
5) What is the Bug Life cycle? Or What are the different Statuses of a Bug?
6) What is the difference between Priority and Severity?
7) Did you find any High priority bugs in your project? Two example bugs?

8) What is a Deferred bug?
9) What are the fields Or details you give when writing a Bug?

10) What is Regression Testing? Smoke testing? Sanity testing?
11) What are the fields in the Test case Template?

12) Do you use any techniques to write Test cases? Or What are the test case design

techniques?
13) What is BVA?

14) What is RTM (Requirement Traceability Matrix)? Or how do you make sure of the test
coverage?
15) Scenario-based cases. Example: How do you test a Login page? How do you test a Shopping
cart?

16) How do you test a water bottle? How do you test a pen?
17) What is your response if the Developer says your bug is invalid(or Rejected) ?

18) If the Customer says a bug is in the Production, then what is your response?

19) Difference between Verification and Validation?



20) How many test cases do you write every day? How many test cases do you execute every
day?

21) When do you stop Testing?


22) Do you know Test plan? What are the contents in the Test plan?

23) What is Retesting?



24) What Agile Scrum process do you use in your project?



25) Where do you write your test cases? Or What do you use to write the Test cases?
26) Where do you post the bugs? Or What do you use to create a new bug?

27) Example for High Severity and Low Priority bug?



28) Example of Low Severity and High Priority bug?


29) What is the baseline for writing the TCs (Test Cases)?
30) Difference between Actual Result and Expected Result?
31) What is Cross browser testing? Why to test in multiple browsers?
32) When do you Start testing? Or When do you start Test Execution?
33) What is Negative testing?
34) What is Positive testing?

