Manual Testing
What is software?
• Software is a collection of computer programs that helps us
perform a task
• Types of Software:
1) System Software
Ex: Device drivers, OS, Servers etc.
2) Programming Software
Ex: Compiler, Debugger, Interpreter etc.
3) Application Software
Ex: Industrial Automation, Business software, Games etc.
Software Development Life Cycle (SDLC)
• SDLC (Software Development Life Cycle) is a process used by the software
industry to design, develop and test high-quality software.
• It aims to produce high-quality software that meets customer
expectations.
SDLC Models
• Waterfall Model
• Incremental Model
• Spiral Model
• V-Model
• Agile Model
Waterfall Model
Incremental/Iterative Model
Spiral Model
V Model
Agile Model
What is Software Testing?
• Software Testing is part of the Software Development Process
• Software Testing is an activity to detect and identify defects in the
software
• The objective of testing is to release a quality product to the client
Why is testing needed?
• Ensure that the software is bug-free
• Ensure that the system meets customer requirements and software
specifications
• Ensure that the system meets end-user expectations
• Fixing bugs identified after software release is very expensive
Software Quality
• Quality Software is reasonably:
• Bug Free
• Delivered on time
• Within budget
• Meets requirements/expectations
• Maintainable
• Quality: Quality is the degree to which a product satisfies all of the
customer's requirements.
• Note: Quality is not defined in the product. It is defined in the customer's
mind.
Error, Bug & Failure
• Error – Any incorrect human action that triggers a problem in the
system is called an error.
• Defect/Bug – A deviation of the actual behaviour from the expected
behaviour is called a defect/bug.
• Failure – The deviation identified by the end user while using the system
is called a failure.
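The chain from error to defect to failure can be illustrated with a tiny, hypothetical example (the `add` function below is invented for illustration):

```python
# Error: the developer typed '-' where '+' was intended
# (an incorrect human action that triggers a problem in the system).
def add(a, b):
    return a - b   # Defect/bug: actual behaviour deviates from the expected a + b

# Failure: the deviation an end user observes while using the system.
result = add(2, 3)   # the user expected 5
assert result == -1  # but the buggy code produces -1
```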
Why are there bugs in software?
• Miscommunication
• Software Complexity
• Programming Errors
• Changing Requirements
• Lack of Skilled Testers
• Etc.
Quality Assurance & Quality Control
Verification & Validation
Types of Software Testing
• White Box Testing
• Black Box Testing
• Grey Box Testing
White Box Testing
• White Box Testing is conducted on the internal logic of the program.
• Programming skills are required. Generally, developers perform this
testing.
• Ex: Unit Testing & Integration Testing
Black Box Testing
• Black Box Testing is conducted on the functionality of the application
to check whether it is working according to the requirements.
• Ex: System Testing & UAT (User Acceptance Testing)
Grey Box Testing
• This is a combination of both White Box and Black Box testing.
• Ex: Database Testing
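As a sketch of grey-box/database testing, the example below uses an in-memory SQLite database; `register_user` is a hypothetical stand-in for application code a tester would normally drive through the UI, while the tester verifies the result directly in the database:

```python
import sqlite3

# Set up a throwaway in-memory database for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def register_user(name, email):
    """Hypothetical stand-in for the application's save logic."""
    conn.execute("INSERT INTO users VALUES (?, ?)", (name, email))

# Black-box side: exercise the functionality.
register_user("alice", "alice@example.com")

# White-box side: verify the row landed in the database.
row = conn.execute(
    "SELECT email FROM users WHERE name = ?", ("alice",)
).fetchone()
assert row == ("alice@example.com",)
```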
Black Box Testing Types
• GUI Testing
• Usability Testing
• Functional Testing
• Non-Functional Testing
What is GUI & GUI Testing?
• There are two types of interfaces in a computer application.
• Command Line Interface – where we type text (commands) and the computer
responds to those commands.
• Graphical User Interface – where we interact with the computer using
images rather than text.
• GUI Testing involves checking screens with controls such as menus,
buttons, icons, all types of bars (tool bar, menu bar), dialog boxes,
windows etc.
• During GUI Testing, the test engineer validates the look & feel of the
application, checks that it is easy to use, and verifies that navigation
and shortcut keys work correctly.
Checklist for GUI Testing
• It checks whether all the basic elements are available on the page.
• It checks the spelling of the objects.
• It checks the alignment of the objects on the page.
• It checks the content displayed on the web pages.
• It checks whether the mandatory fields are highlighted.
• It checks consistency of background colour, font type, font size
etc.
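A checklist like this can be made data-driven. In the minimal sketch below the page model is a hard-coded dict (in practice it would come from a GUI automation tool), and every name in it is hypothetical:

```python
# Hypothetical page model; a real one would be read from the running GUI.
page = {
    "title": "Login",
    "elements": ["username", "password", "submit"],
    "mandatory_highlighted": True,
    "font": "Arial",
}

# Each checklist item pairs a description with a check on the page model.
checklist = [
    ("basic elements present",
     lambda p: {"username", "password", "submit"} <= set(p["elements"])),
    ("mandatory fields highlighted",
     lambda p: p["mandatory_highlighted"]),
    ("consistent font",
     lambda p: p["font"] == "Arial"),
]

results = {name: bool(check(page)) for name, check in checklist}
assert all(results.values())   # every checklist item passes
```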
Usability Testing
• Usability Testing, also known as User Experience (UX) Testing, is a
testing method for measuring how easy and user-friendly a software
application is.
• It checks how easily end users are able to understand and operate the
application
Functional Testing
• Functional Testing is a type of software testing that validates the
software system against the functional requirements/specifications.
• The purpose of functional tests is to test each function of the software
application by providing appropriate input and verifying the output against
the functional requirements.
• Following is a step-by-step process for functional testing:
• Understand the Functional Requirements
• Identify test input or test data based on requirements
• Compute the expected outcomes with selected test input values
• Execute test cases
• Compare actual and computed expected results
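The steps above can be sketched in a minimal example; `discount` is a hypothetical function under test, and the expected outcomes are computed from an assumed requirement (10% off for orders of 100 or more):

```python
def discount(order_total):
    """Hypothetical function under test, standing in for real application code."""
    return order_total * 0.9 if order_total >= 100 else order_total

def test_discount_applied():
    # Steps 1-3: requirement understood, test input chosen,
    # expected outcome computed by hand from the requirement.
    actual = discount(200)   # test input
    expected = 180.0         # computed expected outcome
    # Step 5: compare actual and computed expected results.
    assert actual == expected

def test_no_discount_below_threshold():
    # Below the threshold the total must be unchanged.
    assert discount(50) == 50

# Step 4: execute the test cases.
test_discount_applied()
test_no_discount_below_threshold()
```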
Checklist for Functional Testing
• Object Properties Coverage
• Input Domain Coverage
• Database Testing/Backend testing
• Error Handling Coverage
• Link Existence & Link Coverage
Object Properties Testing
• Every object has certain properties.
• Ex: Enabled, Disabled, Focus etc.
• During functional testing, test engineers validate the properties of
objects at run time
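A minimal sketch of validating object properties at run time, assuming the properties are available as a dict (a real project would read them from a UI automation tool, e.g. Selenium's `element.is_enabled()`); all names below are hypothetical:

```python
# Hypothetical snapshot of a button's runtime properties.
submit_button = {"enabled": False, "visible": True, "focused": False}

def validate_properties(obj, expected):
    """Return the properties whose runtime value differs from the expectation."""
    return {k: obj[k] for k, v in expected.items() if obj[k] != v}

# Requirement (assumed): Submit is disabled until the form is filled in.
result = validate_properties(submit_button, {"enabled": False, "visible": True})
assert result == {}   # empty dict: every property is as required
```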
Input Domain Testing
• During Input Domain Testing, the test engineer validates the data
provided to the application with respect to value and length.
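A small sketch, assuming a hypothetical rule that a username must be 4-10 letters; the checks probe the value and length boundaries of the input domain:

```python
def is_valid_username(name):
    """Hypothetical validation rule: 4-10 characters, letters only."""
    return 4 <= len(name) <= 10 and name.isalpha()

# Length checks at the boundaries (just inside and just outside).
assert is_valid_username("abcd") is True          # minimum length (4)
assert is_valid_username("abc") is False          # below minimum
assert is_valid_username("abcdefghij") is True    # maximum length (10)
assert is_valid_username("abcdefghijk") is False  # above maximum

# Value check: digits are outside the valid input domain.
assert is_valid_username("abc1") is False
```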
Software Testing Life Cycle (STLC)
• STLC consists of stages such as Requirement Analysis, Test Planning, Test Case Development,
Environment Setup, Test Execution and Test Cycle Closure.
• Each of these stages has definite Entry and Exit criteria, activities & deliverables associated with
it.
• Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing can begin.
• Exit Criteria: Exit Criteria defines the items that must be completed before testing can be concluded.
Requirement Analysis Phase
• Entry Criteria
• Requirements Document available (both functional and non functional)
• Acceptance Criteria Defined
• Application architectural document available
• Activity
• Analyse business functionality to know the business modules and module specific functionalities.
• Identify all transactions in the modules.
• Identify all user profiles
• Gather user interface/authentication, geographic spread requirements
• Identify Types of tests to be performed
• Prepare Requirement Traceability Matrix (RTM)
• Automation Feasibility analysis (if required)
• Exit Criteria
• Signed Off RTM
• Signed off Automation Feasibility Report
• Deliverables
• RTM
• Automation Feasibility Report (if required)
Test Planning Phase
• Entry Criteria
• Requirements Documents
• Requirement Traceability Matrix
• Test Automation Feasibility Document
• Activity
• Analyse various testing approaches available and finalise the best-suited testing approach
• Preparation of Test Plan/Strategy document
• Test Tool selection
• Test Effort Estimation
• Resource Planning and determining roles and responsibilities
• Exit Criteria
• Approved Test Plan/Strategy document
• Signed off Effort Estimation document
• Deliverables
• Test Plan/Strategy document
• Effort Estimation document
Test Case Development Phase
• Entry Criteria
• Requirements Documents
• Requirement Traceability Matrix and Test Plan
• Automation Analysis Report
• Activity
• Create test cases, test design, automation scripts (wherever applicable)
• Review and baseline Test Cases and Automation Scripts
• Create Test Data
• Exit Criteria
• Reviewed and Signed Off Test Cases & Scripts
• Reviewed and Signed Off Test data
• Deliverables
• Test Cases & Scripts
• Test Data
Environment Setup Phase
• Entry Criteria
• System Design and Architecture documents
• Environment Setup Plan
• Activity
• Understand the required architecture, environment set-up document
• Setup test Environment and test data
• Perform smoke test on the build
• Accept/reject the build depending on smoke test result
• Exit Criteria
• Environment setup is working as per the plan and checklist
• Test data setup is complete
• Smoke test is successful
• Deliverables
• Environment ready with test data set up
• Smoke test result
Test Execution Phase
• Entry Criteria
• Baseline RTM, Test Plan, Test Case/Scripts
• Test environment ready
• Test data Setup done
• Activity
• Execute tests as per plan
• Document test results and log defects for failed cases
• Update Test plans and test cases (if necessary)
• Map defects to test cases in RTM
• Retest the fixed defects
• Regression Testing of Application
• Exit Criteria
• All tests planned are executed
• Defects logged and tracked to closure
• Deliverables
• Completed RTM with execution status
• Test cases updated with results
• Defect Reports
Test Cycle Closure Phase
• Entry Criteria
• Testing has been completed
• Test results and Defect logs
• Activity
• Evaluate cycle completion criteria based on – Time, Test Coverage, Cost, Software
Quality, Critical business Objectives
• Document the learning out of the Project
• Qualitative and quantitative reporting of Test Result Analysis
• Exit Criteria
• Test Closure report signed off by client
• Deliverables
• Test Closure Report
• Test Metrics
Requirement Traceability Matrix
• RTM is a document that maps and traces user requirements to test
cases.
• It captures all requirements proposed by the client and requirement
traceability in a single document.
• The main purpose of the Requirement Traceability Matrix is to verify
that all requirements are covered by test cases so that no
functionality is left unchecked during software testing.
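An RTM can be pictured as a mapping from requirement IDs to the test cases covering them; all IDs below are hypothetical:

```python
# Hypothetical requirement and test case IDs for illustration.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],  # covered by two test cases
    "REQ-002": ["TC-003"],            # covered by one test case
    "REQ-003": [],                    # no coverage yet
}

# The RTM's main purpose: flag requirements with no covering test case.
uncovered = [req for req, cases in rtm.items() if not cases]
assert uncovered == ["REQ-003"]
```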
Test Case Contents
• Test Case Id – Typically a numeric or alphanumeric identifier that QA engineers and
testers use to group test cases into test suites.
• Test Case Title – A title that describes the functionality or feature that the test is
verifying.
• Description – It describes what the test intends to verify in one to two sentences.
• Pre-Condition – Any conditions that are necessary for the tester or QA engineer to
perform the test.
• Priority (P0,P1,P2,P3) – Priority of the test case with respect to Customer Requirement
• Steps/Actions – Detailed descriptions of the sequential actions that must be taken to
complete the test.
• Expected Result – An outline of how the system should respond to each test step.
• Actual Result – Actual output of the steps performed
• Test Data – Test data needed to execute each step
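The fields above might be captured as a simple record; the values below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    title: str
    description: str
    pre_condition: str
    priority: str            # P0 (highest) .. P3 (lowest)
    steps: list
    expected_result: str
    actual_result: str = ""  # filled in during test execution
    test_data: dict = field(default_factory=dict)

tc = TestCase(
    test_case_id="TC-001",
    title="Valid login",
    description="Verify that a registered user can log in.",
    pre_condition="User account exists",
    priority="P0",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User lands on the dashboard",
    test_data={"username": "alice", "password": "secret"},
)
```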
Characteristics of good test case
• A good test case has certain characteristics which are:
• It should be accurate and test what it is intended to test
• It should not include unnecessary steps
• It must be traceable to requirements
• It should be independent, i.e. you should be able to execute it in any
order without dependency on other test cases
• It should be simple and clear, i.e. any tester should be able to
understand and execute it.
Defect Reporting
• Any mismatched functionality found in an application is a
Defect/Bug/Issue.
• During test execution, test engineers report mismatches as bugs to the
development team using defect templates or bug-reporting tools
• Defect Reporting Tools:
• Jira
• Quality Centre (HP ALM)
• Bugzilla
• DevTrack
• ClearQuest etc.
Defect Report Contents
• Defect ID – Unique identification number for defects
• Defect Description – Detailed description of defect including information about the
module in which defect was found
• Version – Version of the application in which defect was found
• Steps – Detailed steps along with screenshots to help developer reproduce the defect
• Date Raised – Date when defect is raised
• Detected By – Name/Id of the tester who raised the defect
• Status – Status of the defect
• Fixed By – name/ID of the developer who fixed it
• Date Closed – Date when the defect is closed
• Severity – describes the impact of defect on the application
• Priority – related to defect fixing urgency
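A defect report with these fields might look like the record below (all values hypothetical); note how severity and priority can differ:

```python
# Hypothetical defect record; field names mirror the list above.
defect = {
    "defect_id": "BUG-101",
    "description": "Company logo misspelled on the home page",
    "version": "2.3.0",
    "steps": ["Open the home page", "Observe the header logo text"],
    "date_raised": "2024-01-15",
    "detected_by": "tester01",
    "status": "New",
    "severity": "Low",   # impact on the application is small
    "priority": "High",  # but a branding defect must be fixed urgently
}

assert defect["severity"] != defect["priority"]  # the two are independent
```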
Defect Management Process
Defect Discovery Phase