Manual Testing

What is software?
• Software is a collection of computer programs that helps us perform a task
• Types of Software:
1) System Software
Ex: Device drivers, OS, Servers etc.
2) Programming Software
Ex: Compiler, Debugger, Interpreter etc.
3) Application Software
Ex: Industrial Automation, Business software, Games etc.
Software Development Life Cycle (SDLC)
• SDLC (Software Development Life Cycle) is a process used by the software
industry to design, develop and test high-quality software.
• It aims to produce high-quality software that meets customer
expectations.
SDLC Models
• Waterfall Model
• Incremental Model
• Spiral Model
• V-Model
• Agile Model
Waterfall Model
Incremental/Iterative Model
Spiral Model
V Model
Agile Model
What is Software Testing?
• Software Testing is part of the Software Development Process
• Software Testing is an activity to detect and identify the defects in
software
• The objective of testing is to release quality product to the client
What is the need for testing?
• Ensure that the software is bug free
• Ensure that the system meets customer requirements and software
specifications
• Ensure that the system meets end-user expectations
• Fixing bugs identified after software release is very expensive
Software Quality
• Quality Software is reasonably:
• Bug Free
• Delivered on time
• Within budget
• Meets requirements/expectations
• Maintainable
• Quality: Quality is defined as meeting all of the customer's
requirements in a product
• Note: Quality is not defined in the product. It is defined in the customer's
mind.
Error, Bug & Failure
• Error – Any incorrect human action that triggers a problem in the
system is called an error.
• Defect/Bug – A deviation of the actual behaviour from the expected
behaviour is called a bug.
• Failure – A deviation identified by the end user while using the system
is called a failure.
Why are there bugs in software?
• Miscommunication
• Software Complexity
• Programming Errors
• Changing Requirements
• Lack of Skilled Testers
etc.
Quality Assurance & Quality Control
Verification & Validation
Types of Software Testing
• White Box Testing
• Black Box Testing
• Grey Box Testing
White Box Testing
• White Box Testing is conducted on the internal logic of the program.
• Programming skills are required. Generally, developers perform this
testing.
• Ex: Unit Testing & Integration Testing
Black Box Testing
• Testing is conducted on the functionality of the application to check whether
it works according to the requirements or not.
• Ex: System Testing & User Acceptance Testing (UAT)
Grey Box Testing
• This is a combination of both White Box and Black Box testing.
• Ex: Database Testing
Black Box Testing Types
• GUI Testing
• Usability Testing
• Functional Testing
• Non-Functional Testing
What is GUI & GUI Testing?
• There are two types of interfaces in a computer application.
• Command Line Interface – where we type text (commands) and the computer
responds to those commands.
• Graphical User Interface – where we interact with the computer using images
rather than text.
• GUI Testing involves checking screens with controls like menus,
buttons, icons and all types of bars – tool bar, menu bar, dialog boxes,
windows etc.
• During GUI Testing, the Test Engineer validates the look & feel of the
application. They also validate that it is easy to use and that navigation
and shortcut keys work correctly.
Checklist for GUI Testing
• Check whether all the basic elements are available on the page.
• Check the spelling of the objects.
• Check the alignment of the objects on the page.
• Check the content displayed on the web pages.
• Check whether the mandatory fields are highlighted.
• Check consistency in background colour, font type and font size
etc.
Usability Testing
• Usability Testing, also known as User Experience (UX) Testing, is a
testing method for measuring how easy and user-friendly a software
application is.
• Checks how easily end users are able to understand and operate the
application
Functional Testing
• FUNCTIONAL TESTING is a type of software testing that validates the
software system against the functional requirements/specifications.
• The purpose of functional tests is to test each function of the software
application by providing appropriate input and verifying the output against
the functional requirements.
• Following is a step-by-step process for functional testing:
• Understand the functional requirements
• Identify test input or test data based on the requirements
• Compute the expected outcomes for the selected test input values
• Execute the test cases
• Compare actual and computed expected results
Checklist for Functional Testing
• Object Properties Coverage
• Input Domain Coverage
• Database Testing/Backend testing
• Error Handling Coverage
• Link Existence & Link Coverage
Object Properties Testing
• Every object has certain properties.
• Ex: Enabled, Disabled, Focused etc.
• During functional testing, Test Engineers validate the properties of objects
at run time.
Input Domain Testing
• During Input Domain Testing, the Test Engineer validates the data provided to
the application w.r.t. value and length.
• There are two input domain testing techniques (a small sketch follows this list):
• Equivalence Partitioning or Equivalence Class Partitioning
• In this technique, input data is divided into equivalent partitions from which test
cases are derived. This reduces testing time because only a small number of test
cases is needed. (min-1, a value between min and max, max+1)
• Boundary Value Analysis
• It tests values at the boundaries, because input values near a boundary have a
higher chance of error. (min-1, min, min+1, max-1, max, max+1)
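A minimal sketch in Python of how the two techniques derive concrete test values. The "Age" field and its 18-to-60 range are assumptions for illustration, not from the slides:

def equivalence_partition_values(min_value, max_value):
    """One representative value per partition: invalid-low, valid, invalid-high."""
    return {
        "invalid_low": min_value - 1,           # just below the valid range
        "valid": (min_value + max_value) // 2,  # any value inside the range
        "invalid_high": max_value + 1,          # just above the valid range
    }

def boundary_values(min_value, max_value):
    """Values at and around each boundary: min-1, min, min+1, max-1, max, max+1."""
    return [min_value - 1, min_value, min_value + 1,
            max_value - 1, max_value, max_value + 1]

# Hypothetical "Age" field that accepts values from 18 to 60
print(equivalence_partition_values(18, 60))  # {'invalid_low': 17, 'valid': 39, 'invalid_high': 61}
print(boundary_values(18, 60))               # [17, 18, 19, 59, 60, 61]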
Database Testing
• During database testing, Test Engineers validate the application data
against the database (a small sketch follows this list)
• Validates DML operations (Insert, Update, Delete & Select)
• SQL language categories: DDL, DML, DCL, TCL etc.
• DDL – Create, Alter, Drop
• DML – Select, Insert, Update, Delete
• DCL – Grant, Revoke
• TCL – Commit, Rollback etc.
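A minimal back-end check sketched in Python with the built-in sqlite3 module. The users table and email value are assumptions for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the application database

# DDL: create the schema (assumed for illustration)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# DML: the application action under test, e.g. registering a user
conn.execute("INSERT INTO users (email) VALUES (?)", ("tester@example.com",))
conn.commit()  # TCL: make the change permanent

# Back-end check: the row the UI claims to have created actually exists
row = conn.execute(
    "SELECT email FROM users WHERE email = ?", ("tester@example.com",)
).fetchone()
assert row is not None, "Insert was not persisted to the database"
print("DML Insert verified:", row[0])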
Error Handling Testing
• Validates the error messages thrown by the application when invalid
data is provided.
• Error messages should be clear and easy for the user to understand (a
small sketch follows).
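A minimal sketch in Python of checking that invalid data produces a clear, specific error message. The validate_age function and its messages are hypothetical, not from the slides:

def validate_age(value):
    """Hypothetical application function: accepts ages 18 to 60."""
    if not value.isdigit():
        raise ValueError("Age must be a whole number")
    age = int(value)
    if not 18 <= age <= 60:
        raise ValueError("Age must be between 18 and 60")
    return age

# Error-handling test: invalid data should produce a clear, specific message
try:
    validate_age("abc")
except ValueError as err:
    assert str(err) == "Age must be a whole number"
    print("Clear error message shown:", err)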
Link Coverage Testing
• Link Existence – whether links are placed in the appropriate location
• Link Execution – whether a link navigates to the appropriate page
• Types of links (a small link-checker sketch follows this list):
• Internal Links (hyperlinks that point to another page on your website)
• External Links (hyperlinks that point to pages on another website)
• Broken Links (hyperlinks which are not working)
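A minimal link-checker sketch in Python using only the standard library. The URL https://example.com/ is a placeholder for the page under test:

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

base_url = "https://example.com/"  # placeholder page under test
parser = LinkCollector()
parser.feed(urlopen(base_url).read().decode("utf-8", errors="ignore"))

for href in parser.links:
    url = urljoin(base_url, href)  # resolve relative links against the page
    kind = "internal" if url.startswith(base_url) else "external"
    try:
        status = urlopen(url, timeout=10).status  # 2xx means the link works
        print(kind, "OK", url, status)
    except (HTTPError, URLError) as err:  # broken link
        print(kind, "BROKEN", url, err)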
Non Functional testing
• NON-FUNCTIONAL TESTING is defined as a type of software testing that
checks the non-functional aspects (performance, usability, reliability, etc.)
of a software application. An excellent example of a non-functional test
is checking how many people can simultaneously log in to the
software.
• Non Functional Testing techniques:
• Performance Testing
• Security Testing
• Compatibility Testing
• Configuration testing
• Installation Testing
Performance Testing
• Load Testing – testing the speed of the system while gradually increasing
the load up to the expected number of customers (positive condition; a
minimal sketch follows this list)
• Stress Testing – testing how the system recovers after we increase the
load beyond the expected number in order to break it (negative condition)
• Whether the system recovers to normalcy from the abnormality caused by the
system breakdown
• Volume Testing – checks how large a volume of data the system can
handle
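A minimal load-test sketch in Python that fires concurrent requests and reports response times. The URL and user count are assumptions for illustration:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"  # placeholder system under test
USERS = 20                    # simulated concurrent users

def one_request(_):
    """Time a single request, simulating one user."""
    start = time.perf_counter()
    with urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_request, range(USERS)))

print(USERS, "concurrent requests:",
      "avg %.3fs" % (sum(timings) / len(timings)),
      "worst %.3fs" % max(timings))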
Security Testing
• Testing security provided by the system
• Types:
• Authentication
• Access Control/Authorization
• Encryption/Decryption
Compatibility Testing
• Testing compatibility of the system w.r.t OS, H/W & Browsers.
• Operating System Compatibility
• Hardware Compatibility
• Browser Compatibility
• Forward & Backward Compatibility
Installation Testing
• Testing installation of the application on the customer's expected platforms:
check the installation steps, navigation and how much disk space is
occupied.
• Check un-installation as well.
Testing Terminology
• Ad-hoc Testing
• Software testing without proper planning and documentation
• Testing carried out using the tester's knowledge of the application; the tester tests the
application randomly without following the specifications/requirements
• Re-Testing
• Testing the same functionality repeatedly is called re-testing
• Re-testing is needed in the following cases:
• Testing functionality with multiple inputs
• Testing functionality on different environments (different browsers & OS)
• Testing functionality in the modified build to confirm that bug fixes were made correctly
• Regression testing
• The process of identifying scenarios in a modified build where there is a
chance of side effects, and retesting those scenarios.
Testing Terminology cont…
• Sanity Testing
• Sanity testing is a kind of software testing performed after receiving a software
build with minor changes in code or functionality, to ascertain that the bugs
have been fixed and no further issues have been introduced by these changes.
• The goal is to determine that the proposed functionality works roughly as
expected.
• It is narrow and deep.
• Smoke Testing
• Testing done to verify that the critical functionalities of the software are working fine.
It is executed before any detailed functional testing.
• The main purpose of smoke testing is to reject a defective software build
so that the QA team does not waste time testing a broken application.
• It is shallow and wide.
Testing Terminology cont…
• End to End Testing
• Testing the overall functionality of the system, including the data integration
among all modules
• The purpose of end-to-end testing is to test the whole software for
dependencies, data integrity and communication with other systems,
interfaces and databases, exercising a complete production-like scenario.
• Exploratory Testing
• A type of software testing where test cases are not created in advance;
testers check the system on the fly. They may note down ideas about what to
test before test execution.
• The focus of exploratory testing is more on testing as a "thinking" activity.
Testing Terminology cont…
• Positive Testing
• A type of testing performed on a software application by
providing valid data sets as input.
• It checks whether the software application behaves as expected with positive
inputs.
• Positive testing is performed to check that the software
application does exactly what it is expected to do.
• Negative Testing
• A type of software testing used to check the software application with invalid
or unexpected input data and conditions.
• The purpose of negative testing is to prevent the software application from
crashing due to negative inputs, and to improve its quality and stability.
Software Testing Life Cycle (STLC)
• STLC is a sequence of specific activities conducted during the testing process to ensure that software
quality goals are met.
• It involves both verification and validation activities.
• Each of these stages has definite entry and exit criteria, activities & deliverables associated with
it.
• Entry Criteria: the prerequisite items that must be completed before testing can begin.
• Exit Criteria: the items that must be completed before testing can be concluded.
Requirement Analysis Phase
• Entry Criteria
• Requirements Document available (both functional and non functional)
• Acceptance Criteria Defined
• Application architectural document available
• Activity
• Analyse business functionality to identify the business modules and module-specific functionalities.
• Identify all transactions in the modules.
• Identify all user profiles
• Gather user interface, authentication and geographic-spread requirements
• Identify Types of tests to be performed
• Prepare Requirement Traceability Matrix (RTM)
• Automation Feasibility analysis (if required)
• Exit Criteria
• Signed Off RTM
• Signed off Automation Feasibility Report
• Deliverables
• RTM
• Automation Feasibility Report (if required)
Test Planning Phase
• Entry Criteria
• Requirements Documents
• Requirement Traceability Matrix
• Test Automation Feasibility Document
• Activity
• Analyse the various testing approaches available and finalise the best-suited approach
• Preparation of Test Plan/Strategy document
• Test Tool selection
• Test Effort Estimation
• Resource Planning and determining roles and responsibilities
• Exit Criteria
• Approved Test Plan/Strategy document
• Signed off Effort Estimation document
• Deliverables
• Test Plan/Strategy document
• Effort Estimation document
Test Case Development Phase
• Entry Criteria
• Requirements Documents
• Requirement Traceability Matrix and Test Plan
• Automation Analysis Report
• Activity
• Create test cases, test design, automation scripts (wherever applicable)
• Review and baseline Test Cases and Automation Scripts
• Create Test Data
• Exit Criteria
• Reviewed and Signed Off Test Cases & Scripts
• Reviewed and Signed Off Test data
• Deliverables
• Test Cases & Scripts
• Test Data
Environment Setup Phase
• Entry Criteria
• System Design and Architecture documents
• Environment Setup Plan
• Activity
• Understand the required architecture and environment set-up documents
• Set up the test environment and test data
• Perform a smoke test on the build
• Accept/reject the build depending on the smoke test result
• Exit Criteria
• Environment setup is working as per the plan and checklist
• Test data setup is complete
• Smoke test is successful
• Deliverables
• Environment ready with test data set up
• Smoke test result
Test Execution Phase
• Entry Criteria
• Baselined RTM, Test Plan and Test Cases/Scripts
• Test environment ready
• Test data Setup done
• Activity
• Execute tests as per plan
• Document test results and log defects for failed cases
• Update Test plans and test cases (if necessary)
• Map defects to test cases in RTM
• Retest the fixed defects
• Regression Testing of Application
• Exit Criteria
• All tests planned are executed
• Defects logged and tracked to closure
• Deliverables
• Completed RTM with execution status
• Test cases updated with results
• Defect Reports
Test Cycle Closure Phase
• Entry Criteria
• Testing has been completed
• Test results and Defect logs
• Activity
• Evaluate cycle completion criteria based on – Time, Test Coverage, Cost, Software
Quality, Critical business Objectives
• Document the learning out of the Project
• Qualitative and quantitative reporting of Test Result Analysis
• Exit Criteria
• Test Closure report signed off by client
• Deliverables
• Test Closure Report
• Test Metrics
Requirement Traceability Matrix
• RTM is a document that maps and traces user requirements to test
cases.
• It captures all requirements proposed by the client and the requirement
traceability in a single document.
• The main purpose of the Requirement Traceability Matrix is to validate
that every requirement is checked by at least one test case, so that no
functionality goes unchecked during software testing. A small coverage-check
sketch follows.
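A minimal sketch in Python of an RTM as a mapping, with a check that no requirement is left uncovered. The requirement and test-case IDs are hypothetical:

# RTM as a mapping from requirement IDs to the test cases that cover them
rtm = {
    "REQ-001": ["TC-001", "TC-002"],  # e.g. login
    "REQ-002": ["TC-003"],            # e.g. password reset
    "REQ-003": [],                    # e.g. report export: no coverage yet!
}

uncovered = [req for req, tests in rtm.items() if not tests]
if uncovered:
    print("Requirements with no test coverage:", ", ".join(uncovered))
else:
    print("All requirements are covered by at least one test case")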
Test Case Contents
• Test Case Id – Typically a numeric or alphanumeric identifier that QA engineers and
testers use to group test cases into test suites.
• Test Case Title – A title that describes the functionality or feature that the test is
verifying.
• Description – It describes what the test intends to verify in one to two sentences.
• Pre-Condition – Any conditions that are necessary for the tester or QA engineer to
perform the test.
• Priority (P0, P1, P2, P3) – Priority of the test case with respect to the customer requirements
• Steps/Actions – Detailed descriptions of the sequential actions that must be taken to
complete the test.
• Expected Result – An outline of how the system should respond to each test step.
• Actual Result – The actual output of the steps performed
• Test Data – The test data needed to execute each step (a minimal record sketch follows)
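A minimal sketch in Python of a test-case record with the contents listed above. The field names and sample values are illustrative, not a mandated template:

from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str       # e.g. "TC-LOGIN-001"
    title: str
    description: str
    pre_condition: str
    priority: str           # "P0".."P3"
    steps: list = field(default_factory=list)  # sequential actions
    expected_result: str = ""
    actual_result: str = ""                    # filled in during execution
    test_data: dict = field(default_factory=dict)

tc = TestCase(
    test_case_id="TC-LOGIN-001",
    title="Login with valid credentials",
    description="Verify that a registered user can log in",
    pre_condition="User account exists",
    priority="P1",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User lands on the dashboard",
    test_data={"email": "tester@example.com", "password": "secret"},
)
print(tc.test_case_id, "-", tc.title)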
Characteristics of good test case
• A good test case has certain characteristics which are:
• Should be accurate and tests what it is intended to test
• No un-necessary steps should be included in it
• It must be traceable to requirements
• It should be independent i.e. you should be able to execute it in any order
without dependency on other test cases
• It should be simple and clear i.e. any tester should be able to understand it
and execute it.
Defect Reporting
• Any mismatched functionality found in an application is a
defect/bug/issue.
• During test execution, Test Engineers report mismatches as bugs to the
development team using defect templates or bug-reporting tools
• Defect Reporting Tools:
• Jira
• Quality Centre (HP ALM)
• Bugzilla
• DevTrack
• ClearQuest etc.
Defect Report Contents
• Defect ID – Unique identification number for defects
• Defect Description – Detailed description of defect including information about the
module in which defect was found
• Version – Version of the application in which defect was found
• Steps – Detailed steps along with screenshots to help developer reproduce the defect
• Date Raised – Date when defect is raised
• Detected By – Name/Id of the tester who raised the defect
• Status – Status of the defect
• Fixed By – name/ID of the developer who fixed it
• Date Closed – Date when the defect is closed
• Severity – describes the impact of defect on the application
• Priority – related to defect fixing urgency
Defect Management Process
Defect Discovery Phase

Defect Categorization Phase


Defect Severity
• Severity describes the seriousness of a bug in terms of functionality
• In software testing, defect severity is the degree of
impact a defect has on the development or operation of the application
under test
• Defect Severity can be classified into four categories:
• Critical – This defect indicates complete shut-down of the process, nothing can
proceed further.
• Major – It is a highly severe defect and collapses the system. However, certain
parts of the system remain functional.
• Medium – It causes some undesirable behaviour, but the system is still functional.
• Low – It won’t cause any major break-down of the system.
Defect Priority
• Priority is the importance of a bug in terms of customer requirements.
• Priority states the order in which defects should be fixed. The higher the
priority, the sooner the bug should be resolved.
• Defect priority can be classified into three categories:
• P1 (High) – The defect must be resolved as soon as possible, as it affects the
system severely; the system cannot be used until it is fixed.
• P2 (Medium) – The defect should be resolved during the normal course of
development activities. It can wait until a new version is created.
• P3 (Low) – The defect is an irritant, but its repair can wait until more
serious defects have been fixed.
Tips for determining Severity of defect
• Consider the frequency of occurrence: if a minor defect occurs frequently in the code,
it can be more severe. From a user's perspective it is then more serious, even
though technically it is a minor defect.
• Isolate the defect: isolating the defect can help to determine the severity of its impact
Defect Triage
• Defect Triage is a process that re-balances the testing
process when the test team faces limited availability
of resources.
• When there is a huge number of defects and a limited number of testers to verify
them, defect triage helps to get as many defects as possible resolved, based
on defect parameters like severity and priority.
Defect Triage Process
• The triage process includes the following steps:
• Reviewing all the defects, including rejected defects, with the team
• Initial assessment of the defects based on their priority and severity
• Prioritising the defects based on the inputs
• Assigning the defect to the correct release (by the product manager)
• Re-directing the defect to the correct owner/team for further action
Defect/Bug Life Cycle
States of Defects
• New – A bug is reported and is yet to be assigned to a developer
• Assigned – Once the bug is posted by the tester, the test lead approves the bug and assigns
it to the developer team
• Open – The developer starts analysing and working on the defect fix
• Need More Info – The developer needs more information to reproduce or fix the bug
• Fixed – The bug is fixed and waiting for validation
• Closed – The bug is fixed, validated and closed
• Rejected – The bug is not genuine
• Deferred – The fix will be included in a future build
• Duplicate – Two bugs report the same issue
• Invalid – Not a valid bug
• Reopened – The bug still exists even after being fixed by the developer
(a simplified state-machine sketch follows)
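A minimal sketch in Python modelling the defect states and a few legal transitions. The transition table is simplified; real defect trackers allow more paths:

from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    REOPENED = "Reopened"
    CLOSED = "Closed"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"

# Simplified transition table; real defect trackers allow more paths
ALLOWED = {
    DefectState.NEW: {DefectState.ASSIGNED, DefectState.REJECTED, DefectState.DEFERRED},
    DefectState.ASSIGNED: {DefectState.OPEN},
    DefectState.OPEN: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.OPEN},
}

def move(current, new):
    """Apply a transition, rejecting moves the life cycle does not allow."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError("Illegal transition: %s -> %s" % (current.value, new.value))
    return new

state = move(DefectState.NEW, DefectState.ASSIGNED)  # OK
print(state.value)                                   # "Assigned"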
QA/Testing Activities
• Understanding the requirements and functional specifications of the application
• Identifying the required test scenarios
• Designing test cases to validate the application
• Executing test cases to validate the application
• Logging test results (how many test cases passed/failed)
• Defect reporting and tracking
• Retesting fixed defects and performing regression testing to catch any side effects
caused by defect fixes
• Creating automation scripts for regression testing
• Providing a recommendation on whether or not the application/system is ready for
production.
