Software Testing

The document provides an overview of testing documentation, emphasizing its importance in software testing processes. It details various types of test documents including test scenarios, test cases, test plans, and bug reports, along with their purposes and components. Additionally, it discusses test case design techniques, error identification, and popular bug tracking tools used in the industry.

Chapter 1
Introduction to Test Case Design
Testing Documentation
Testing documentation is the documentation of artifacts that are created before or during the testing of a software application. Documentation reflects the importance of the process to the customer, the individual, and the organization.

Careful documentation can save the organization time, effort, and money.

 Once the test document is ready, the entire test execution process depends on the test
document.
 The primary objective for writing a test document is to decrease or eliminate the doubts
related to the testing activities.
Types of test documents



Test Scenarios
It is a document that defines the multiple ways or combinations in which the application can be tested. Generally, it is prepared to understand the flow of an application. It does not contain any inputs or navigation steps.
Test scenarios for the Login module
• Enter valid login details (username, password) and check that the home page is displayed.
• Enter an invalid username and password and check that the home page is not displayed.
• Leave the username and password blank and check that an error message is displayed.
• Enter valid login details, click Cancel, and check that the fields are reset.
• Enter invalid login details more than three times and check that the account is blocked.
• Enter valid login details and check that the username is displayed on the home screen.
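A minimal sketch of how these scenarios could be automated with pytest. The login() function, its result fields, and the credentials below are invented stand-ins for illustration; they are not part of the slides, and a real suite would drive the actual application UI or API.

```python
# Illustrative sketch only: login() is a tiny stand-in defined here so the tests run.
from collections import namedtuple

LoginResult = namedtuple("LoginResult", "home_page_displayed error_message account_blocked")

VALID_USER, VALID_PASS = "user1", "secret"
_failed_attempts = {}  # failed attempts per username


def login(username, password):
    if username == VALID_USER and password == VALID_PASS:
        return LoginResult(True, "", False)
    _failed_attempts[username] = _failed_attempts.get(username, 0) + 1
    return LoginResult(False, "Invalid credentials", _failed_attempts[username] >= 3)


def test_valid_login_shows_home_page():
    assert login(VALID_USER, VALID_PASS).home_page_displayed


def test_invalid_login_does_not_show_home_page():
    result = login("bad_user", "bad_pass")
    assert not result.home_page_displayed and result.error_message


def test_blank_credentials_show_error_message():
    assert login("", "").error_message


def test_account_blocked_after_three_invalid_attempts():
    for _ in range(3):
        result = login(VALID_USER, "wrong_pass")
    assert result.account_blocked
```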
Test case

 It is a detailed document that describes the step-by-step procedure for testing an application.

 It consists of the complete navigation steps and inputs, and all the scenarios that need to be tested for the application.
Test plan

 It is a document that is prepared by the test manager or test lead.


 It consists of all information about the testing activities.
 The test plan consists of multiple components such as Objectives, Scope, Approach, Test
Environments, Test methodology, Template, Role & Responsibility, Effort estimation,
Entry and Exit criteria, Schedule, Tools, Defect tracking, Test Deliverable, Assumption,
Risk, and Mitigation Plan or Contingency Plan.
Requirement Traceability Matrix (RTM)
• The Requirement Traceability Matrix (RTM) is a document which ensures that every requirement is covered by test cases.
• This document is created before the test execution process to verify that we did not miss writing a test case for any particular requirement.
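A small illustrative sketch of the idea behind an RTM: map each requirement to the test cases that cover it and flag any requirement left uncovered. The requirement and test case IDs below are invented.

```python
# Illustrative only: requirement and test case IDs are invented for this sketch.
rtm = {
    "REQ-001 Login with valid credentials": ["TC_UI_1"],
    "REQ-002 Error on invalid credentials": ["TC_UI_2", "TC_UI_3"],
    "REQ-003 Account lock after 3 failures": [],  # gap: no test case written yet
}

# The RTM check: every requirement should have at least one covering test case.
uncovered = [req for req, cases in rtm.items() if not cases]
if uncovered:
    print("Requirements with no test case:", uncovered)
```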

Test strategy
 It is used to define the test types (levels) to be executed for the product and also describes what kind of technique has to be used and which modules are going to be tested.
 It is approved by the Project Manager. It includes multiple components such as documentation formats, objectives, test processes, scope, customer communication strategy, etc. We cannot modify the test strategy.
Test data
 It is the data prepared before the test is executed. It is mainly used when we are implementing the test case.
 The test data can be used to check the expected result, which means that when the test data is entered, the expected outcome should match the actual result; it can also be used to check the application's behavior when incorrect input data is entered.

Bug report
 The bug report is a document where we maintain a summary of all the bugs that occurred during the testing process.
 It is a crucial document for both developers and test engineers because, with the help of bug reports, they can easily track defects, report bugs, change the status of bugs that have been fixed successfully, and avoid their repetition in future cycles.
Test execution report
 It is the document prepared by test leads after the entire testing execution process is
completed.

Basics of Test Case Design
 A test case is defined as a group of conditions or actions performed on a software application, under which a tester determines whether the software application is working as per the customer's requirements or not.
 Test case design includes preconditions, case name, input conditions, and expected result.
 Designing a test case is the most challenging assignment for test engineers.
 Test cases must be designed against two criteria: 1) reliability, 2) validity.
A set of test cases is considered reliable if it detects all errors.
A set of test cases is considered valid if at least one test case reveals an error.
Test Case Design Techniques

 A good test case design technique is important for improving the quality of the software testing process.
 This helps to improve the overall quality and effectiveness of the released software.

The test case design techniques are broadly classified into three major categories:

1. Specification-Based techniques

2. Structure-Based techniques

3. Experience-Based techniques
1. Specification-Based or Black-Box Techniques
This technique uses the external description of the software, such as technical specifications, design, and the client's requirements, to design test cases. The technique enables testers to develop test cases that provide full test coverage. The specification-based or black-box test case design techniques are divided further into five categories. These categories are as follows:

Boundary Value Analysis (BVA)

Equivalence Partitioning (EP)

Decision Table Testing

State Transition Diagrams

Use Case Testing
2. Structure-Based or White-Box Techniques
The structure-based or white-box technique designs test cases based on the internal structure of the software. This technique exhaustively tests the developed code. Developers, who have complete information about the software code, its internal structure, and its design, help to design the test cases. This technique is further divided into five categories:

Statement Testing & Coverage

Decision Testing & Coverage

Condition Testing

Multiple Condition Testing

All Path Testing


3. Experience-Based Techniques
These techniques are highly dependent on the tester's experience to understand the most important areas of the software. The outcomes of these techniques are based on the skills, knowledge, and expertise of the people involved. The types of experience-based techniques are as follows:

Error Guessing

Exploratory Testing
IDENTIFICATION OF ERRORS AND BUGS IN AN APPLICATION
Generally, an error is an incorrect human action that produces an incorrect result.
Errors are of the following types:

 Syntax error: Syntax errors are mistakes in the source code, such as spelling and punctuation errors, incorrect labels, and so on, which cause an error message to be generated by the compiler.

 Logical error: On compilation and execution of a program, the desired output is not obtained when certain input values are given. These types of errors, which produce incorrect output but appear to be error-free, are called logical errors.
Software Bugs
• A software bug is an error, flaw or fault in computer software that causes it to produce an incorrect or
unexpected result, or to behave in unintended ways. The process of finding and correcting bugs is
termed "debugging" and often uses formal techniques or tools to pinpoint bugs

• Software bugs are classified as follows

Functional Bugs : Functional bugs are related to the functionality of a piece of software.
Examples: A button doesn't submit the form, the search doesn't react to the user input, the app
crashes. Every time you perform an action and the website/app doesn't respond as you expected, it
might be a functional issue.

Logical Bugs : A logical bug disrupts the intended workflow of software and causes it to behave
incorrectly. These bugs can result in unexpected software behavior and even sudden crashes.

Logical bugs primarily take place due to poorly written code or misinterpretation of business logic.
Relation between errors, bugs, and failures



What is an Error?
Problems in the code lead to errors: a mistake can occur due to the developer's coding error, because the developer misunderstood the requirement, or because the requirement was not defined correctly. Developers use the term error.

What is a Fault?
A fault may occur in software when code for fault tolerance has not been added, making the application act up.

What is a Failure?
Many defects lead to the software's failure. A failure is a fatal issue in the software/application or in one of its modules that makes the system unresponsive or broken. In other words, we can say that if an end user detects an issue in the product, then that particular issue is called a failure.
Software bugs occur for many reasons, such as:

 Wrong coding

 Extra Coding

 Missing Coding



Bug Tracking Tools

We have various bug tracking tools available in software testing that help us track bugs related to the software or the application.

Some of the most commonly used bug tracking tools are as follows:
Jira

 Jira is a commercial (not open-source) tool from Atlassian that is used for bug tracking, project management, and issue tracking.
 Jira includes different features, like reporting, recording, and workflow.
 In Jira, we can track all kinds of bugs and issues related to the software and raised by the test engineers.
Bugzilla
 It is widely used by many organizations to track bugs. It is an open-source tool that helps the customer and the client keep track of bugs.

 It can also be used as a test management tool because it can easily be linked with other test case management tools such as ALM, Quality Center, etc.

 It supports various operating systems such as Windows, Linux, and Mac.
Features of the Bugzilla tool

Bugzilla has some features which help us to report the bug easily:

• A bug can be listed in multiple formats

•Email notification controlled by user preferences.

•It has advanced searching capabilities

•This tool ensures excellent security.

•Time tracking
BugNet

It is an open-source defect tracking and project issue management tool, written in ASP.NET with C#, that supports the Microsoft SQL Server database. The objective of BugNet is to reduce the complexity of the code, which makes deployment easy.

Features of the BugNet tool

The features of the BugNet tool are as follows:
• It provides excellent security with simple navigation and administration.
• BugNet supports multiple projects and databases.
• With the help of this tool, we can get email notifications.
• It has the capability to manage projects and milestones.
• This tool has an online support community.
Redmine

Redmine is an open-source, web-based project management and issue tracking tool. Redmine is written in the Ruby programming language and is compatible with multiple databases such as MySQL, Microsoft SQL Server, and SQLite.

Features of the Redmine tool

Some of the common characteristics of the Redmine tool are as follows:
• Flexible role-based access control
• Time tracking functionality
• A flexible issue tracking system
• Feeds and email notifications
• Multiple language support (Albanian, Arabic, Dutch, English, Danish, and so on)
MantisBT

MantisBT stands for Mantis Bug Tracker. It is a web-based bug tracking system, and it is also an open-source tool. MantisBT is used to track software defects. It is implemented in the PHP programming language.

Features of MantisBT
Some of the standard features are as follows:
• With the help of this tool, we have full-text search accessibility.
• Audit trails of changes made to issues
• It provides revision control system integration
• Revision control of text fields and notes
• Notifications
• Plug-ins
• Graphing of relationships between issues
Trac
Another defect/ bug tracking tool is Trac, which is also an open-source web-based tool. It is
written in the Python programming language. Trac supports various operating systems such as
Windows, Mac, UNIX, Linux, and so on. Trac is helpful in tracking the issues for software
development projects.

Through Trac we can access the code, view changes, and view the history. The tool supports multiple projects and includes a wide range of plugins that provide many optional features, which keeps the main system simple and easy to use.



Backlog
Backlog is widely used to manage IT projects and track bugs. It is mainly built for development teams to report bugs with complete details of the issues, comments, updates, and status changes. It is project management software.
Features of the Backlog tool are as follows:

• Gantt and burndown charts
• It supports Git and SVN repositories
• It has an IP access control feature
• It supports native iOS and Android apps


Design Entry and Exit Criteria for Test Cases
Entry Criteria:
Entry criteria give the prerequisite items that must be completed before testing can begin.

OR
Entry criteria are used to determine when a given test activity should start. They also cover the beginning of a level of testing, and when test design or test execution is ready to start.

Examples of Entry Criteria:

• Verify that the test environment is available and ready for use.
• Verify that the test tools installed in the environment are ready for use.
• Verify that testable code is available.
• Verify that test data is available and validated for correctness.
Exit Criteria:
Exit criteria define the items that must be completed before testing can be concluded. For example, after pressing the submit button, if the webpage navigates to the result page, then this is an exit criterion.

OR
Exit criteria are used to determine whether a given test activity has been completed or not. Exit criteria can be defined for all of the test activities, right from planning through specification and execution. Exit criteria should be part of the test plan and decided in the planning stage.

Examples of Exit Criteria:

• Verify that all planned tests have been run.
• Verify that the required level of requirement coverage has been met.
• Verify that there are no critical or high-severity defects left outstanding.
• Verify that all high-risk areas are completely tested.
• Verify that software development activities are completed within the projected cost.
• Verify that software development activities are completed within the projected timelines.
The test case design entry criterion is what must hold before giving the test inputs (i.e., the precondition before giving input).

The test case design exit criterion is the expected test output (i.e., the postcondition after executing the test case).
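A minimal sketch of this idea in an automated test, using an invented Cart class: the entry criterion is the precondition established before the input is applied, and the exit criterion is the expected output checked afterwards.

```python
# Illustrative only: Cart and its methods are invented for this sketch.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


def test_add_item_to_empty_cart():
    # Entry criterion / precondition: the cart exists and is empty.
    cart = Cart()
    assert cart.items == []

    # Test input / action.
    cart.add("book")

    # Exit criterion / postcondition: the expected output after execution.
    assert cart.items == ["book"]
```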
Designing Test Cases Using Excel
Why Excel?
Test case data can be managed in Excel. An Excel spreadsheet arranges the data in a row-and-column format.

Various templates are available for designing test cases; normally these templates are used for basic or manual testing processes.

The template may vary according to the organization and its requirements.

Refer to the Excel file for the template format.
What is a Test Case Template?
A test case template is a well-designed document for developing and better understanding the test case data for a particular test scenario. A good test case template maintains test artifact consistency for the test team and makes it easy for all stakeholders to understand the test cases. Writing test cases in a standard format lessens the test effort and the error rate. A standard test case format is also desirable when test cases are reviewed by experts.

Test Case Field Description

• Test Case ID: Each test case should be represented by a unique ID. To indicate the test type, follow a convention such as "TC_UI_1", indicating "User Interface Test Case #1".
• Test Priority (Low/Medium/High): It is useful while executing the test.
• Name of the Module: The name of the main module or sub-module being tested.
• Test Designed By: Tester's name.
• Date of Test Designed: Date when the test was designed.
• Test Executed By: Tester who executed the test.
• Date of Test Execution: Date when the test is to be executed.
• Name or Test Title: Title of the test case.
• Description/Summary of Test: The summary or purpose of the test in brief.
• Pre-condition: Any requirement that needs to be met before execution of this test case. List all the preconditions needed to execute this test case.
• Dependencies: Any dependencies on test requirements or other test cases.
• Test Steps: All the test steps in detail, written in the order in which they are to be executed. While writing test steps, ensure that you provide as much detail as you can.
• Test Data: Test data used as input for the test case. Provide different data sets with precise values to be used as input.
• Expected Results: The expected result, including any error or message that should appear on screen.
• Post-Condition: The state of the system after running the test case.
• Actual Result: The actual test result, filled in after test execution.
• Status (Pass/Fail): Mark this field as Failed if the actual result does not match the expected result.
• Notes: Any special conditions not covered by the above fields.
What is Feature Testing?

• A Software feature can be defined as the changes made in the system to


add new functionality or modify the existing functionality



How to Effectively Test a Feature?
• Understand the Feature: One should read the requirement or specification corresponding to that feature thoroughly.
• Build Test Scenarios: Testers should develop test cases exclusively to test the feature, so that coverage and traceability can be maintained.
• Prepare Positive and Negative Data Sets: Testers should have test data covering all possible negative, positive, and boundary cases before the start of testing.
• Know How It Is Implemented: Testers should know how the feature has been implemented in the application layer and the relevant changes to the back end, if any. This gives clarity on the impacted areas.
• Deploy the Build Early: Testers should start testing the feature early in the cycle and report defects, and the same process should be repeated throughout the release builds.
Chapter 2
Software Testing Strategies and Techniques
• Testability - Characteristics that lead to testable software
• Test characteristics
• Test case design for desktop, mobile, and web applications using Excel
• White Box Testing - Basis path testing, control structure testing
• Black Box Testing - Boundary value analysis, equivalence partitioning
• Differences between BBT & WBT


• Testability - Characteristics lead to testable software
• Five Characteristics to Build Testability in Software
• Adding Simplicity
• Simplicity means creating the most straightforward possible solutions to the
problems at hand. Reducing the complexity of a feature to deliver only the
required value helps testing minimize the scope of functionality that needs to be
covered.
• Never be afraid to ask about removing complexity from business requirements;
less is more in this situation. As part of the definition of “done”, we need to have
expected results in every story. If that is not feasible, we need to at least have a
clear idea of customer expectations.
• Improve Observability
• Observing the software and understanding different patterns gives us a
tremendous advantage to catch gaps or errors. Exploratory Testing can improve
observability. Sometimes, we need to learn from our applications; observing is
core to exploring multiple behaviors and paths while testing.
• Improving log files and tracking allows us to monitor system events and recreate
problems, an additional benefit of enhancing the ongoing supportability.
• Control
• Control is critical for testability, particularly so if required to perform any test automation.
Cleanly controlling the functionality to manage the state changes within the system in a
deterministic way is valuable to any testing efforts and is a basic element of a test
automation strategy.
• I suggest focusing on what we can expect (expected results). A simple approach:
grabbing customer-centric scenarios and identifying specific outcomes. This way, we can
do test automation. Otherwise, automating something unexpected or unpredictable is
chaotic.

• Be Knowledgeable
• As testers, we must be subject matter experts on the application, take advantage of that
learning or new user experience, and collaborate. It is crucial to share the knowledge with
the rest of the team and continuously learn from others. This is a never-ending journey.
• Involving testers brings a wealth of testing knowledge and context to any software
discussion. Team members must work together to understand the essential quality
attributes, critical paths, core components, and associated risks on a design that allows the
team to mitigate those risks in the most effective way.
• Testing Stability
• It is tough to test a system with high functional variability or high levels of operational faults. Nothing hinders testing like an unstable system. We cannot create automation test scripts or performance test scripts for applications that are continuously changing or failing.
• Stability can be tricky. Unstable systems can destroy the application’s
reputation and result in financial loss. Therefore, it is essential to get a stable
version of the application to create our tests, before starting any
test automation.​
• Characteristics of Software Test
Each test has its own characteristics. The following points, however, should be
noted.
• High probability of detecting errors:
To detect maximum errors, the tester should understand the software
thoroughly and try to find the possible ways in which the software can fail.
For example, in a program to divide two numbers, the possible way in which
the program can fail is when 2 and 0 are given as inputs and 2 is to be divided
by 0. In this case, a set of tests should be developed that can demonstrate an
error in the division operator.
• No redundancy:
Resources and testing time are limited in software development process.
Thus, it is not beneficial to develop several tests, which have the same
intended purpose. Every test should have a distinct purpose.
• Choose the most appropriate test:
There can be different tests that have the same intent but due to certain
limitations such as time and resource constraint, only few of them are used.
In such a case, the tests, which are likely to find more number of errors,
should be considered.
• Moderate:
• A test is considered good if it is neither too simple nor too complex. Many tests can be combined to form one test case; however, this can increase the complexity and leave many errors undetected. Hence, all tests should be performed separately.
• What are the different techniques of Software Testing?

• Black Box Testing is a software testing method in which the internal structure/
design/ implementation of the item being tested is not known to the tester

• White Box Testing is a software testing method in which the internal structure/
design/ implementation of the item being tested is known to the tester.
Black Box Testing vs. White Box Testing

1. Black box: the internal structure, program, or code is hidden and nothing is known about it. White box: the tester has knowledge of the internal structure, code, or program of the software.
2. Black box: mostly done by software testers. White box: mostly done by software developers.
3. Black box: no knowledge of implementation is needed. White box: knowledge of implementation is required.
4. Black box: can be referred to as outer or external software testing. White box: it is the inner or internal software testing.
5. Black box: it is functional testing of the software. White box: it is structural testing of the software.
6. Black box: can be initiated on the basis of the requirement specification document. White box: is started after the detailed design document is available.
7. Black box: no knowledge of programming is required. White box: knowledge of programming is mandatory.
8. Black box: it is behavior testing of the software. White box: it is logic testing of the software.
9. Black box: applicable to the higher levels of software testing. White box: generally applicable to the lower levels of software testing.
10. Black box: also called closed testing. White box: also called clear box testing.
11. Black box: least time consuming. White box: most time consuming.
12. Black box: not suitable or preferred for algorithm testing. White box: suitable for algorithm testing.
13. Black box: can be done by trial-and-error methods. White box: data domains along with inner or internal boundaries can be better tested.
14. Black box example: searching something on Google using keywords. White box example: checking and verifying loops by providing input.
• Working process of white box testing:
• Input: Requirements, Functional specifications, design documents, source
code.
• Processing: Performing risk analysis for guiding through the entire process.
• Proper test planning: Designing test cases so as to cover entire code. Execute
rinse-repeat until error-free software is reached. Also, the results are
communicated.
• Output: Preparing final report of the entire testing process.

• White Box Testing techniques:


Statement coverage
Branch Coverage
Condition Coverage
Multiple Condition Coverage
Basis Path Testing
Loop Testing
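As a quick illustration of the first two measures (the function and tests below are invented for this sketch, not taken from the slides): one test can execute every statement of a function with a single if, but branch coverage also requires a test in which the condition is false.

```python
def apply_discount(price, is_member):
    # One decision point, two branches: is_member True and is_member False.
    if is_member:
        price = price * 0.9
    return price


def test_member_discount():
    # Executes every statement of apply_discount (100% statement coverage).
    assert apply_discount(100, True) == 90


def test_non_member_price():
    # Also needed for 100% branch coverage: exercises the False branch.
    assert apply_discount(100, False) == 100
```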
• White Box Testing techniques:
1. Basis Path Testing:
Basis Path Testing is a white-box testing technique based on the control structure of a
program or a module. Using this structure, a control flow graph is prepared and the
various possible paths present in the graph are executed as a part of testing.
Therefore, by definition,
Basis path testing is a technique of selecting the paths in the control flow
graph, that provide a basis set of execution paths through the program or module.
Since this testing is based on the control structure of the program, it requires complete
knowledge of the program’s structure.
To design test cases using this technique, four steps are followed :
• Construct the Control Flow Graph
• Compute the Cyclomatic Complexity of the Graph
• Identify the Independent Paths
• Design Test cases from Independent Paths
Let’s understand each step one by one.
1. Control Flow Graph –
A control flow graph (or simply, flow graph) is a directed graph which represents the control structure of a program or module. A control flow graph (V, E) has V nodes/vertices and E edges. A control flow graph can also have:
• Junction Node – a node with more than one arrow entering it.
• Decision Node – a node with more than one arrow leaving it.
• Region – an area bounded by edges and nodes (the area outside the graph is also counted as a region).
• Below are the notations used while constructing a flow graph:
• Sequential Statements

• If – Then – Else –
• While – Do
• Cyclomatic Complexity –
The cyclomatic complexity V(G) is said to be a measure of the logical
complexity of a program. It can be calculated using three different formulae :
• 1. Formula based on edges and nodes
• V(G) = e - n + 2*P
• Where,
e is number of edges,
n is number of vertices,
P is number of connected components.
• For example, consider first graph given above
• where, e = 4, n = 4 and p = 1
So,
Cyclomatic complexity V(G)
= 4 - 4 + 2 * 1
= 2
2.Formula based on Decision Nodes :
V(G) = d + P
where,
d is number of decision nodes,
P is the number of connected components.
For example, consider first graph given above,
where, d = 1 and p = 1
So,
Cyclomatic Complexity V(G)
= 1 + 1
= 2
3. Formula based on Regions:
V(G) = number of regions in the graph
For example, consider first graph given above,
Cyclomatic complexity V(G)
= 1 (for Region 1) + 1 (for Region 2)
= 2
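A small illustrative sketch that computes V(G) = e - n + 2*P from an adjacency-list flow graph. The graph below is invented; it has 4 edges, 4 nodes, and 1 connected component, so V(G) = 2, matching the first example above.

```python
# Illustrative sketch: a flow graph as an adjacency list (node -> successor nodes).
flow_graph = {
    1: [2, 3],   # decision node: two arrows leave it
    2: [4],
    3: [4],      # node 4 is a junction node: two arrows enter it
    4: [],
}


def cyclomatic_complexity(graph, connected_components=1):
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2 * connected_components


print(cyclomatic_complexity(flow_graph))  # 4 - 4 + 2*1 = 2
```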
• Example 2
V(G) = 4 (Using any of the above formulae)
No of independent paths = 4
• #P1: 1 – 2 – 4 – 7 – 8
• #P2: 1 – 2 – 3 – 5 – 7 – 8
• #P3: 1 – 2 – 3 – 6 – 7 – 8
• #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
• Control Loop Testing:
• Loops are widely used and these are fundamental to many algorithms hence, their
testing is very important. Errors often occur at the beginnings and ends of loops.

• Simple loops: For simple loops of size n, test cases are designed that:
• Skip the loop entirely
• Only one pass through the loop
• 2 passes
• m passes, where m < n
• n-1 and n+1 passes
• Nested loops: For nested loops, all the loops are set to their minimum count and
we start from the innermost loop. Simple loop tests are conducted for the
innermost loop and this is worked outwards till all the loops have been tested.
• Concatenated loops: Independent loops, one after another. Simple loop tests
are applied for each.
If they’re not independent, treat them like nesting.
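A minimal sketch of simple-loop testing for an invented function that sums the first n items of a list; the chosen pass counts follow the guidance above (skip, 1, 2, m < n, n-1, n+1 for a loop of size n = 5).

```python
import pytest


def sum_first(values, n):
    # Loop under test: iterates n times over the input list.
    total = 0
    for i in range(n):
        total += values[i]
    return total


# Pass counts per the simple-loop guidance: skip, 1, 2, m < n, n-1, n+1.
@pytest.mark.parametrize("passes", [0, 1, 2, 3, 4, 6])
def test_sum_first_loop_passes(passes):
    data = [1, 2, 3, 4, 5]                  # loop "size" n = 5
    if passes > len(data):
        with pytest.raises(IndexError):     # n+1 passes overruns the list
            sum_first(data, passes)
    else:
        assert sum_first(data, passes) == sum(data[:passes])
```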
• Black Box Testing Techniques
Black box testing is a type of software testing in which the
functionality of the software is not known. The testing is done
without the internal knowledge of the products.
Black box testing can be done in following ways:
1. Syntax Driven Testing
2. Equivalence partitioning
3. Boundary value analysis
4. Cause effect Graphing
5. Requirement based testing
6. Compatibility testing
• Equivalence Partitioning –
• It is often seen that many types of inputs work similarly, so instead of testing all of them separately, we can group them together and test only one input from each group.
• The idea is to partition the input domain of the system into a number of equivalence classes such that each member of a class works in a similar way, i.e., if a test case in one class results in some error, other members of the class would also result in the same error.
• The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into at least two sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select one valid input like 49 and one invalid input like 104.
2. Generating test cases –
• (i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other.
• To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
• A whole number which is a perfect square – the output will be an integer.
• A whole number which is not a perfect square – the output will be a decimal number.
• Positive decimals.
• (b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers, like "a", "!", ";", etc.
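A minimal sketch of test cases derived from these equivalence classes, one representative input per class, using Python's math.sqrt as the function under test (the representative values are chosen for illustration).

```python
import math

import pytest


def test_perfect_square_gives_integer_result():
    assert math.sqrt(16) == 4.0                      # valid: perfect square


def test_non_perfect_square_gives_decimal_result():
    assert math.sqrt(2) != int(math.sqrt(2))         # valid: not a perfect square


def test_positive_decimal_input():
    assert math.sqrt(2.25) == 1.5                    # valid: positive decimal


def test_negative_number_is_rejected():
    with pytest.raises(ValueError):                  # invalid: negative input
        math.sqrt(-4)


def test_non_numeric_input_is_rejected():
    with pytest.raises(TypeError):                   # invalid: non-numeric input
        math.sqrt("a")
```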
• Boundary Value Analysis
• Boundary value analysis is one of the widely used test case design techniques for black box testing. It is used to test boundary values because input values near the boundary have higher chances of error.
• Whenever we do the testing by boundary value analysis, the tester focuses
on, while entering boundary value whether the software is producing correct
output or not.
• Boundary values are those that contain the upper and lower limit of a
variable. Assume that, age is a variable of any function, and its minimum
value is 18 and the maximum value is 30, both 18 and 30 will be considered
as boundary values.
• The basic assumption of boundary value analysis is, the test cases that are
created using boundary values are most likely to cause an error.
• Example
• Here, 18 and 30 are the boundary values, which is why the tester pays more attention to them; but this doesn't mean that middle values like 19, 20, 21, 27, and 29 are ignored. Test cases are developed for each and every value in the range.
• Testing of boundary values is done by making valid and invalid partitions. Invalid partitions are tested because testing the output under adverse conditions is also essential.
• Let's understand via practical:
• Imagine, there is a function that accepts a number between 18 to 30, where
18 is the minimum and 30 is the maximum value of valid partition, the
other values of this partition are 19, 20, 21, 22, 23, 24, 25, 26, 27, 28 and
29.
• The invalid partition consists of the numbers which are less than 18 such as
12, 14, 15, 16 and 17, and more than 30 such as 31, 32, 34, 36 and 40.
• Tester develops test cases for both valid and invalid partitions to capture
the behavior of the system on different input conditions.
The software system passes the test if it accepts a valid number and gives the desired output; if it does not, the test is unsuccessful. In the other scenario, the software system should not accept invalid numbers, and if the entered number is invalid, it should display an error message.
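A minimal sketch of boundary value tests for the age example above. The is_valid_age function is invented for illustration; it simply encodes the valid range 18 to 30, and the test data sits just below, on, and just above each boundary.

```python
import pytest


def is_valid_age(age):
    # Hypothetical function under test: valid range is 18 to 30 inclusive.
    return 18 <= age <= 30


@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (29, True),   # just below the upper boundary
    (30, True),   # upper boundary
    (31, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```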
• Test Case Design
• A test case is a set of actions performed on a system to determine if it
satisfies software requirements and functions correctly. The purpose
of a test case is to determine if different features within a system are
performing as expected and to confirm that the system satisfies all
related standards, guidelines and customer requirements.
• A test case template is a document that comes under one of the test
artifacts, which allows testers to develop the test cases for a
particular test scenario in order to verify whether the features of an
application are working as intended or not. Test cases are the set of
positive and negative executable steps of a test scenario which has a
set of pre-conditions, test data, expected results, post-conditions, and
actual results.
• Test Scenario: A test scenario gives the idea of what we have to test. A test scenario is like a high-level test case.
• For example: Verify the login functionality of the Gmail account.
Test Case Template
• Test case ID: Unique ID is required for each test case. Follow some
conventions to indicate the types of the test. For Example, ‘TC_UI_1’
indicating ‘user interface test case #1
• Test priority (Low/Medium/High): This is very useful during test execution.
Test priorities for business rules and functional test cases can be medium or
higher, whereas minor user interface cases can be of a low priority. Testing
priorities should always be set by the reviewer.
• Module Name: Mention the name of the main module or the sub-module.
• Test Designed By: Name of the tester.
• Test Designed Date: Date when it was written.
• Test Executed By: Name of the tester who executed this test. To be filled only after test execution.
• Test Execution Date: Date when the test was executed.
• Test Title/Name: Test case title. For example, verify the login page with a valid username
and password.
• Test Summary/Description: Describe the test objective in brief.
• Pre-conditions: Any prerequisite that must be fulfilled before the execution of this
test case. List all the pre-conditions in order to execute this test case successfully.
• Dependencies: Mention any dependencies on other test cases or test requirements.
• Test Steps: List all the test execution steps in detail. Write test steps in the order in
which they should be executed. Make sure to provide as many details as you can.
• Test Data: Use of test data as an input for this test case. You can provide different data sets
with exact values to be used as an input.
• Expected Result: What should be the system output after test execution? Describe the
expected result in detail including the message/error that should be displayed on the
screen.
• Post-condition: What should be the state of the system after executing this
test case?
• Actual result: The actual test result should be filled after test execution.
Describe the system behavior after test execution.
• Status (Pass/Fail): If the actual result is not as per the expected result, then
mark this test as failed. Otherwise, update it as passed.
• Notes/Comments/Questions: If there are any special conditions to support
the above fields, which can’t be described above or if there are any
questions related to expected or actual results then mention them here.
• Example
• Let’s assume that you are testing the login functionality of any web
application, say Facebook.
• Please note: examples of test scenarios and test cases for desktop, mobile, and web applications will be given in a separate file.
Chapter 3
Levels of Testing
• Unit testing
• Integration testing - Top-Down, Bottom-up integration
• System Testing - performance, regression, Load/Stress
testing, Security testing, Internationalization testing.
• Acceptance Testing- Alpha, Beta Testing
• Usability and accessibility testing - Configuration,
compatibility testing
1. Unit Testing
• It is a level of the software testing process where individual
units/components of a software/system are tested. The
purpose is to validate that each unit of the software
performs as designed.
• A unit is the smallest testable part of software. It usually
has one or a few inputs and usually a single output.
• Unit Testing is normally performed by software developers
themselves or their peers. In rare cases it may also be
performed by independent software testers.
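A minimal unit test sketch using Python's built-in unittest framework; the add function below is an invented unit under test, not something from the slides.

```python
import unittest


def add(a, b):
    # Unit under test: the smallest testable piece of the software.
    return a + b


class TestAdd(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)


if __name__ == "__main__":
    unittest.main()
```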
2. Integration Testing
• It is a level of the software testing process where
individual units are combined and tested as a group.
The purpose of this level of testing is to expose faults
in the interaction between integrated units. Drivers
and stubs are used in integration testing.
• A driver is a piece of software which calls the software under test, passing the test data as inputs.
• A stub is a temporary or dummy piece of code that stands in for a component required by the software under test so that it can operate properly.
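A small illustrative sketch of a driver and a stub (all names are invented): the stub stands in for a lower-level tax component that is not yet available, while the driver calls the unit under test and feeds it test data.

```python
# Unit under integration test: depends on a lower-level tax component.
def compute_total(amount, tax_service):
    return amount + tax_service.tax_for(amount)


# Stub: dummy stand-in for the real, not-yet-available tax component.
class TaxServiceStub:
    def tax_for(self, amount):
        return round(amount * 0.10, 2)   # fixed, predictable behaviour


# Driver: calls the software under test and passes the test data as inputs.
def driver():
    stub = TaxServiceStub()
    for amount, expected in [(100.0, 110.0), (0.0, 0.0)]:
        result = compute_total(amount, stub)
        print(amount, result, "PASS" if result == expected else "FAIL")


if __name__ == "__main__":
    driver()
```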
• Integration testing is done in two ways :
• Top-Down Testing:
• Components at the top layer are tested first; they are then integrated with the components just below them, the combination is tested, and so on.
• A top-down approach is essentially the breaking down of a system to gain insight into its compositional sub-systems.
• The benefits of top-down testing are that design flaws are detected early, quality improves, user satisfaction increases, and the requirements match the developed software.
• Stubs are used in top-down testing.
• Example –
In the top-down integration testing, if depth-first approach is adopted
then we will start integration from module M1. Then we will integrate
M2, then M3, M4, M5, M6, and at last M7.
• Bottom-up Testing:
• The components at the bottom level are tested first. Afterwards, these lower-level components are integrated with the components just above them, and combined testing is done.
• In a bottom-up approach the individual base elements of the system are first specified in great detail.
• These elements are then linked together to form larger subsystems.
• Drivers are used in bottom-up testing.
• Example –
In the end, modules or components are combined together to form cluster 1 and cluster 2. After this, each cluster is tested with the help of a control program (driver). The cluster is present below the high-level module or driver. After testing, the driver is removed and the clusters are combined and moved upwards with modules.
3. System Testing
• It is a level of the software testing process where a complete,
integrated system/software is tested.
• The purpose of this test is to evaluate the system’s compliance
with the specified requirements.

• Various Testing Types in System Testing


1. Performance Testing : This term is often used interchangeably
with ‘stress’ and ‘load’ testing. Performance Testing is done to check
whether the system meets the performance requirements.
Different performance and load tools are used to do this testing.
2. Regression Testing:
Testing an application as a whole after a modification to any module or functionality is termed regression testing. It is difficult to cover the entire system in regression testing, so automation testing tools are typically used for this type of testing.
3. Load testing :
It is a type of non-functional testing and the objective of Load testing
is to check how much of load or maximum workload a system can
handle without any performance degradation.
Load testing helps to find the maximum capacity of the system under
specific load and any issues that cause the software performance
degradation. Load testing is performed using tools like JMeter,
LoadRunner, WebLoad, Silk performer etc.
4. Stress Testing:
This testing is done when a system is stressed beyond its specifications in order to check how and when it fails. It is performed under heavy load, such as data volumes beyond storage capacity, complex database queries, or continuous input to the system or database.
5. Security Testing:
It is a type of testing performed by a special team of testers; it checks whether the system can be penetrated by any form of hacking.
• Security testing is done to check how secure the software, application, or website is from internal and external threats. This includes how well the software is protected from malicious programs and viruses, and how secure and strong the authorization and authentication processes are.
• It also checks how the software behaves under hacker attacks and malicious programs, and how data security is maintained after such an attack.
6. Internationalization Testing :
Internationalization testing is a non-functional testing technique. It
is a process of designing a software application that can be adapted
to various languages and regions without any changes.
Localization, internationalization and globalization are highly
interrelated.
4. Acceptance Testing:
• Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
a. Alpha testing:
• Alpha testing takes place at the developer's site and is carried out by internal teams, before release to external customers. This testing is performed without the involvement of the development team.
b. Beta Testing:
• Beta testing, also known as user testing, takes place at the end users' site and is performed by the end users to validate usability, functionality, compatibility, and reliability.
• Usability Testing :
It is a type of testing done from an end-user’s perspective to determine if the system
is easily usable.
• Accessibility Testing:
The aim of accessibility testing is to determine whether the software or application is accessible to disabled people or not. Here, disability covers users who are deaf, color blind, blind, mentally disabled, elderly, and other disabled groups. Various checks are performed, such as font size for the visually impaired, and color and contrast for color blindness.
• Configuration Testing :
Configuration testing is a method of testing a system under development on multiple
machines that have different combinations or configurations of hardware and
software. The performance of the system or an application is tested against each of
the supported hardware & software configurations.
• Compatibility Testing
It is a testing type in which it validates how software behaves and runs in a different
environment, web servers, hardware, and network environment. Compatibility
testing ensures that software can run on a different configuration, different database,
different browsers and their versions. Compatibility testing is performed by the
testing team.
Chapter 4
Testing Web Applications
• Testing Web Applications Syllabus

• Dimension of Quality,
• Error within a WebApp Environment
• Testing Strategy for WebApp
• Test Planning
• The Testing Process –an overview
• What is Web Testing?
• Web testing, or website testing, is checking your web application or website for potential bugs before it is made live and accessible to the general public. Web testing checks the functionality, usability, security, compatibility, and performance of the web application or website.
• Web Application Testing – Strategies :
1. Functionality Testing - Below are some of the checks that are performed (the list is not exhaustive):
ØVerify there is no dead page or invalid redirects.
ØFirst check all the validations on each field.
ØWrong inputs to perform negative testing.
ØVerify the workflow of the system.
ØVerify the data integrity.
• Web Application Testing – Techniques continued......
2. Usability testing - To verify how easy the application is to use.
Test the navigation and controls.
Content checking.
Check for user intuition.

3. Interface testing - Performed to verify the interface and the dataflow from one
system to other.

4. Performance testing - Performed to verify the server response time and throughput
under various load conditions.
Load testing - It is the simplest form of testing conducted to understand
the behaviour of the system under a specific load. Load testing will result in
measuring important business critical transactions and load on the database,
application server, etc. are also monitored.
Stress testing - It is performed to find the upper limit capacity of the system and also
to determine how the system performs if the current load goes well above the
expected maximum.
• Performance testing Continued....
Soak testing - Soak testing, also known as endurance testing, is performed to determine the system parameters under continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.
Spike testing - Spike testing is performed by increasing the number of users suddenly by a very large
amount and measuring the performance of the system. The main aim is to determine whether the system
will be able to sustain the work load.
6. Security testing - Performed to verify that the application is secure on the web, as data theft and unauthorized access are common issues. Below are some of the areas checked to verify the security level of the system:
 Injection
 Broken Authentication and Session Management
Cross-Site Scripting (XSS)
Insecure Direct Object References
 Security Misconfiguration
 Sensitive Data Exposure
Missing Function Level Access Control
Cross-Site Request Forgery (CSRF)
 Using Components with Known Vulnerabilities
Unvalidated Redirects and Forwards
• The Testing Process
• What are the Different Phases in the Structured Software Testing
Life Cycle?
Requirement Analysis
The first step in the Software Testing Life Cycle is to identify which are
the features of the Software that can be tested and how.
Any requirement of the Software that is revealed to be un-testable is
identified at this stage, and subsequent mitigation strategies are
planned. The Requirements that are arrived at here can either be
Functional (related to the basic functions the software is supposed to perform) in nature or Non-Functional (related to system performance, security, or availability).
Deliverables
• RTM – Requirement Traceability Matrix.
• Automation Feasibility Report
• Test Planning
Now that the testing team has a list of requirements that are to be
tested, the next step for them is to devise activities and resources, which
are crucial to the practicality of the testing process. This is where the
metrics are also identified, which will facilitate the supervision of the testing
process. A senior Quality Assurance Manager will be involved at this stage
to determine the cost estimates for the project. It is only after running the
plan by the QA manager that the Test Plan will be finalized.
Deliverables
• Test Plan or Strategy Document
• Effort Estimation Document
• Test Analysis
This stage answers the question 'What are we testing?'. The test conditions are understood and assessed not just through the requirements that have been identified at the first stage, but also through other related test bases like the product's risks. Other factors that are taken into account while arriving at suitable test conditions are –

• Different levels and depth of testing


• Complexity levels of the product
• Risks associated with the product and the project
• The involvement of the Software Development Life Cycle
• Skillset, knowledge, expertise, and experience of the team
• Availability of the different stakeholders.
• Test Design
If the Software Testing Process were answers to a series of questions (which it
is), this stage would answer the question – ‘How to go about testing the
Software?’
The answer, however, depends on a lot of tasks that need to be completed at
this point in the process.
These are –
• Working with the predefined test conditions. This requires breaking down the test conditions into multiple sub-conditions so that all areas get their due coverage.
• Identifying and collecting all data related to the test, and using it to set up a
test environment conducive to the software.
• Developing metrics to track the requirements and test coverage.
• Test Implementation

Now that all the basic structuring work has been done, the next step is to
plan how the test structure that has been devised will be implemented.
This means that all test cases are to be arranged according to their priority
and a preliminary review is in order to ensure that all test cases are
accurate in themselves and in relation to other test cases.
If needed the test cases and test scripts will undergo an additional
reworking to work with the larger picture.
Deliverables
• Environment ready with test data set up
• Smoke Test results
• Test Execution

When all is said and done, this is where the real action begins. All the
planning and management culminates into this – the Execution of the
Software Test. This involves a thorough testing of the Software, yes, but also
a recording of the test results at every point of the execution process.
So, not only will you be keeping a record of the defects or errors as and when
they arise, but you will also be simultaneously tracking your progress with
the traceability metrics that have been identified in the earlier stages.
• Test Conclusion
This is where the Exit criteria begin by ensuring that all results of the
Software Testing Process are duly reported to the concerned stakeholders.
There are different ways of making regular reports, weekly or daily. A
consensus is to be arrived at between the stakeholders and the testers, to
ensure that parties are up-to-date with which stage is the Software Testing
Process at.
Depending on the Project Managers and their awareness of the Software
Testing Process, the reports can be intensely technical or written in
easily understandable non-technical language for a layman.
Deliverables
• Completed RTM with the execution status
• Test cases updated with results
• Defect Reports
• Test Cycle Closure
This last stage is more of a send-off for the Software Testing Process. It is where you tick off the checklist and make sure all actions that were started during the process have reached their completion.
This involves making concluding remarks on all actions of the testing
process with respect to their execution and/or mitigation.
Also, a revisiting of the entire Software Testing Process as it concludes,
will help the team in understanding and reviewing their activities so that
lessons can be learned from the testing process and similar mistakes (if
any) be avoided in the next Software Testing Cycle the team undertakes.
Deliverables
• Test Closure Report
• Test Metrics
• Test Plan
• A Test Plan is a detailed document that describes the test strategy,
objectives, schedule, estimation, deliverables, and resources required
to perform testing for a software product. Test Plan helps us
determine the effort needed to validate the quality of the application
under test. The test plan serves as a blueprint to conduct software
testing activities as a defined process, which is minutely monitored
and controlled by the test manager.
• As per ISTQB definition: “Test Plan is A document describing the
scope, approach, resources, and schedule of intended test activities.”
• How to write a Test Plan
• You already know that making a Test Plan is the most important task of the Test Management Process. Follow the steps below to create a test plan as per IEEE 829:
• Analyze the product
• Design the Test Strategy
• Define the Test Objectives
• Define Test Criteria
• Resource Planning
• Plan Test Environment
• Schedule & Estimation
• Determine Test Deliverables
• Step 1) Analyze the product
• How can you test a product without any information about it? The answer
is Impossible. You must learn a product thoroughly before testing it.
• The product under test is Guru99 banking website. You should research
clients and the end users to know their needs and expectations from the
application
• Who will use the website?
• What is it used for?
• How will it work?
• What software and hardware does the product use?
• Step 2) Develop Test Strategy
• Test Strategy is a critical step in making a Test Plan in Software Testing. A Test
Strategy document, is a high-level document, which is usually developed by
Test Manager. This document defines:
• The project’s testing objectives and the means to achieve them
• Determines testing effort and costs
• Step 2.1) Define Scope of Testing
• Before the start of any test activity, scope of the testing should be known.
You must think hard about it.
• The components of the system to be tested (hardware, software,
middleware, etc.) are defined as “in scope“
• The components of the system that will not be tested also need to be clearly
defined as being “out of scope.”
• Defining the scope of your testing project is very important for all
stakeholders. A precise scope helps you
• Give everyone confidence and accurate information about the testing you are doing
• All project members will have a clear understanding about what is tested and
what is not
• Step 2.2) Identify Testing Type
• A Testing Type is a standard test procedure that gives an expected test
outcome.
• Each testing type is formulated to identify a specific type of product bugs.
But, all Testing Types are aimed at achieving one common goal “Early
detection of all the defects before releasing the product to the customer”
• There are many testing types for testing a software product. Your team cannot have enough effort to handle all kinds of testing. As Test Manager, you must set the priority of the testing types:
• Which testing types should be focused on for web application testing?
• Which testing types should be ignored to save cost?
• Step 2.3) Document Risk & Issues
• Risk is future’s uncertain event with a probability of occurrence and
a potential for loss. When the risk actually happens, it becomes the ‘issue’.
• In the article Risk Analysis and Solution, you have already learned about the
‘Risk’ analysis in detail and identified potential risks in the project.
• In the QA Test Plan, you will document those risks

• Step 2.4) Create Test Logistics


• In Test Logistics, the Test Manager should answer the following questions:
• Who will test?
• When will the test occur?
• Step 3) Define Test Objective

• Test Objective is the overall goal and achievement of the test execution.
The objective of the testing is finding as many software defects as possible;
ensure that the software under test is bug free before release.
• To define the test objectives, you should do 2 following steps
• List all the software features (functionality, performance, GUI…) which may need to be tested.
• Define the target or the goal of the test based on the above features.
• Step 4) Define Test Criteria

• Test criteria are a standard or rule on which a test procedure or test judgment can be based. There are two types of test criteria, as follows:
• Suspension Criteria
• Specify the critical suspension criteria for a test. If the suspension criteria are
met during testing, the active test cycle will be suspended until the criteria
are resolved.
• Exit Criteria
• It specifies the criteria that denote a successful completion of a test phase.
The exit criteria are the targeted results of the test and are necessary before
proceeding to the next phase of development. Example: 95% of all critical test
cases must pass.
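• A minimal sketch (Python, illustrative figures only) of checking the example exit criterion above, "95% of all critical test cases must pass":

    # Hypothetical helper: returns True when the pass rate of critical test
    # cases meets or exceeds the required exit-criterion threshold.
    def exit_criteria_met(passed_critical, total_critical, required_pass_rate=0.95):
        if total_critical == 0:
            return False
        return passed_critical / total_critical >= required_pass_rate

    print(exit_criteria_met(passed_critical=192, total_critical=200))  # True  (96% passed)
    print(exit_criteria_met(passed_critical=180, total_critical=200))  # False (90% passed)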
• Step 5) Resource Planning
• A resource plan is a detailed summary of all types of resources required to complete the project tasks. Resources can be the people, equipment and materials needed to complete a project.
• Resource planning is an important factor in test planning because it helps determine the number of resources (employees, equipment, etc.) to be used for the project. The Test Manager can then make a correct schedule and estimation for the project.
• Step 6) Plan Test Environment
• What is a Test Environment?
• A testing environment is a setup of software and hardware on which the testing team executes test cases. The test environment includes the real business and user environment as well as physical environments, such as servers and the front-end running environment.
• Step 7) Schedule & Estimation
• In the article Test Estimation, you already used some techniques to estimate the effort needed to complete the project. Now include that estimation, as well as the schedule, in the test plan.
• In the test estimation phase, suppose you break the whole project down into small tasks and add an effort estimate for each task, as below:
Task | Members | Estimated effort
Create the test specification | Test Designer | 170 man-hours
Perform test execution | Tester, Test Administrator | 80 man-hours
Test report | Tester | 10 man-hours
Test delivery | | 20 man-hours
Total | | 280 man-hours
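• A minimal sketch (Python) of turning the effort estimates above into a rough schedule; the team size and working hours are assumptions for illustration, not figures from the estimation table:

    # Effort figures taken from the estimation table above (man-hours).
    estimates = {
        "Create the test specification": 170,
        "Perform test execution": 80,
        "Test report": 10,
        "Test delivery": 20,
    }

    team_size = 2       # assumed number of available testers
    hours_per_day = 8   # assumed productive hours per person per day

    total_effort = sum(estimates.values())                      # 280 man-hours
    duration_days = total_effort / (team_size * hours_per_day)  # 17.5 working days

    print(f"Total effort: {total_effort} man-hours")
    print(f"Rough duration with {team_size} testers: {duration_days:.1f} working days")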
• Step 8) Test Deliverables
• Test Deliverables is a list of all the documents, tools and other components that have to be developed and maintained in support of the testing effort.
• There are different test deliverables at every phase of the software development lifecycle.
• Test deliverables provided before the testing phase:
Test plan document
Test case documents
Test design specifications
• Test deliverables provided during testing:
Test scripts
Simulators
Test data
Test traceability matrix
Error logs and execution logs
• Test deliverables provided after the testing cycle is over:
Test results/reports
Defect report
Installation/test procedure guidelines
Release notes
Chapter-5
Agile Testing
Syllabus Chap:5 Agile Testing
• Agile Testing,
• Difference between Traditional and Agile testing,
• Agile principles and values
• Agile Testing Quadrants
• Automated Tests.
The Agile Manifesto
• We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
• The 12 principles articulated in the Agile Manifesto are:
• Satisfying customers through early and continuous delivery of
valuable work.
• Breaking big work down into smaller tasks that can be completed
quickly.
• Recognizing that the best work emerges from self-organized teams.
• Providing motivated individuals with the environment and support
they need and trusting them to get the job done.
• Creating processes that promote sustainable efforts.
• Maintaining a constant pace for completed work.
• Welcoming changing requirements, even late in a project.
• Assembling the project team and business owners on a daily basis
throughout the project.
• The 12 principles articulated in the Agile Manifesto continue...
• Having the team reflect at regular intervals on how to become more
effective, then tuning and adjusting behavior accordingly.
• Measuring progress by the amount of completed work.
• Continually seeking excellence.
• Harnessing change for a competitive advantage.
https://www.guru99.com/agile-testing-a-beginner-s-guide.html
• What is Agile Testing?
• It is a testing practice that follows the rules and principles of agile
software development.
• Unlike the Waterfall method, Agile Testing can begin at the start of
the project with continuous integration between development and
testing.
• Agile Testing methodology is not sequential (in the sense that it is executed only after the coding phase) but continuous.
• Difference between Traditional and Agile testing
1. Traditional testing follows a top-down approach and a more predictive model, wherein testing is executed step by step. Agile testing follows an iterative approach and an adaptive model.
2. In traditional testing, testing is performed once software development is completed. Agile testing follows a test-first philosophy, wherein defects are fixed during each sprint and then released.
3. In traditional testing, the team tests different modules of the software separately. In agile testing, the team works together and collaborates in an open workspace.
4. The requirements stated in traditional testing are concrete and not easily modified. Agile testing has fixed, yet flexible, requirements that adapt easily to changing business and user needs.
5. In traditional testing, any changes or modifications are implemented in the next release of the module. In agile testing, modifications are implemented during the next sprint of the testing cycle.
6. In traditional testing, unit testing is executed for each module, followed by integration and system testing. In agile testing, the test team is integrated with the Scrum team, which helps produce more accurate results.
7. Tools are considered a luxury in traditional testing, as the focus is mainly on manual testing. In agile testing, tools are used frequently to keep up with the pace of development and deliver results quickly.
8. In traditional testing, risk management is handled less effectively. Agile testing ensures efficient, effective and timely risk management.
9. In traditional testing, feedback is mainly taken from end users once testing is completed. Agile testing offers accurate and efficient feedback, which provides a better understanding of the testing process and ensures product quality.
10. Interaction among team members is scarce in traditional testing, as testing is executed in phases. In agile testing, there is continuous interaction among team members.
11. Traditional testing requires comprehensive and extensive documentation and reporting. Agile testing requires minimal documentation and reporting.
12. Traditional testing is a time-consuming process that usually costs more effort and money. Agile testing prevents the expenditure of excessive time, effort and money.
13. Traditional testing ensures product quality but can delay product delivery. Agile testing ensures rapid delivery as well as software quality.
• Agile Testing Quadrants
• The agile testing quadrants are a tool (or manual), designed by Brian Marick, that divides the whole agile testing methodology into four basic quadrants. The agile testing quadrants help the whole team communicate and deliver a high-quality product quickly. With the help of the quadrants, the whole testing process can be explained in easy-to-understand language, and the whole team can work on the product effectively. These quadrants are:
• Quadrant 1: Technology-facing tests that support the team
• Quadrant 2: Business-facing tests that support the team
• Quadrant 3: Business-facing tests that critique the product
• Quadrant 4: Technology-facing tests that critique the product
• The agile testing quadrants separate the whole process in four Quadrants
and help to understand how agile testing is performed.
• Agile Quadrant I – Internal code quality is the main focus of this quadrant. It consists of test cases that are technology driven and are implemented to support the team. It includes:
• 1. Unit tests
• 2. Component tests
• Agile Quadrant II – It contains test cases that are business driven and are implemented to support the team. This quadrant focuses on the requirements. The kinds of tests performed in this phase are:
• 1. Testing of examples of possible scenarios and workflows
• 2. Testing of the user experience, such as prototypes
• 3. Pair testing
• Agile Quadrant III – This quadrant provides feedback to quadrants one and two. The test cases can be used as the basis for automation testing. In this quadrant, many rounds of iteration reviews are carried out, which builds confidence in the product. The kinds of testing done in this quadrant are:
1. Usability Testing
2. Exploratory Testing
3. Pair testing with customers
4. Collaborative testing
5. User acceptance testing
• Agile Quadrant IV – This quadrant concentrates on the non-functional requirements such as
performance, security, stability, etc. With the help of this quadrant, the application is made to deliver
the non-functional qualities and expected value.
1. Non-functional tests such as stress and performance testing
2. Security testing with respect to authentication and hacking
3. Infrastructure testing
4. Data migration testing
5. Scalability testing
6. Load testing
Quadrant 1:
• Quadrant 1 consists of all the test cases that are technology driven. These are performed in order to support the team.
• Developers' involvement is very important in this quadrant, as code quality is the main focus here.
• Quadrant 1 is associated with automated testing and covers tests such as unit tests, component tests, API tests and web services testing (a minimal unit-test sketch follows this list).
• Instant feedback is obtained in this quadrant so that quality of code
can be improved easily.
• This quadrant helps to improve the design of the product without
affecting its functionality.
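• A minimal Quadrant 1 sketch (Python with pytest): a technology-facing unit test that gives the team fast feedback on code quality. The Account class is hypothetical and stands in for a real banking module:

    import pytest

    class Account:
        # Hypothetical production code under test.
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount <= 0 or amount > self.balance:
                raise ValueError("invalid withdrawal amount")
            self.balance -= amount
            return self.balance

    def test_withdraw_reduces_balance():
        account = Account(balance=100)
        assert account.withdraw(40) == 60

    def test_withdraw_more_than_balance_is_rejected():
        account = Account(balance=100)
        with pytest.raises(ValueError):
            account.withdraw(500)

• Run with the pytest command; both tests pass in milliseconds, which is the kind of instant feedback this quadrant aims for.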
Quadrant 2:
• Quadrant 2 consists of all the test cases that are business driven and are performed to support the team as well as the customers.
• Most projects start their work from this quadrant.
• The main focus of this quadrant is on the business requirements.
• The tester works closely with the customer to gather the requirements and build test cases accordingly.
• Quadrant 2 is associated with functional testing, story testing, prototypes & simulations and pair testing.
• In this quadrant, both manual and automated testing are used to address the business requirements (a minimal story-test sketch follows).
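• A minimal Quadrant 2 sketch (Python with pytest): a business-facing story test phrased in Given/When/Then terms. The login_service stub is hypothetical and stands in for the real application under test:

    def login_service(username, password):
        # Hypothetical stub: accepts a single known user for illustration only.
        return username == "alice" and password == "s3cret"

    def test_registered_user_can_log_in():
        # Given a registered user
        username, password = "alice", "s3cret"
        # When the user submits valid credentials
        logged_in = login_service(username, password)
        # Then the user reaches the home page (represented here by a True result)
        assert logged_in

    def test_unknown_user_is_rejected():
        assert not login_service("mallory", "wrong-password")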
Quadrant 3:
• Quadrant 3 consists of all the test cases that are business
driven and are performed to Critique the product.
• The main focus of this quadrant is to provide feedback to Quadrant 1 and Quadrant 2.
• Manual testing based on the tester's logical thinking, intuition and the user requirements is done to evaluate the application.
• Quadrant 3 is associated with Pair testing with
customers, Exploratory Testing, Usability Testing, User
Acceptance Testing, Collaborative Testing and alpha &
beta testing.
Quadrant 4:
• Quadrant 4 consists of all the test cases that are technology driven and are performed to critique the product.
• Quadrant 4 focuses mainly on the non-functional requirements such as performance, security, stress, maintainability, stability, etc.
• This quadrant is responsible for delivering the final product to the customer.
• This quadrant is associated with performance testing, load testing,
stress testing, maintainability testing, infrastructure testing, data
migration testing, security testing, reliability testing,
recovery testing and many more.
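• A minimal Quadrant 4 sketch (Python): a crude response-time check against a 95th-percentile threshold. The process_request function and the threshold are hypothetical; real performance and load testing would normally use a dedicated tool such as JMeter or Locust:

    import time

    def process_request():
        # Hypothetical workload standing in for a real request handler.
        sum(i * i for i in range(10_000))

    def test_p95_latency_under_threshold(samples=200, threshold_seconds=0.05):
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            process_request()
            timings.append(time.perf_counter() - start)
        timings.sort()
        p95 = timings[int(0.95 * len(timings)) - 1]  # 95th-percentile latency
        assert p95 < threshold_seconds, f"p95 latency {p95:.4f}s exceeds threshold"

    if __name__ == "__main__":
        test_p95_latency_under_threshold()
        print("latency check passed")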
Agile Automation Testing
• Agile Automation Testing in software development is an approach of using test automation in agile methodologies.
• The purpose of agile automation testing is to make the software development process more effective and efficient while maintaining quality and keeping time and resource consumption under control.
• Thus, the implementation of such a process requires a lot of
coordination and collaboration between teams.
• Agile Test Automation bolsters quality assurance and quickens
application delivery.
• Some of the core practices of agile test automation can be listed as
follows:
• Automation based on coverage: The scope of test automation depends on
the amount of code that has to be covered. As part of the test automation
execution system, test traceability can be easily understood through test
automation runs that are based on code-coverage.
• System level automation: In an agile workflow, dependent on team input
and user feedback, the UI is bound to experience many changes and
multiple versions. So in terms of UI maintenance, test automation tends to
be very time-consuming. In order to keep maintenance costs down and
enhance overall coverage, automation needs to be conducted at the level of
systems and services.
• Development driven by testing: Testers need to work closely with product development teams; the testers first design the automation tests, and those tests become the foundation for the source code. Implementing such testing therefore requires persistent collaboration between the different teams (a minimal test-first sketch follows at the end of this list).
• Automated testing before its manual counterpart: Before automated testing
was as widespread as it is now, a round of manual testing was necessary
before implementing a round of test automation. But in today’s fast-paced
market with rigid demands, teams don’t really have the time to engage in
manual testing. They dive straight into the automated environment, but in
spite of the efficiency and thoroughness of this process, it would be a good
idea for testers to conduct one manual run-through to confirm the stability of
the application and get rid of any glitches that the program may have
ignored.
• Choice of Tool: In an agile workflow, poor decisions and erroneous choices
can have detrimental effects that could take a long time to reverse. Selecting
the right tools for the job is absolutely key to ensuring successful test runs,
and testers can choose from a wide variety of commercial and open-source
solutions. Apart from the suitability of a particular tool to a particular
automation issue, testers need to take into account a number of other
potential problems, such as integration capabilities, installation
requirements, overall cost, maintenance, and compatibility with the testing
environment.
• Verifying test automation code: The automation code itself needs to be
tested to ensure consistency and high quality. The code needs to be verified
top to bottom, and all issues must be eliminated before implementing a
test of any product. In an agile workflow, the pressing lack of time means
that the code has to be flawless, and has to guarantee low maintenance
costs, reliability, and robustness. In test automation, each step (tool
choice, framework design, data generation, test design, code review,
execution, maintenance, etc.) is handled in a sequential flow, which means
that any automation program is conducted in a traditional testing
environment by one single tester who takes care of each step.
• Sharing code to encourage code usage across teams: Development, build,
and operations teams should ideally be kept in the loop with regard to any
given automation code. The advantages of such transparency are numerous
– a general increase in the focus on product quality, shorter test and dev
cycles, and the free sharing of knowledge to facilitate an efficient workflow.
Bringing in people from the non-testing departments brings in
new perspectives and approaches to dealing with potential issues, and so
the automation code is more likely to be reliable.
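• A minimal test-first sketch (Python with pytest) for the "Development driven by testing" practice above: the test is written before the code it exercises, and the implementation is added only to make the test pass. The discount_price function is hypothetical and used only for illustration:

    def test_discount_price_applies_percentage():
        # Written first; it fails until discount_price is implemented below.
        assert discount_price(200.0, 10) == 180.0

    def discount_price(price, percent):
        # Implementation added afterwards, just enough to make the test pass.
        return round(price * (1 - percent / 100), 2)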