Test Design Techniques

1. Login with valid credentials to test successful login functionality.
2. Attempt login with an invalid username at the minimum boundary to test invalid login handling.
3. Attempt login with an empty password field at the maximum boundary to test mandatory field validation.

The first test case covers the valid login equivalence partition. The second and third test cases cover the invalid login partitions by testing at the minimum and maximum boundaries: an invalid username and an empty password respectively. This provides coverage of the key functionality and edge cases for the login page using equivalence partitioning and boundary value analysis.

Software Quality Assurance

Pavithra Subashini
Senior Lecturer
Faculty of Computing
A test case is defined as:
• A set of test inputs, execution conditions and expected results, developed for a particular objective.
• Documentation specifying inputs, predicted results and a set of execution conditions for a test item.
• Specific inputs that will be tried and the procedures that will be followed when the software is tested.
• A sequence of one or more subtests executed in order, where the outcome and/or final state of one subtest is the input and/or initial state of the next.
• A specification of the pretest state of the AUT (application under test) and its environment, and the test inputs or conditions.
• The expected result specifies what the AUT should produce from the test inputs.
Test Cases

• Test Case ID
• Test Title
• Test Summary / Objective / Description
• Test Case Steps
• Test Data
• Expected result
• Post Condition
• Actual result
• Status (Pass/Fail)
• Notes / Comments
Good Test Cases

• Find defects
• Have a high probability of finding a new defect
• Produce an unambiguous, tangible result that can be inspected
• Are repeatable and predictable
• Are traceable to requirements or design documents
• Push the system to its limits
• Can have their execution and tracking automated
• Do not mislead
• Are feasible
Test Case Design Techniques

Black-box testing (or functional testing):
• Equivalence partitioning
• Boundary value analysis
• Cause-effect graphing
• Behavioural testing
• Random testing
Black-box: Three major approaches

• Analysis of the input/output domain of the program:
Leads to a logical partitioning of the input/output domain into ‘interesting’ subsets.

• Analysis of the observable black-box behaviour:
Leads to a flow-graph-like model, which enables application of techniques from the white-box world (on the black-box model).

• Techniques like risk analysis, random input, stress testing.
Black-box: Equivalence Partitioning

Equivalence Partitioning, also called equivalence class partitioning and abbreviated ECP, is a software testing technique that divides the input test data of the application under test into partitions of equivalent data from which test cases can be derived, with each partition covered at least once.
An advantage of this approach is that it reduces the time required for testing the software, because fewer test cases are needed.
Black-box: Equivalence Partitioning

Example:
If an application accepts an input range from 1 to 100, using equivalence classes we can divide the inputs into classes, for example valid input and invalid input, and design one test case from each class.
In this example the test cases are chosen as below (a code sketch follows the list):
• One for the valid input class, i.e. select any value from the input range 1 to 100. We do not write a hundred test cases, one for each value; any one value from this equivalence class should give the same result.
• One for invalid data below the lower limit, i.e. any value below 1.
• One for invalid data above the upper limit, i.e. any value above 100.
Boundary Value Analysis

• Boundary value analysis is a type of black-box or specification-based testing technique in which tests are performed using the boundary values.
• Boundary testing comes after equivalence class partitioning.
• Testing is done at the extreme ends, or boundaries, between partitions of the input values.
Example:
Input data range: 1 to 10 (boundary values 1 and 10)
Test input data: 0, 1, 2, 9, 10, 11
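As a minimal sketch of this example, again assuming a hypothetical accept_value function (this time for the range 1 to 10), the six boundary test values can be checked as follows:

# Hypothetical system under test (assumption): accepts integers from 1 to 10.
def accept_value(n):
    return 1 <= n <= 10

# Boundary value analysis: test just below, on, and just above each boundary.
boundary_cases = [
    (0, False),   # just below the lower boundary
    (1, True),    # on the lower boundary
    (2, True),    # just above the lower boundary
    (9, True),    # just below the upper boundary
    (10, True),   # on the upper boundary
    (11, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    actual = accept_value(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: input {value:2d}, expected {expected}, got {actual}")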
Decision Table

• A decision table is a good way to deal with combinations of inputs.
• Also referred to as a cause-effect table.
• Focused on business logic or business rules.
• Decision tables provide a systematic way of stating complex business rules.

Note: the number of possible combinations is given by 2^n, where n is the number of inputs. For example, with 2 inputs we would need 2^2 = 4 combinations.
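To illustrate the 2^n rule, the sketch below enumerates the four combinations for a hypothetical login rule with two Boolean conditions (username correct, password correct). The login function and its expected actions are assumptions for illustration, not taken from the slides.

from itertools import product

# Hypothetical business rule (assumption): login succeeds only when both
# the username and the password are correct.
def login(username_ok, password_ok):
    return "Home page" if (username_ok and password_ok) else "Error message"

# Decision table: 2 inputs -> 2 ** 2 = 4 rules (combinations of condition values).
print("Rule | Username OK | Password OK | Action")
for rule, (u_ok, p_ok) in enumerate(product([True, False], repeat=2), start=1):
    print(f"{rule:4d} | {str(u_ok):11} | {str(p_ok):11} | {login(u_ok, p_ok)}")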
Decision Table

Advantages:
1) It provides complete coverage of test cases, which helps to reduce rework when writing test scenarios and test cases.
2) Any complex business flow can easily be converted into test scenarios and test cases using this technique.
3) It is simple to understand, and everyone can use this method to design test scenarios and test cases.
4) These tables guarantee that we consider every possible combination of condition values. This is known as the “completeness property”.
Cause Effect Graph

A “Cause” represents a distinct input condition that brings about an internal change in the system.
An “Effect” represents an output condition, a system transformation or a state resulting from a combination of causes.
It is also known as an Ishikawa diagram, as it was invented by Kaoru Ishikawa, or a fishbone diagram because of the way it looks.

Circumstances under which a Cause-Effect Diagram is used

• To identify the possible root causes, the reasons for a specific effect, problem or outcome.
• To relate the interactions of the system among the factors affecting a particular process or effect.
• To analyze existing problems so that corrective action can be taken.
Benefits:
• It helps us determine the root causes of a problem or quality characteristic using a structured approach.
• It uses an orderly, easy-to-read format to diagram cause-and-effect relationships.
• It indicates possible causes of variation in a process.
• It identifies areas where data should be collected for further study.
• It increases knowledge of the process by helping everyone learn more about the factors at work and how they relate.
Error Guessing

• Makes use of a tester’s skill, intuition and experience in testing similar applications to identify defects that may not be easy to capture by the more formal techniques. It is usually done after the more formal techniques are completed.
• Mainly used during release of the application.
• Usually used by experts.
• It is less reproducible.
• Coverage is limited by experience, knowledge and the time available.
• The types of defects found depend on the scenarios that were guessed.
• Checklist-based testing.
• Typical conditions to try include division by zero, blank (or no) input, empty files and the wrong kind of data (e.g. alphabetic characters where numeric ones are required), as in the sketch after this list.
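A minimal sketch of a checklist-driven error-guessing run over those typical conditions, assuming a hypothetical compute_ratio function as the system under test (its name and behaviour are illustrative only):

# Hypothetical function under test (assumption): parses a numeric field
# and divides 100 by it.
def compute_ratio(raw_value):
    return 100 / int(raw_value)

# Error-guessing checklist: inputs experience suggests are likely to break things.
guessed_inputs = ["0", "", "abc", "   ", None]

for raw in guessed_inputs:
    try:
        print(f"input {raw!r}: result = {compute_ratio(raw)}")
    except Exception as exc:  # record how the system fails, if it does
        print(f"input {raw!r}: raised {type(exc).__name__}: {exc}")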
Random Testing

What is Random Testing?
Random Testing, also known as monkey testing, is a form of functional black-box testing that is performed when there is not enough time to write and execute scripted tests.

Random Testing Characteristics:
• Random testing is performed where defects are NOT identified at regular intervals.
• Random input is used to test the system's reliability and performance.
• It saves time and effort compared with scripted test efforts.
• It can be applied where other testing methods are difficult to use.
Random Testing Steps (see the sketch below):
• Random inputs are identified to be evaluated against the system.
• Test inputs are selected independently from the test domain.
• Tests are executed using those random inputs.
• Results are recorded and compared against the expected outcomes.
• Issues are reproduced/replicated, defects raised, fixed and retested.
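The steps above can be sketched roughly as follows, assuming a hypothetical absolute_value function as the system under test and Python's built-in abs as the oracle; the function, the input domain and the fixed seed are illustrative assumptions.

import random

# Hypothetical function under test (assumption).
def absolute_value(n):
    return -n if n < 0 else n

# Steps 1-2: random inputs are selected independently from the test domain.
random.seed(42)  # fixed seed so any failure can be reproduced/replicated
inputs = [random.randint(-1000, 1000) for _ in range(20)]

# Steps 3-4: execute with the random inputs and compare against expected outcomes.
failures = []
for value in inputs:
    expected = abs(value)      # oracle for the expected outcome
    actual = absolute_value(value)
    if actual != expected:
        failures.append((value, expected, actual))

# Step 5: record the results; failing inputs are kept so the issue can be retested.
print(f"{len(inputs)} random tests run, {len(failures)} failures")
for value, expected, actual in failures:
    print(f"  input={value} expected={expected} actual={actual}")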
Items in test cases
• Test Case ID
• Test Description / Test case name
• Prerequisite
• Test Steps
• Input Data
• Expected Result
• Actual Result
• Status
• Severity
• Priority
• Executed by
• Comments
Project details
• Project name
• Module name
• Created by
• Created date
• Reviewed by
• Reviewed date
Sample 1
• Equivalence partitioning
• Boundary value analysis
• Open an Excel sheet
• Start writing the test cases
Activity

• Write test cases to test the web page shown below.
