Version: 01.00
Developed by: Date:
Revised by: Date:
Approved by: HOT – Olga Barbu Date:
Test Case Design
Table of Contents
1 Revision history
2 Introduction
3 Objectives
4 Software Testing Best Practices
4.1 The Best Practices for Designing Test Cases
4.2 Best Practices for Functional Test Design
4.2.1 Review Test Plans to Identify Candidates for Automation
4.2.2 Define a Common Structure and Templates for Creating Tests
4.2.3 Define Test Script Naming Conventions
4.2.4 Design Flexible Scripts with Defined Purpose
4.2.5 Design Modular Scripts
4.2.6 Design Reusable Scripts
4.2.7 Make Test Scripts Independent of the Operating Environment
4.2.8 Make Test Scripts Independent of Test Data
5 General Guidelines in writing Test Cases
5.1 Format of Test Case
5.2 Test Case Attributes
5.3 Test Case Design Styles
5.4 Test Case Writing Style
5.5 Reuse of Test Cases
5.6 Test Cases Prioritization
5.7 Test Cases Maintenance
6 Tips for Writing Test Cases
7 Test Case Writing Best Practices
7.1 Test Case Style
7.2 Required Test Case Content
7.3 Additional Test Case Content
7.4 New Feature Coverage
7.5 A Final Word
8 How to write effective Test cases, procedures and definitions
8.1 Functional Testing Best Practices
1 Revision history
2 Introduction
This document defines the best practices recommended for the Test Analysis and Design discipline at
Endava. The target audience is Test Engineers interested in or involved in Test Analysis and Design
activities.
3 Objectives
The purpose of this document is to help Test Engineers create effective Test Cases and to present the best practices of Test Case design. General recommendations are given on how to generate Test Cases from Use Cases.
After the Master Test Plan is approved, the testing team moves to the next phase, 'Test design'. During this phase, all three of the listed documents will be prepared.
1. Tester should ask questions, contribute feedback and make suggestions. It's important to analyse the documentation and ask about unclear requirements.
2. Tester should be proactive.
Example
Test Scenario: Validate the login page
Test Case 1: Enter a valid username and password
Test Case 2: Reset your password
Test Case 3: Enter invalid credentials
Exhaustive testing is not possible due to the large number of data combinations and the large number of possible paths in the software. Scenario testing makes sure that the end-to-end functionality of the application under test is working as expected and ensures that all business flows work as expected. As scenarios are essentially user journeys, in scenario testing the tester puts themselves in the end user's shoes and performs the actions just as the end users would use the application under test.
The preparation of scenarios is the most important part. The tester needs to consult or take help from the client, business users, BAs (Business Analysts) or developers. Once these test scenarios are determined, test cases can be written for each scenario. Test scenarios are the high-level concept of what to test.
So…
3. Tester must have a good understanding of the business and functional requirements of the application. Scenarios are very critical to the business, as test cases are derived from test scenarios, so any gap in a Test Scenario leads to missing Test Cases as well. That is why the scenario writer plays an important role in project development; a single mistake can lead to a huge loss in terms of cost and time.
4. Tester must have gone through the requirements carefully. In case of any doubts or need for clarification, the POCs (Points of Contact) should be contacted.
5. Understand the project workflow and wireframes (if available) and relate them to the requirements.
A Test Case, in simple terms, is documentation which specifies inputs, pre-conditions, a set of execution steps and an expected result. A good test case is one which is effective at finding defects and also covers most of the scenarios/combinations on the system under test.
A test case has components that describe an input, action or event and an expected response, to determine whether a feature of an application is working correctly.
Test Case writing is an activity which has a great impact on the testing phase, and this makes test cases an important part of the test execution process.
The Test Plan describes what to test, while a Test Case describes how to perform a particular test. It is necessary to develop a test case for each test listed in the test plan.
Knowing how to write good test cases is extremely important, and it does not take too much effort and time to write effective test scripts; you just need to follow certain guidelines while writing test cases.
Step 1: Detailed Study of the System Under Test
Before writing test cases, it is very important to have detailed knowledge of the system which you are testing. It can be any application or any software. Try to get as much information as possible through available documentation such as use cases, user guides, tutorials, or by getting hands-on experience with the software itself.
Gather all the possible positive scenarios and also the odd cases which might break the system, such as stress conditions, uncommon combinations of inputs, etc.
It is desirable to cover as many branches as possible; test data can be generated such that all branches in the program source code are tested at least once.
Path testing: all paths in the program source code are tested at least once - test data can be designed to cover as many cases as possible.
Negative API testing:
o Test data may contain invalid parameter types used to call different methods
o Test data may consist of invalid combinations of arguments which are used to call the program's methods
Confidentiality: All the information provided by clients is held in the strictest confidence and is not shared with any outside parties. As a short example, if an application uses SSL, you can design a set of test data which verifies that the encryption is done correctly.
Integrity: Determine that the information provided by the system is correct. To design suitable test data you can start by taking an in-depth look at the design, code, databases and file structures.
Authentication: Represents the process of establishing the identity of a user. Test data can be designed as different combinations of usernames and passwords, and its purpose is to check that only authorized people are able to access the software system.
Authorization: Tells what the rights of a specific user are. Test data may contain different combinations of users, roles and operations in order to check that only users with sufficient privileges are able to perform a particular operation, as the sketch below illustrates.
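As an illustration only (the client fixture, roles and operation names below are hypothetical), such authorization test data can be expressed as a parametrized set of user/role/operation combinations:

```python
# Sketch of authorization testing driven by role/operation combinations.
# The api_client_factory fixture and the operation names are hypothetical.
import pytest

CASES = [
    # (role, operation, expected_status)
    ("admin",  "delete_user", 200),   # sufficient privileges
    ("editor", "delete_user", 403),   # insufficient privileges
    ("viewer", "edit_region", 403),
    ("admin",  "edit_region", 200),
]

@pytest.mark.parametrize("role, operation, expected_status", CASES)
def test_authorization(api_client_factory, role, operation, expected_status):
    """Only users with sufficient privileges may perform the operation."""
    client = api_client_factory(role=role)   # log in as a user with the given role
    response = client.perform(operation)     # attempt the operation under test
    assert response.status_code == expected_status
```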
After completing the test case design and before starting test case execution, it is necessary to prepare the Requirement Traceability Matrix (RTM). The RTM document is generally used to ensure test coverage. First, all the high-level requirements should be drilled down into several low-level requirements. Then the Test Cases should be mapped to all the low-level requirements. Every requirement should have a minimum of one test case and a maximum of n test cases. If all the requirements are mapped to test cases, then we can ensure the test coverage. If some of the requirements do not have test case(s), the testing team prepares the test cases for those particular requirement(s) and maps them to the requirements before starting test case execution. In this way, the testing team will not miss any functionality from the requirement specification during the testing phase.
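As a minimal illustration (the requirement and Test Case IDs below are made up), the RTM can be kept as a simple mapping from low-level requirements to Test Cases, which also makes it easy to spot uncovered requirements and to compute coverage:

```python
# Minimal RTM sketch: low-level requirement -> mapped Test Cases (IDs are hypothetical).
rtm = {
    "REQ-01.01": ["TC01.01.01.01", "TC01.01.01.02"],
    "REQ-01.02": ["TC01.01.02.01"],
    "REQ-01.03": [],  # not yet covered by any Test Case
}

uncovered = [req for req, test_cases in rtm.items() if not test_cases]
coverage = 100.0 * (len(rtm) - len(uncovered)) / len(rtm)

print(f"Uncovered requirements: {uncovered}")     # -> ['REQ-01.03']
print(f"Requirements coverage: {coverage:.0f}%")  # -> 67%
```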
Creating traceability between the Test Basis and the Test Cases at every moment is a good practice; it allows us to calculate the requirements testing coverage.
- Write Test Cases before the implementation of the requirements.
- Write Test Cases for all the requirements.
- Test Cases should map precisely to the requirements and not be an enhancement to the requirement.
4.6 To design the test environment set-up and identify the required infrastructure and tools.
These steps should be followed:
1. Understand the test requirements thoroughly and educate the test team members.
2. Connectivity should be checked before the initiation of the testing.
3. Check for the required hardware, software and licenses.
4. Browsers and versions.
5. Plan out the scheduled use of the test environment.
6. Automation tools and their configurations.
Summary:
A testing environment is a setup of software and hardware on which the test team will conduct
the testing
For the test environment, key areas to set up include
o System and applications
o Test data
o Database server
o Front end running environment, etc.
A few challenges while setting up the test environment include:
o Remote environment
o Combined usage between teams
o Elaborate setup time
o Ineffective planning for resource usage for integration
o Complex test configuration
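A minimal sketch of how such an environment set-up could be recorded in one place (all values below are illustrative assumptions, not a prescribed configuration):

```python
# Illustrative test environment description; every value below is a made-up example.
test_environment = {
    "system_under_test": {"url": "http://test.example.com", "build": "1.4.0"},
    "database_server": {"host": "db-test-01", "engine": "SQL Server 2019"},
    "front_end": {"browsers": {"Chrome": ["120", "121"], "Firefox": ["121"]}},
    "test_data": "smoke_dataset_v2",
    "automation_tools": {"framework": "pytest", "driver": "Selenium 4"},
    "schedule": {"reserved_for": "integration team", "slot": "Mon-Wed"},
}

# A simple check that the key areas listed above are covered:
for area in ("system_under_test", "test_data", "database_server", "front_end"):
    assert area in test_environment, f"Missing environment area: {area}"
```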
The main goal of Test Case development is to validate the testing coverage of the software product under test. The following recommendations should be considered:
- Test Cases should map precisely to the requirements and not be an enhancement to the requirement.
- The Test Analyst must use a predefined, standardized way of documenting the Test Cases. This format will improve traceability and will also facilitate communication with the Test Engineers who will execute the tests.
The factors that should be considered during the Test Case design and development process are presented next:
• Clear objective
Like any set of best practices, they will be most effective when you customize them to meet your needs.
Test Case writing is an activity which has a solid impact on the whole testing phase. This fact makes the task of documenting TCs very critical and subtle, so it should be properly planned first and must be done in a well-organized manner. The person who is documenting the TCs must keep in mind that this activity is not for him or her only; a whole team, including other testers and developers, as well as the customer, will be directly and indirectly affected by this work.
There are some important and critical factors related to this major activity. Let us have a bird’s eye view of
those factors first.
b. Test Cases are prone to distribution among the testers who will execute them:
Normally there are several testers who test different modules of a single application, so the TCs are divided among them according to the areas of the application under test that they own.
according to the business logic of the AUT, a single TC may contribute to several test conditions, and a single test condition may consist of multiple TCs.
The clearest area of any application where this behaviour can definitely be observed is the interoperability
between different modules of same or even different applications.
e. Test Cases are prone to distribution among developers (especially in TC driven development
environment):
An important fact about TCs is that these are not only to be utilized by the testers. In the normal case, when a bug is being fixed by the developers, they are indirectly using the TC to fix the issue. Similarly, where TC-driven development is followed, TCs are directly used by the developers to build their logic and cover, in their code, all scenarios addressed by TCs.
So, keeping in mind, here are some tips to write test cases:
Do not Assume
Do not assume functionality and features of your software application while preparing test cases.
Stick to the Specification Documents.
It's not possible to check every possible condition in your software application. Testing techniques help you
select a few test cases with the maximum possibility of finding a defect.
Boundary Value Analysis (BVA): As the name suggests, it is the technique that defines the testing of boundaries for a specified range of values (see the sketch after this list).
Equivalence Partition (EP): This technique partitions the range into equal parts/groups that tend to have the same behaviour.
State Transition Technique: This method is used when the software behaviour changes from one state to another following a particular action.
Error Guessing Technique: This is guessing/anticipating the errors that may arise while testing. This is not a formal method and it takes advantage of a tester's experience with the application.
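A minimal sketch of how BVA and EP can drive the selection of test data, assuming a hypothetical field that accepts ages from 18 to 65 inclusive:

```python
# Boundary Value Analysis / Equivalence Partitioning sketch for a hypothetical
# "age" field that accepts values from 18 to 65 inclusive.
import pytest

def is_valid_age(age: int) -> bool:
    """Stand-in for the validation logic of the system under test."""
    return 18 <= age <= 65

CASES = [
    (17, False), (18, True), (19, True),   # BVA: values around the lower boundary
    (40, True),                            # EP: representative of the valid partition
    (64, True), (65, True), (66, False),   # BVA: values around the upper boundary
]

@pytest.mark.parametrize("age, expected", CASES)
def test_age_validation(age, expected):
    assert is_valid_age(age) is expected
```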
Depending on the system's requirements, the test case framework can be designed. Typically, test cases should cover several categories: functionality, compatibility, UI, fault tolerance, and performance.
The test case should generate the same results every time, no matter who tests it.
Establish good feature coverage. Consider the following when trying to achieve good coverage
of a feature:
o The importance of the feature and how often the end user or customer may use it.
o The commonality of the feature. If a feature includes basic functionality that is
commonly performed, then including other features in the test case will make better use
of the time spent testing the feature.
Plan for regression testing. Keep in mind that new feature test cases may be used to perform
future regression testing after the initial release. Test cases covering functionality that will not
require testing going forward need to be identified as such, so that those tests aren't included in
future releases. Optionally, instead of writing a 'run once' test case, create a checklist to test the
functionality.
Consider checklists. Checklists can be written to test certain types of new features, like those
that require a large amount of setup and configuration but don't need many steps to execute the
test. Checklists are also useful for testing features that require in-depth validation of voluminous
data, fields, security options, and items that are only tested once.
Firstly, there are levels into which each test case will fall in order to avoid duplication of effort, as mentioned below:
Level 1: In this level you will write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you will group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, a maximum of 10.
Level 4: Automation of the project. This will minimize human interaction with the system, and thus QA can focus on currently updated functionalities to test rather than remaining busy with regression testing.
So you can observe a systematic growth from no testable item to an automation suite.
Test Case – a Test Case consists of the following components which describe a Test Item under test:
(Note: Mandatory fields are marked with a red asterisk)
Use the same naming convention for all Test Cases in a project.
Include “TC” + Test Case identifier and Title - high level description of functionality under test (a
short phrase describing the test scenario).
A useful way of formulating a test case title is the Action Target Scenario method, where:
Action – a verb that describes what you are doing, some good examples are: create, delete,
ensure, edit, open, populate, observe, login, etc.
Target – the focus of your test, usually a screen, object entity, program, etc.
Scenario – The rest of what your test is about and how you distinguish multiple test cases for the
same Action and Target.
Example1: Create – Task – title is not supplied
Create – Task – title is the maximum allowable length
The Test Case identifier should be unique not only within the concrete Test Suite which contains
that Test Case, but also within all Test Suites of the test project.
The covered Use Case ID/ User Story ID/requirement ID should be part of the Test Case identifier.
Example2:
The “TC01.01.02.03 Create duplicate user” Test Case is part of the “TS 01.01.02 Create user – negative approach” Test Suite.
First part, “01.01” - is the unique Use Case ID/User Story ID/requirement ID covered by the Test
Suite (and it is part of the Test Suite name too),
Second part, “02” – is the Test Suite unique number.
Third part, “03”- is the Test Case unique number per Test Suite.
When adding a new Test Case to the Test Suite, the Test Case unique number assigned is the maximum third-part number among the existing IDs plus 1.
Example3: For use case “UC 01.01 Create user” one parent Test Suite is created, TS 01.01 Create User, with two child Test Suites:
TS 01.01.01 Create user-positive scenario, that includes:
- TC01.01.01.01 Create user - alphanumeric characters for username and password,
- TC01.01.01.02 Create user - capitals letters in username and password,
- TC01.01.01.03 Create user - max length fields with mixed alphanumeric and capitals for username
and password
- TS 01.01.02 Create user – negative approach, which includes:
- TC01.01.02.01 Create user without username,
- TC01.01.02.02 Create user without Password,
- TC01.01.02.03 Create duplicate user,
- TC01.01.02.04 Create user - username exceeds max length,
- TC01.01.02.05 Create user – password exceeds max length,
etc.
Note: If a Test Case covers the integration between several requirements/use cases, these must be listed in the Test Case description field and only the main Use Case ID is used in the Test Case name.
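A small sketch of the numbering rule described above (the next Test Case number within a Test Suite is the maximum existing number plus 1); the IDs reused here are the ones from the examples:

```python
import re

def next_tc_id(suite_id: str, existing_tc_ids: list[str]) -> str:
    """Return the next Test Case ID for a suite, following the
    'TC<suite id>.<sequence>' convention described above."""
    pattern = re.compile(rf"^TC{re.escape(suite_id)}\.(\d+)$")
    numbers = [int(m.group(1)) for tc in existing_tc_ids if (m := pattern.match(tc))]
    return f"TC{suite_id}.{max(numbers, default=0) + 1:02d}"

existing = ["TC01.01.02.01", "TC01.01.02.02", "TC01.01.02.03"]
print(next_tc_id("01.01.02", existing))  # -> TC01.01.02.04
```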
2. Test description (not mandatory, but strongly recommended): Describe the purpose of the test,
what test condition will be verified by the Test Case.
If Test Case covers the integration between several requirements/use cases, these will have to be
listed.
3. Environment: List specific hardware, software and network configuration that needs to be set up
for the test.
4. Preconditions: These are required when some additional preparation is necessary before
executing the Test Case: software configuration, hardware configuration and resources needed,
security access, tools.
The tester should see the named GUI screen or web page. The general correctness of the
page should be testable based on the feature description.
verify CONDITION
The tester should see that the condition has been satisfied. This type of step usually follows
a "see" step at the end of the test case.
verify CONTENT [is VALUE]
The tester should see the named content on the current page, the correct values should be
clear from the test data, or given explicitly. This type of step usually follows a "see" step at
the end of the test case.
perform TEST-CASE-NAME
This is like a subroutine call. The tester should perform all the steps of the named test case
and then continue on to the next step of this test case.
Every test case must include a verify step at the end so that the expected output is very clear. A test
case can have multiple verify steps in the middle or at the end. Having multiple verify steps can be
useful if you want a smaller number of long tests rather than a large number of short tests.
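A short illustrative example of a test case written with these step keywords (the page, fields and values are hypothetical):
perform "Login as Customer User"
see "Create Order" page
verify all mandatory fields are marked with an asterisk
verify "Order total" is "0.00"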
6. Expected Results*: The description of what one expects the function to do.
Expected Results is the information we use to decide whether a Test Case passed or failed (by comparing it to the Actual Results). Additionally, expected results may be added to individual steps when needed for clarity.
6.1 Precision and clarity play a critical role in defining Expected Results.
Example:
As executing the Test Case, what if the error message says, "Please provide postcode," while it
should say, "Your postcode is invalid"?
It is not necessarily required to provide the exact text of an error message, because this text can be
often changed, so, for maintainability purposes, it is recommended to define Expected Result like:
"Verify that the error message about an invalid postcode is displayed".
Note: If the customer requires a concrete message to be displayed and it is part of the requirements,
then it must be included in Expected Results for verification.
1. Each Test Case checks only one testing idea, but two or more expected results are totally
acceptable if there is a need to perform several verifications for that testing idea.
When, in order to verify one test idea, the system should meet two expected results, there are two options:
1. Split the test idea into two separate ideas and create two Test Cases.
2. Keep the test idea and have two Expected Results in one Test Case. The Test Case would pass if (and only if) both Actual Results match the corresponding Expected Results. In all other cases, the Test Case would fail.
From a practical point of view, there are many situations in which the second approach makes it possible to save time and effort in Test Case creation and maintenance.
Expected results for “Login” and “Navigate to..” steps are not required in the above Test Cases, as
the purpose of the tests is to verify that the user is able to successfully create orders/ keywords.
Login and page displaying should be verified in separate Test Cases.
Good example:
TC ID: TC01.01 Verify Customer User is able to create order
Execution Steps:
1.1 Login as Customer User.
1.2 Navigate to Create Order page (http://test/CreateOrder.aspx).
Complete all fields with valid data.
Submit data.
Expected Results: Info message is displayed that the order is successfully created. Newly created order is displayed in the orders list.
7 Test Data: Test data required for executing the test case.
Additionally, depending on test management tool used, a Test Case can include:
1. Assigned keywords. It is recommended that all Test Cases have assigned at least one value from the following sets of keywords: - Complexity (indicates the size and effort of a Test Case; it can be used for test effort estimation).
- Functionality (is useful especially when similar functionality is implemented in different application
modules. In case of a change to it, the keyword will help Test Engineer to quickly identify all affected
Test Cases and also will help to decide which Test Cases are required or not for a specific test
execution round. Example of keyword: Advanced Search).
- Test Plan Type.
The following Test Plan types are typically used in Endava projects:
- Smoke Test Plan, that includes Test Cases commonly run on each new build to catch regressions;
they cover critical path Test Cases that are targeted on features and functionalities that the user
sees and uses frequently; all components and all features should be briefly tested; these Test Cases
are usually automated.
- Basic Functional Test Plan, that includes Test Cases run before every release; they cover the
minimum basic functionality of the system under test.
- Detailed Test Plan, that includes Test Cases run before a major release of the system; they cover detailed aspects of the system's functionality.
The keywords will help to determine whether or not to run a Test Case at a specific test execution phase and to easily identify all test cases which cover a certain area.
3. Additional customizable fields. The additional fields are used as per project needs. Some of the
recommended ones are:
Editable upon Creation: Type (Manual|Automated); Estimated Execution Time, Priority.
Editable upon Execution: Browser; Actual Execution time.
4. Set up and clean up. Test cases should include set up and clean up information as needed. If pre-
conditions fields are used, they should be concise and only include useful information that describes
the conditions that must be met before performing the test.
5. A Scope field. Use the Scope field to capture information about test boundaries, related features,
additional information about the test, and included or excluded items (for example, operating
systems, database types, client types, and release or sprint specific tests).
6. Examples and screenshots. Test cases should contain examples and screenshots, where needed,
to help testers better understand how to execute the test and what they should be seeing. It may
also be helpful to include examples of commands to run, a list of directories where files on specific
operating systems can be found, and any other examples that may aid testers.
The designed Test Cases should strike the best possible balance between being:
- Non-redundant: Be practical and have a low redundancy. Any feature or functionality to be tested
should not be repeated in different Test Cases. Two Test Cases should not find the same defect.
- Clear: Clear flow of events; clear correspondence between execution steps and expected results;
unambiguously defined execution steps and expected results.
- Detailed: Test Cases should contain detailed steps that are needed to test a particular function; no
missing execution steps; no unnecessary execution steps.
- Accurate: The Test Case should be free of drawbacks such as spelling mistakes, and should use the system's exact functionality/GUI names.
- Short and Simple language: The Test Case should be short rather than lengthy and it should be
written in simple language, so that any person is able to understand the scope of each Test Case.
- Evolvable: They should be well structured and maintainable; neither too simple nor too complex;
separated Test Cases for positive and negative scenarios. Test Case is a basic unit of testing and it is
a general practice to perform estimation of test effort using Test Case enumeration technique.
Generally, it is recommended to limit Test Cases to 15 execution steps.
- Complete: Test Cases should cover all the features/functionalities that have to be tested:
Test Cases have been developed for all requirements. Test Cases cover the whole described
functionality, not just a part of it.
Test Cases have been developed for all non-functional requirements.
Test Cases have been developed to cover any changed requirement.
Test Cases have been developed for all basic flows in use cases.
Test Cases have been developed for all alternative flows in use cases, positive and negative
testing, boundary and forced error (if applicable).
Test Cases consider the Use Case preconditions; include, exclude dependency relations on
Use Case diagrams.
Test Cases have been developed to cover the dependencies between Use Cases.
Test Cases have been developed for all major/critical issues found in previous releases and
for all issues reported by the stakeholders.
- Enable traceability: each Test Case can be traced back to a requirement/use case. References from
Test Cases to a requirement/use case are very helpful to:
Quickly identify which coverage item(s) a Test Case is covering, for example if the execution of
the Test Case provokes a failure.
Quickly identify requirements/use cases that are not covered with Test Cases yet.
Quickly identify the Test Case(s) that may be affected if a coverage item, let's say a requirement, is changed.
- Repeatable: The result of the Test Case should be always the same, no matter how many times it
has been executed before.
- Self-cleaning – returns the test environment to a clean state. For example, if the test requires a date change on the database server, the date should be reset to the correct one after the test is completed.
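A minimal sketch of such a self-cleaning test using a pytest fixture; the database client and application objects are hypothetical:

```python
import pytest

@pytest.fixture
def shifted_server_date(db_client):
    """Temporarily change the server date and always restore it afterwards."""
    original_date = db_client.get_server_date()   # hypothetical call
    db_client.set_server_date("2031-01-01")       # precondition required by the test
    yield
    db_client.set_server_date(original_date)      # clean-up: return the environment to a clean state

def test_expiry_warning_is_shown(app, shifted_server_date):
    # With the server date moved into the future, the expiry warning should appear.
    assert app.dashboard().shows_expiry_warning()  # hypothetical assertion
```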
2. Independent style: each Test Case is self-contained, does not rely on any other Test Cases. A
good way to check the independence of Test Cases in a Test Suite is to change the order in which the
Test Cases are executed.
Advantages: Any number of Test Cases can be executed in any order.
Note: Independence is a characteristic of a good Test Case.
Disadvantages: Larger and more complex Test Cases, harder to design, create and maintain.
The chosen approach should fit the project needs, time constraints and the audience (e.g. team size).
Example:
TC01.01 Verify successful registration on Website
1. Go to the registration page on the registration Website.
2. Enter your name in the registration application.
3. Access the registration List menu option.
4. Verify your name is now in the list.
Advantages:
Disadvantages:
A low-level Test Case is a Test Case with specific values defined for both input and expected result.
The documentation of a detailed (low-level) Test Case must at least include:
- Unique identification (with a reference to the test basis)
- Execution preconditions
- Execution steps: data and actions
- Expected results
- Examples or Screenshots
Example (detailed version of the Test Cases presented above in high-level style):
TC ID: TC01.01 Verify successful registration on Website
Execution Steps:
1. Launch an IE browser and access Registration page URL, http://www.test.com
2. Enter the name “John” in the First Name field. Hit the tab key on your keyboard.
3. Enter the name “Doe” in the Last Name field. Hit “Enter” on your keyboard. Access “registration List” menu option.
Expected Results:
1. Registration page is displayed. The First Name field is highlighted by default.
2. The Last Name field is now highlighted.
3. The name John Doe is present in the list.
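As an illustration only, a detailed Test Case like the one above could be automated roughly as follows (Selenium WebDriver is assumed; the URL and element locators are hypothetical):

```python
# Rough automation sketch of the detailed registration Test Case above.
# The URL and the element locators (first_name, last_name, registration_list) are assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

def test_successful_registration():
    driver = webdriver.Chrome()
    try:
        driver.get("http://www.test.com")                                        # step 1: open the Registration page
        driver.find_element(By.NAME, "first_name").send_keys("John", Keys.TAB)   # step 2
        driver.find_element(By.NAME, "last_name").send_keys("Doe", Keys.ENTER)   # step 3
        driver.find_element(By.LINK_TEXT, "registration List").click()
        assert "John Doe" in driver.find_element(By.ID, "registration_list").text  # expected result
    finally:
        driver.quit()
```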
Advantages:
- Repeatable;
- It can be executed even by a tester who is just learning the application;
- It is easier to determine pass or fail criteria;
- It is easier to automate;
- It is useful when developing tests for complex functionality areas, like calculation of rates by different mathematical formulas, testing charts and graphs, data import by adding/removing specific tags in XML files, etc.
Disadvantages:
Depending on the size of the application being tested, and given the limited schedules and budgets,
writing detailed Test Cases may be too time consuming, in which case the high-level test descriptions
may be sufficient. The chosen approach should fit the project and product size, time constraints, the audience (e.g. experienced versus non-experienced test team), etc. In any case, take into consideration that excessive detailing of steps can cause difficulty in Test Case maintenance, while excessive abstraction can cause difficulty in Test Case execution.
Effective Test Case design includes Test Cases that rarely overlap, but instead provide effective
coverage with minimal duplication of effort.
Before designing Test Cases, the test analyst must analyze the test basis in order to:
- Identify any patterns of similar action, common requirements, events used by several
transactions/functionalities.
- Capture these patterns in a suite of common Test Cases, so they can be reused and recombined to
execute various functional paths, avoiding duplication of test-creation efforts.
The following are a few frequently occurring test-design patterns that are suitable for reuse:
Add item.
Read and verify existence of identical unchanged data.
Modify and verify matching modified data.
Delete and verify removal of item.
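A minimal sketch of how this add/read/modify/delete pattern can be captured once and reused for different entities (the client object and the entity payloads are hypothetical):

```python
# Reusable CRUD verification pattern; "client" and the entity payloads are hypothetical.
def run_crud_pattern(client, entity: str, create_data: dict, update_data: dict):
    """Add, read, modify and delete an item, verifying the data after each step."""
    item_id = client.create(entity, create_data)
    assert client.read(entity, item_id) == create_data                     # unchanged after creation

    client.update(entity, item_id, update_data)
    assert client.read(entity, item_id) == {**create_data, **update_data}  # modified data matches

    client.delete(entity, item_id)
    assert client.read(entity, item_id) is None                            # item removed

# Reused for different entities, avoiding duplicated test-creation effort:
# run_crud_pattern(api, "region", {"name": "EMEA"}, {"name": "EMEA-2"})
# run_crud_pattern(api, "customer", {"name": "Acme"}, {"name": "Acme Ltd"})
```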
The goal of the test-design process should be to reduce the “reinvention of the wheel” by
recognizing, capturing and reusing common test issues.
Examples:
- The pagination functionality that should be checked on all pages/pop-ups where the number of
entities exceeds <n> items per page.
- Order of items in any list
- Order of buttons on any page/pop-up
Other recommendations:
- Combine in a separate Test Suite all generic functional Test Cases that are common to the entire
application and should be run before major deliveries only.
Example: Menu options displaying, Help files content, translations.
- Combine in separate Test Suites Test Cases that test non-functional requirements: UI
requirements, security and performance.
Test Case prioritization is an important consideration for effectively creating Test Cases. Its role is to reduce the overall number of Test Cases to just the "required" ones, as it is obviously impractical and inefficient to re-execute all Test Cases at every product change. Test Case prioritization helps the test manager to:
Before prioritizing the Test Cases, the test analyst must focus on:
When prioritising Test Cases, the test analyst must take into consideration how critical each Test Case is to the product, which Test Cases should be executed first and which are less important to execute. The Test Case prioritization criteria can differ from project to project. Re-prioritization of Test Cases can occur for each new product release.
It is demonstrated that the top 10%-15% of the Test Cases uncover 75%-90% of the significant defects. Risk prioritisation is a method of choosing the 10%-15% which are the most critical Test Cases.
The following points should be considered:
- product risks;
- customer assigned priority to a specific requirement;
- requirement complexity;
- recently affected functionality by major faults/failures;
- Test Cases dependencies.
High priority - Allocated to all tests that must be executed in any case:
- Test Cases that check the core functionality and are needed to be executed for each build;
- Test Cases that check system critical issues revealed in previous product versions;
- Test Cases that cover highly impacted areas of functionality by latest code/environment changes;
- Test Cases that failed in the last tests execution session;
- Test Cases that cover areas where many issues are usually found (bugs clustering testing principle).
Example: If a Test Case has been designed for regression testing of each release and the Test Case
covers critical functionality then the High priority must be assigned to the Test Case.
Medium priority - Allocated to the tests which can be executed only when time permits:
Low priority - Allocated to the tests which, even if not executed, have low impact on the product
quality; Test Cases executed only when a full regression is required:
- Test Cases that cover alternative use cases path that are rarely used in operational system;
- Test Cases that cover functionality with minor changes that will be rarely used;
- Test Cases that historically always pass.
Prioritisation is also important in designing Test Cases. It helps to focus first on the most important areas. For example, select to design first the system Test Cases that cover:
Due to changes in requirements, design or implementation, Test Cases often become obsolete and out-of-date. Given the pressures of having to complete the testing, testers continue their tasks without ever revisiting the Test Cases. The problem is that if the Test Cases become outdated, the initial work of creating these tests is wasted, and additional manual tests executed without having a Test Case in place cannot be repeated.
The following recommendations should be considered in order to solve this problem:
- As requirements change, the Test Engineers must adjust Test Cases accordingly.
- Test Cases must be modified to accommodate the additional information, details that surface
during the architecture and design phases, and sometimes even during the implementation
phase (as issues are discovered that should have been recognized earlier).
- System functionality may change or be enhanced during the development life cycle. This may
affect several Test Cases which must be redesigned to verify the new functionality. Each TC
modified upon a change request should have in the description the record (email; meeting
minutes, Use Case ID) that describes the change.
- As defects are found and corrected, Test Cases must be updated to reflect the changes and
additions to the system. Sometimes fixes of defects change the way the system works. For
example, Test Cases may have included workarounds to accommodate a major system bug. Once
the fix is done, the Test Cases must be modified to adapt to the change and to verify that the
functionality is now implemented correctly.
- When a new scenario is encountered, it must be evaluated, assigned a priority, and added to the
set of Test Cases.
Test Cases must evolve during the entire software development lifecycle.
It is well known that automated software testing is a good way to increase the effectiveness,
efficiency and coverage of software testing. Once automated tests are created, they can easily be
repeated and extended to perform tasks that are too complex or even impossible for manual testing.
But it is not possible to automate all testing, so it is important to determine what Test Cases should
be automated first.
How to decide what Test Cases to automate?
The decision to automate Test Cases is grounded on the following principles:
Recommendations:
Irrespective of project size, a Test Suite must contain positive tests for all requirements/use cases.
Boundary tests and forced error tests are strongly recommended for critical functionalities. It is
recommended for a Test Suite to contain a collection of Test Cases that fully test a Requirement/Use
Case:
- Positive test
- Boundary tests
- Forced error tests
- Test Cases that cover dependencies between use cases/requirements
Note: If secondary paths are not under test case design scope, they must be included in exploratory
testing.
Example:
For the “UC01.01.01 Manage Regions” Use Case, the “TS01.01.01 Manage Regions” Test Suite is created.
2. Test Suite description: Describe the general idea of what will be tested by this Test Suite, what
functionality will be verified.
Example: Description for Test Suite “UC02.06.07.04 Go to next/previous entity details”:
"Main Actor: Customer User.
Test that the actor is able to scroll through different entity details either to next details or to
previous details from the list with data."
In this section can be included (if required) global settings, information on the system configuration to be used during testing, and the tools required to execute all Test Cases included in the Test Suite.
Example:
Note: Link to a requirements document from Intranet can be included too if required.
Some best practices are provided to assist in your design of functional tests:
Before we start a project, we always set to work crafting the best user stories – i.e. short, simple
descriptions of particular features in the app as desired and described from a user’s perspective.
Testing then against these criteria at a later stage becomes a much more simplified and streamlined
exercise.
Use predefined criteria (like the following list) to identify test plans that are candidates for automation:
Can you automate the entire test plan using preconfigured automation functionality?
Do you need to rearrange the test plan to better suit automation?
Can you reuse existing scripts, or modify scripts from sample libraries?
The best user stories are invariably the simplest, and normally take the form of a very straightforward
formula:
Examples:
• As a customer, I want to review my shopping cart before purchase so I’m confident to spend
From here, we begin the build, and then, when the time comes, embrace rounds of functional testing at
each stage of development, always keeping those basic user stories that we started out with in mind.
the script. In addition, Test Case modules can have a specified number of iterations. For example, a test
case module to add a new account can be constructed to execute an indefinite number of times without
changing any script.
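A minimal sketch of such a module, where the "add a new account" steps iterate over externally supplied data records, so the number of iterations changes without modifying the script (the account service and the data file layout are assumptions):

```python
# Data-driven "add new account" module; account_service and accounts.csv are hypothetical.
import csv

def add_account_module(account_service, data_file: str = "accounts.csv"):
    """Run the same test steps once per data record, without changing the script."""
    with open(data_file, newline="") as handle:
        for record in csv.DictReader(handle):
            account_id = account_service.create_account(record["name"], record["currency"])
            assert account_service.exists(account_id)   # expected result for each iteration
```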
Unit testing is performed on the smallest elements of a system, such as individual classes within an
application. Each component is tested to ensure that it properly handles input and output under
normal operation, borderline use cases, and error conditions.
Integration testing looks at a sub-division of a system, to ensure that all processes within it are
working together smoothly.
Regression testing is a two-part process, applied on fixed code. First, a confirmation test verifies the
integrity of the fix itself, then a regression test on the application as a whole confirms whether or
not the applied fix has broken any of the program's functionality.
Smoke tests may be run as a final check, when the collaboration between developers and testers
results in changes in code close to a finished product. The testing ensures that these changes have
not destabilised the overall structure of the application or caused potentially fatal errors prior to
release.
Usability testing validates each part of the software's GUI (buttons, text boxes, etc.) for their
visibility, interaction, ease of use, and for compliance with relevant standards.
Browser compatibility tests are employed on Web and mobile applications to ensure the software's
performance on various types and versions of browsers. The effects of changing server integration
and links to third-party systems may also be tested.
As functional testing is often time-consuming, a hybrid approach combining the best fit of several
relevant testing methods is wise.
Scenario testing also involves recreating conditions for when network connections are less than
perfect, when a user only has limited battery power, when there are incoming calls, text messages
and other alerts that may pop up.
“If you are testing a mobile application that targets multiple devices, forget about emulators and simulators and get your hands on some real devices. If the test team includes more than two people, get two of each device, put them on the local wireless network and get testing.”
Enabling automated code review for test coverage, complexity, duplication and style is a good practice for virtually any programming language (e.g. Code Climate for automated code testing).
The testing team should also act as a mediator between the development team and the user
community, as feedback and usability issues are reported back from beta tests, and ongoing version
releases. Reports should give a feature-by-feature view of the overall health and defects of an
application that can be used as a template for its improvement.
Flows are structured into steps. Each step should explain what the actor does and what the system
does in response; it should also be numbered and have a title. Alternate flows always specify where
they start in the basic flow and where they go when they end.
As an example, see the full set of Use Case scenarios for the next diagram:
Note: A use case may have a dependency on another use case, which would require interfacing and interaction. Ensure that this dependency and the associated use case flows are captured in the Test Case design.
Manage Regions
(Edit Region operation only will be studied below to simplify the example)
Actor=Global Administrator.
Basic Flow
1. Login to CisionPoint application.
This Use Case starts when the Actor logs in to the CisionPoint application.
2. Actor asks for Manage Regions functionality.
3. System verifies the User permissions.
System ensures that currently logged on user is associated with Regions function -> Manage Regions
permission.
4. System identifies all available Regions.
System displays the identified list to the user, in ascending order by Region Name.
Create function is enabled and Delete function is disabled. If there are more than 20 Regions, then
the system should apply pagination.
5. Actor requests details of a Region.
User clicks on region name link to display region details.
6. System presents details of the requested Region.
Region details are opened in Create/Edit Region pop-up.
7. Actor makes necessary amendments to the Region settings.
Where necessary, the Actor updates the list of available languages for the Region (UC01.01.03 Select available Languages for Region).
8. Actor submits changes.
9. System validates data.
System ensures that Regions settings are the valid ones.
10. System saves changes.
System makes persistent the changes applied by the user to the Region and automatically updates the corresponding settings of all Customers belonging to the current Region and the settings of Customer Users under this Region.
11. System reloads the updated regions list.
System refreshes the list of available Regions and displays it to the user in ascending alphabetical order.
Alternate Flows
1. Actor does not have the Manage Regions permission of the Regions function assigned.
After Step 2 of the Basic Flow, the system hides the Create and Delete functions and disables the Save function on Region Details.
2. Actor wants to clean up the provided settings.
| TC | Test Scenario / Condition | Manage Regions | Valid data | Unique region | Mandatory data | All fields | Clear action | Cancel action | Expected Result |
|---|---|---|---|---|---|---|---|---|---|
| TC3 | Scenario 2: User with no Manage Region permission | – | N/A | N/A | N/A | N/A | N/A | N/A | User is not able to edit the region. System disables the Save functionality on the Region Details pop-up. |
| TC3 | Scenario 3: User cleans up the provided Region details, region name not unique | + | + | - | + | + | + | - | Default settings of the Region are re-established from DB. |
| TC4 | Scenario 3: User cleans up the provided Region details, invalid data in fields | + | - | + | + | + | + | - | Default settings of the Region are re-established from DB. |
| TC5 | Scenario 3: User cleans up the provided Region details, missing mandatory data | + | + | + | - | + | + | - | Default settings of the Region are re-established from DB. |
| TC6 | Scenario 4: User cancels the update Region operation | + | + | + | + | + | - | + | The system redirects the user to the list with Regions. |
| TC7 | Scenario 5: User provides duplicate region name | + | + | - | + | - | - | - | System informs the user about this and asks him to provide a different Region name. |
| TC8 | Scenario 6: User does not provide all mandatory values: does not provide any mandatory value | + | + | + | - | - | - | - | System asks the user to complete all mandatory fields: 'Please complete all mandatory fields'. |
8 Functional analysis
Test Cases are developed based on test basis analysis (requirements, business scenarios, use cases).
In order to create effective Test Cases, the Test Analyst must understand the details and the
complexity of the application.
Even when detailed requirements are available in a project, the interdependency between
requirements is not always obvious. The test analyst must analyse how any change to any part of the
application affects the rest of the application. It is not enough to create Test Cases that just verify
aspects of the change itself. An effective Test Case design must also cover all other areas affected by
this change.
The above steps comprise a good basic test. However, something is missing in order to fully verify this requirement. The question that needs to be answered is how the system is otherwise affected when the customer name is changed. Is there another screen, functionality, or path that uses or is
dependent upon the customer name? If so, it will be necessary next to determine how those other
parts of the application are affected.
Some examples:
1. Verify that the "Create Customer user" functionality in the Customer Users module is now using
this changed customer name.
- Add a customer user record, and verify that the new record has been saved using the new
customer name.
- Perform any other possible functions making use of the changed customer name, to verify that it
does not adversely affect any other previously working functionality.
Analysis and testing must continue until all affected areas have been identified and tested. After the
functional tests have been defined and numerous testing paths through the application have been
derived, additional test design techniques must be applied to narrow down the inputs for the
functional steps to be executed during testing.
· Define dependencies with other Test Cases - prerequisite for the Test Case
· Define input data (if any)
· Define output of the Test Case
· Define pass/fail/partial pass criteria
6. Document the Test Case
7. Walk through (dry run) the Test Case on the application
8. Review Test Case design to identify
· Missed conditions and paths
· Need for more Test Cases
· Defects in existing Test Cases
9. Update the Test Case design
10. Verify the Test Case design and close the review findings
Test Case(s) pertaining to a specific user interface and user interface path can be grouped together
to form a Test Suite.
11 Glossary
Test objectives - List of all the business processes that the application is required to support; list of
standards for which there is required compliance; list of non-functional requirements: usability levels,
performance indicators, security aspects, etc.
Test condition - an item or event of the system that could be verified by one or more Test Cases, e.g.
a function, transaction, feature.
Test analysis and design – the process of identifying the test conditions for each test objective and
creation of Test Cases that exercise the identified test conditions.
Test basis – All documents from which the requirements of a component or system can be inferred;
the documentation on which test cases are based. May consist of functional specifications, user
requirements, business scenarios, use cases.
Test Case – a set of test inputs, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or verify compliance with a specific requirement. A test case has components that describe an input, action or event and an expected response, to determine whether a feature of an application is working correctly.
Test Suite – a collection of one or more Test Cases for the software under test, where the post-condition of one test is often used as the precondition for the next one.
Test data - Data that exists (for example, in a database) before a test is executed, and that
affects or is affected by the component or system under test.
Test item - The individual element to be tested. There usually is one test object and many test
items.
Effective Test Case - Test Case designed to catch specific faults and that has a good possibility of
revealing a defect.
Test Case design technique – a method used to derive or select a good set of tests from the total
number of all possible tests for a given system.
Use case - describes a sequence of actions performed by a system to provide an observable result of
value to a person or another system using the product under development.
Use case scenario - an instance of a use case, or a complete "path" through the use case.