
Test Case Design

Version: 01.00
Developed by: Date:
Revised by: Date:
Approved by: HOT – Olga Barbu Date:
Test Case Design

Table of Contents
1 Revision history ................................................................................................ 4
2 Introduction ..................................................................................................... 5
3 Objectives ........................................................................................................ 5
4 The Best Practices for Test Design ................................................................... 5
4.1 Test Basis Reviewing ...................................................................................... 6
4.2 Test Scenario Identification ........................................................................... 7
4.3 Test Case Preparation .................................................................................... 9
4.4 Test Data ...................................................................................................... 11
4.5 Test Coverage ............................................................................................... 13
4.6 Test Environment Set-up: Identify the Required Infrastructure and Tools .. 15
5 General Guidelines in writing Test Cases ........................................................ 17
5.1 Test Case Writing Best Practices .................................................................. 17
5.2 Test Case Attributes ..................................................................................... 30
5.3 Test Case Design Styles ................................................................................ 31
5.4 Test Case Writing Style ................................................................................ 32
5.5 Reuse of Test Cases ...................................................................................... 34
5.6 Test Cases Prioritization ............................................................................... 36
5.7 Test Cases Maintenance ............................................................................... 38

IN YOUR ZONE 2
Test Case Design

5.8 Test Cases for Automation ........................................................................... 38
5.9 Test Suites .................................................................................................... 39
Define a Common Structure and Templates for Creating Tests ........................ 41
Define Test Script Naming Conventions ............................................................ 41
Design Modular Scripts ...................................................................................... 41
Design Reusable Scripts ..................................................................................... 42
Make Test Scripts Independent of the Operating Environment ....................... 42
Make Test Scripts Independent of Test Data .................................................... 42
9 Business Logic Based Test Design ................................................................... 44
9.1 Test Case Elaboration ................................................................................... 44
9.2 Example of Test Cases for a specific Use Case ............................................. 46
10 Functional analysis ........................................................................................ 50
11 User Interface Test Design ............................................................................ 51
14 Bibliography .................................................................................................. 53
15 Glossary ......................................................................................................... 54


1 Revision history

Revision    Date of revision    Description of modifications    Author
01.01       20.02.2017          Initial version of document


2 Introduction
This document defines the best practices recommended for the Test Analysis and Design discipline at
Endava. The target audience is Test Engineers interested in or involved in Test Analysis and Design
activities.

3 Objectives
The purpose of this document is to help Test Engineers create effective Test Cases and to present the best practices of Test Case design. General recommendations are given on how to generate Test Cases from Use Cases.

4 The Best Practices for Test Design

After the Master Test Plan is approved, the testing team moves to the next phase, 'Test Design'. During this phase, the activities listed below are carried out.


 Test Basis Reviewing
 Test Scenario Identification
 Test Case Preparation
 Test Data Preparation
 Test Coverage
 Test Environment Set-up: design the test environment and identify the required infrastructure and tools

4.1 Test Basis Reviewing


The test basis is the information we need in order to start the test analysis and create our own test
cases. Basically, it is the documentation on which test cases are based, such as requirements, design
specifications, product risk analysis, architecture and interfaces. We can use the test basis
documents to understand what the system should do once built. Reviewing the test basis helps prevent
defects from appearing in the code at execution time, because all the requirements are translated
into testable items.
At this point:

1. The Tester should ask questions, contribute feedback and make suggestions. It is important to analyse the
documentation and ask about unclear requirements.
2. The Tester should be proactive.


4.2 Test Scenario Identification


Test Scenario is made up of two words, Test and Scenario, where Test means to verify or validate
and Scenario means a user journey. Combined, it means verifying a user journey. It is also called a Test
Condition or Test Possibility, meaning any functionality that can be tested.

A Test Scenario is what is to be tested, while a Test Case is how it is to be tested.

Example
Test Scenario: Validate the login page
 Test Case 1: Enter a valid username and password
 Test Case 2: Reset your password
 Test Case 3: Enter invalid credentials
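As a rough illustration, the scenario-to-case relationship above can be captured in a simple data structure; the structure and names below are purely illustrative, not a prescribed format:

```python
# A hypothetical sketch: one test scenario ("what to test") grouping
# several test cases ("how to test"). Names are illustrative only.
login_scenario = {
    "scenario": "Validate the login page",
    "test_cases": [
        "Enter a valid username and password",
        "Reset your password",
        "Enter invalid credentials",
    ],
}

# Each test case stays traceable back to the scenario it was derived from.
for case in login_scenario["test_cases"]:
    print(f'{login_scenario["scenario"]} -> {case}')
```

Keeping this derivation explicit is what later makes scenario-level coverage easy to check.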


Exhaustive testing is not possible due to the large number of data combinations and the large number
of possible paths in the software. Scenario testing makes sure that the end-to-end functionality of the
application under test is working as expected and that all business flows work as expected. As
scenarios are nothing but user journeys, in scenario testing the tester puts themselves in the end
user's shoes and performs actions just as a real user of the application under test would.

The preparation of scenarios is the most important part. The tester needs to consult or take help
from the Client, Business Users, BAs (Business Analysts) or Developers. Once the test scenarios are
determined, test cases can be written for each scenario. Test scenarios are the high-level concept of
what to test.

So…

3. The Tester must have a good understanding of the business and functional requirements of the application.
Scenarios are very critical to the business, as test cases are derived from test scenarios, so any miss in a
Test Scenario leads to missing Test Cases as well. That is why the scenario writer plays an important
role in project development: a single mistake can lead to a huge loss in terms of cost and time.
4. The Tester must have gone through the requirements carefully. In case of any doubts or needed
clarification, the POCs (Points of Contact) should be contacted.
5. Understand the project workflow and the wireframes (if available) and relate them to the
requirements.

Things to note while writing Test Scenario:


1. Test Scenarios should be reviewed by the Product Manager/Business Analyst or anyone else who
understands the requirements really well.
2. Domain knowledge is important to get a deeper understanding of the application.
3. Test scenarios must cover negative and out-of-the-box testing with a 'Test to Break' attitude.
4. Scenario mapping should be done to make sure that each and every requirement is directly mapped to
a number of scenarios. It helps avoid any miss.
5. Ensure that every identified scenario is a story in itself.
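The scenario-mapping advice above (point 4) can be sketched as a quick completeness check; the requirement and scenario IDs below are made up for illustration:

```python
# Map each requirement to the scenarios that cover it (IDs are invented).
scenario_map = {
    "REQ-001": ["SC-01", "SC-02"],
    "REQ-002": ["SC-03"],
    "REQ-003": [],  # no scenario yet -> a coverage miss
}

# Any requirement with no mapped scenario is a gap to close before test
# cases are written, since test cases are derived from scenarios.
unmapped = [req for req, scenarios in scenario_map.items() if not scenarios]
print("Requirements without scenarios:", unmapped)
```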


4.3 Test Case Preparation

A Test Case, in simple terms, is a document which specifies the input, pre-conditions, set of execution
steps and expected result. A good test case is one which is effective at finding defects and also covers
most of the scenarios/combinations on the system under test.

A test case has components that describe an input, action or event and an expected response, to determine
whether a feature of an application is working correctly.

Test Case writing is an activity which has a great impact on the testing phase, and this makes test cases
an important part of the test execution process.
The Test Plan describes what to test, while a Test Case describes how to perform a particular test. It is
necessary to develop a test case for each test listed in the test plan.

Knowing how to write good test cases is extremely important, and it does not take too much effort
and time to write effective test scripts! You just need to follow certain guidelines while writing test cases.
Step 1: Detailed Study of the System Under Test
 Before writing test cases, it is very important to have detailed knowledge of the system you are
testing. It can be any application or any software. Try to get as much information as possible
through the available documentation, such as use cases, user guides and tutorials, or by getting
hands-on experience with the software itself.
 Gather all the possible positive scenarios, and also the odd cases which might break the system, such as
stress testing, uncommon combinations of inputs, etc.

Step 2: Written in Simple Language


 The test cases should be written in a simple and understandable language. They must be clear and
concise, as the author of a test case may not be the one who executes it.
 Exact and consistent names (e.g. of the forms or fields under test) must be used to avoid ambiguity.


Step 3: Test Case Template


A test case template is the document in which testers write their Test Cases.

Let us look at each parameter that should be included in good test cases:


 Test Case ID: The unique number given to a test case so that it can be identified. This field is defined by
the type of system we are testing.
 Test Case Name: This can contain:
 the name of the feature you are testing
 the requirement number from the specifications
 the requirement name as classified in the client's document
 It gives a more specific name, such as the particular button or text box to which the Test
Case relates. For each Test Case we will specify the Requirement it belongs to.
 Description: A summary of what the respective test case is going to do. It explains what
attribute is under test and under what conditions.
 Pre-Conditions: Any prerequisites or preconditions that must be fulfilled prior to executing the test
should be defined clearly.
Pre-conditions could be:
 a certain page that the user needs to be on
 certain data that should be in the system
 a certain action to be performed before the "execution steps" can be executed on that particular
system.
 Execution Steps: These are the steps to be performed on the system under test to get the desired
results. Steps must be defined clearly and must be accurate. They are written and executed in
numbered order. This is a very important part of the Test Case because it gives a clear picture of
what you are doing on the specific object. This is the navigation for the Test Case. For example:
1) Navigate to gmail.com
2) In the 'Email' field, enter the email of the registered user.
3) Click the 'Next' button.
4) Enter the password of the registered user.
5) Click 'Sign In'.
 Expected Result: This is the result of the execution steps performed. It specifies what the specification or
user expects from that particular action. Expected results should be stated clearly for each expectation of a
Test Case so that we can specify pass or fail criteria for each one.
 Actual Result: This field holds the actual outcome after the execution steps were performed on the
system under test. We test the actual application against each Test Case; if the outcome matches the
Expected Result we record it as 'As Expected', otherwise we write down what actually happened after
performing those actions.
 Status: It simply indicates the Pass or Fail status of that particular Test Case:
 Passed – the expected and actual results match.
 Failed – the expected and actual results do not match.


 Not Tested – the test case has not been executed.

 Comments: This column is for additional information. For example, if the status is set to "Not Tested",
the tester can give the reason in this column.
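As a sketch only, the template fields described above could be modelled as a small record type; the field names and the status rule below are illustrative assumptions, not a mandated format:

```python
from dataclasses import dataclass, field

# A minimal sketch of the test case template fields; names are illustrative.
@dataclass
class TestCase:
    case_id: str
    name: str
    description: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    comments: str = ""

    @property
    def status(self) -> str:
        # Status is derived: no actual result yet means the case was not run.
        if not self.actual_result:
            return "Not Tested"
        return "Passed" if self.actual_result == self.expected_result else "Failed"

tc = TestCase(
    case_id="TC-001",
    name="Login - valid credentials",
    description="Verify a registered user can sign in",
    preconditions=["User account exists"],
    steps=["Navigate to the login page", "Enter valid credentials", "Click 'Sign In'"],
    expected_result="User lands on the inbox page",
)
print(tc.status)   # before execution
tc.actual_result = "User lands on the inbox page"
print(tc.status)   # after execution, actual matches expected
```

Deriving the status from the expected/actual comparison keeps the pass/fail criterion unambiguous for each expectation.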

4.4 Test Data


Test Data is used to validate the entire application; it can be live data from production or data
prepared by the testing team. This data can be a combination of positive and negative data.
 Keep your data intact for any test environment: the best way to keep your valuable input data
collection intact is to keep personal copies of the same data. It may be in any format, such as inputs
to be provided to the application, or input files such as Word files, Excel files or photo files.
 Check that your data is not corrupted: before executing any test case on existing data, make sure
that the data is not corrupted and that the application can read the data source.
 Test data can be said to be ideal if, for the minimum size of data set, all the application errors get
identified. Try to prepare test data that covers all application functionality, without exceeding the
cost and time constraints for preparing the test data and running the tests.

Below, there are some suggestions:

 Test Data for White Box Testing


In white box testing, test data is derived from direct examination of the code to be tested. Test data may
be selected by taking into account the following things:


 It is desirable to cover as many branches as possible; testing data can be generated such that all
branches in the program source code are tested at least once
 Path testing: all paths in the program source code are tested at least once - test data can be designed
to cover as many cases as possible
 Negative API testing:
o Testing data may contain invalid parameter types used to call different methods
o Testing data may consist of invalid combinations of arguments which are used to call the
program's methods
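A minimal sketch of such negative API test data, built around a toy `divide` method invented for illustration; the data set mixes invalid parameter types with an invalid argument combination:

```python
# A toy method under test (invented for illustration): it should reject
# non-numeric arguments explicitly rather than fail in arbitrary ways.
def divide(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("divide() expects numeric arguments")
    return a / b

# Negative test data: invalid parameter types and an invalid combination.
invalid_inputs = [("1", 2), (1, None), ([], {}), (1, 0)]

for a, b in invalid_inputs:
    try:
        divide(a, b)
        print(f"divide({a!r}, {b!r}) unexpectedly succeeded")
    except (TypeError, ZeroDivisionError) as exc:
        print(f"divide({a!r}, {b!r}) correctly rejected: {type(exc).__name__}")
```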

 Test Data for Performance Testing


Performance testing is the type of testing which is performed in order to determine how fast system
responds under a particular workload. The goal of this type of testing is not to find bugs, but to eliminate
bottlenecks. An important aspect of performance testing is that the set of sample data used must be
very close to the 'real' or 'live' data used on production. The following question arises: 'OK, it's good
to test with real data, but how do I obtain this data?' The answer is pretty straightforward: from the
people who know it best – the customers. They may be able to provide some data they already have
or, if they don't have an existing data set, they may help you by giving feedback regarding what the
real-world data might look like. If you are in a maintenance testing project, you could copy data
from the production environment into the testing bed. It is good practice to anonymize (scramble)
sensitive customer data such as Social Security Numbers, Credit Card Numbers, Bank Details etc. while the
copy is made.
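The anonymization advice above can be sketched as follows; the record layout and the masking rule are assumptions for illustration only:

```python
import re

# A minimal scrambling sketch: mask all but the last four digits of a
# card-like number before production data enters the test bed.
def mask_digits(value: str) -> str:
    digits = re.sub(r"\D", "", value)          # keep digits only
    return "*" * (len(digits) - 4) + digits[-4:]

record = {"name": "Jane Doe", "card": "4111 1111 1111 1234"}
anonymized = {**record, "card": mask_digits(record["card"])}
print(anonymized["card"])   # only the last four digits survive
```

Real projects would scramble every sensitive field (names, SSNs, bank details) with a repeatable, auditable process, but the principle is the same.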

 Test Data for Security Testing


Security testing is the process that determines whether an information system protects data from malicious
intent. The set of data designed in order to fully test software security must cover the
following topics:

 Confidentiality: All the information provided by clients is held in the strictest confidence and is not
shared with any outside parties. As a short example, if an application uses SSL, you can design a set of
test data which verifies that the encryption is done correctly.
 Integrity: Determine that the information provided by the system is correct. To design suitable test
data you can start by taking an in depth look at the design, code, databases and file structures.
 Authentication: Represents the process of establishing the identity of a user. Testing data can be
designed as different combinations of usernames and passwords, and its purpose is to check that only
authorized people are able to access the software system.
 Authorization: Tells what the rights of a specific user are. Testing data may contain different
combinations of users, roles and operations in order to check that only users with sufficient privileges are
able to perform a particular operation.
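As an illustration of authorization test data, the sketch below checks combinations of roles and operations against an invented permission matrix; the roles, operations and matrix are all assumptions:

```python
# An invented permission matrix: which operations each role may perform.
permissions = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, operation: str) -> bool:
    # Unknown roles get no permissions at all.
    return operation in permissions.get(role, set())

# Each (role, operation, expected) triple is one authorization check.
cases = [
    ("viewer", "read", True),
    ("viewer", "delete", False),
    ("editor", "write", True),
    ("guest",  "read", False),   # unknown role: nothing is allowed
]

for role, op, expected in cases:
    assert is_allowed(role, op) == expected
print("all authorization checks passed")
```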


 Test Data for Black Box Testing


In Black Box Testing the code is not visible to the tester. Your functional test cases can have test data
meeting the following criteria:

 No data: Check the system response when no data is submitted
 Valid data: Check the system response when valid test data is submitted
 Invalid data: Check the system response when invalid test data is submitted
 Illegal data format: Check the system response when test data is in an invalid format
 Boundary Condition Data Set: Test data meeting boundary value conditions
 Equivalence Partition Data Set: Test data qualifying your equivalence partitions
 Decision Table Data Set: Test data qualifying your decision table testing strategy
 State Transition Test Data Set: Test data meeting your state transition testing strategy
 Use Case Test Data: Test data in sync with your use cases
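The Boundary Condition and Equivalence Partition data sets above can be sketched for a hypothetical input field accepting values from 1 to 100; the field and the just-below/on/just-above rule are illustrative assumptions:

```python
# Boundary Value Analysis sketch: for a range low..high, test just below,
# on, and just above each boundary.
def boundary_values(low: int, high: int) -> list:
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Equivalence partitions for the same hypothetical field: one
# representative value per partition is enough.
partitions = {"below_range": 0, "in_range": 50, "above_range": 101}

print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
print(sorted(partitions.values()))
```

Six boundary values plus three partition representatives replace hundreds of arbitrary inputs, which is exactly the point of these techniques.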

4.5 Test Coverage

After completing the test case design and before starting the test case execution, it is necessary to prepare
the Requirement Traceability Matrix (RTM). This RTM document is generally used to ensure Test Coverage.
First, all the high-level requirements should be drilled down into several low-level requirements. Then the
Test Cases should be mapped to all the low-level requirements. Every requirement should have a minimum
of one test case and a maximum of n test cases. If all the requirements are mapped to test cases, then
we can ensure Test Coverage. If some requirements do not have test case(s), the testing team
prepares the test cases for those particular requirements and maps them to the requirements before
starting the test case execution. In this way, the testing team will not miss any functionality from the
requirement specification during the testing phase.
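A minimal sketch of an RTM as a mapping from low-level requirements to test cases; all identifiers are invented for illustration:

```python
# An RTM sketch: low-level requirements mapped to the test cases that
# cover them. Unmapped requirements are the coverage gaps to close
# before execution starts.
rtm = {
    "LLR-1.1": ["TC-001", "TC-002"],
    "LLR-1.2": ["TC-003"],
    "LLR-2.1": [],
}

covered = [req for req, tcs in rtm.items() if tcs]
coverage = len(covered) / len(rtm) * 100
print(f"Requirement coverage: {coverage:.0f}%")   # 2 of 3 requirements covered
```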


Sample requirement Traceability Matrix document:

Maintaining traceability between the Test Basis and the Test Cases at every moment is good practice:
it allows us to calculate the requirements testing coverage.

 Write Test Cases before the implementation of the requirements.
 Write Test Cases for all the requirements.
 Test Cases should map precisely to the requirements and not be an enhancement to the requirement.


4.6 Test Environment Set-up: Identify the Required Infrastructure and Tools

These are the steps:

1. Understand the test requirements thoroughly and educate the test team members.
2. Check connectivity before the initiation of the testing.
3. Check for the required hardware, software and licenses.
4. Check the browsers and their versions.
5. Plan out the scheduled use of the test environment.
6. Set up the automation tools and their configurations.
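Step 2 above (connectivity) can be sketched as a small pre-test check; the host and port below are placeholders for your real environment, not actual endpoints:

```python
import socket

# A minimal pre-test connectivity check: verify the environment's
# host/port are reachable before any testing starts.
def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, refused connection or timeout all mean "not reachable".
        return False

print(is_reachable("test-env.invalid", 443))  # placeholder host -> False
```

Running such a check at the start of a test session turns an environment problem into an explicit, early failure instead of a string of confusing test failures.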

Summary:

 A testing environment is a set-up of software and hardware on which the test team will conduct
the testing.
 For the test environment, key areas to set up include:
o System and applications
o Test data
o Database server
o Front-end running environment, etc.
 A few challenges while setting up the test environment include:
o Remote environment
o Combined usage between teams
o Elaborate set-up time
o Ineffective planning of resource usage for integration
o Complex test configuration


5 General Guidelines in writing Test Cases

The main goal of Test Case development is to validate the testing coverage of the software product
under test. The following recommendations should be considered:

1. Write Test Cases before the implementation of the requirements.


The reason is that, during the process of developing Test Cases, issues in the requirements or design can
be revealed. In fact, Test Case design is considered a static testing technique that can be used to
discover defects.

2. Write Test Cases for all the requirements.


Note: Writing Test Cases requires a good understanding of the functionality/feature to be tested.

3. Test Cases should map precisely to the requirements and not be an enhancement to the
requirement.

The Test Analyst must use a predefined, standardized way of documenting the Test Cases. This
format will improve traceability and will also facilitate communication with the Test
Engineers who will execute the tests.

Each Test Case must define:

• Clear objective

• Detailed execution steps

• Clear and unambiguously defined expected results

The following are the factors that should be considered during the Test Case design and
development process:

5.1 Test Case Writing Best Practices


Regardless of the process or tool used to generate test cases, every QA team should have documented best
practices to help promote consistency in the quality of test cases and the resulting work produced.


Like any set of best practices, they will be most effective when you customize them to meet your needs.

Test Case Writing is an activity which has a solid impact on the whole testing phase. This fact makes the task of
documenting TCs very critical and subtle. So it should be properly planned first and must be done in a well-
organized manner. The person who is documenting the TCs must keep in mind that this activity is not for him
or her only: the whole team, including other testers and developers, as well as the customer, will be directly
and indirectly affected by this work.

There are some important and critical factors related to this major activity. Let us have a bird's eye view of
those factors first.

a. Test Cases are prone to regular revision and update:


Whenever requirements are altered, TCs need to be updated. Yet it is not only a change in requirements
that may cause revision and updates to TCs.
During the execution of TCs, many new ideas arise, and many sub-conditions of a single TC cause updates
or even the addition of TCs. Moreover, during regression testing several fixes and/or ripples demand revised or
new TCs.

b. Test Cases are prone to distribution among the testers who will execute them:
Normally there are several testers who test different modules of a single application. So the TCs are divided
among them according to the areas of the application under test that they own.

c. Test Cases are prone to clustering and batching:


It is normal and common that TCs belonging to a single test scenario demand execution in a specific
sequence or as a group. Some TCs may be prerequisites of other TCs. Similarly,


according to the business logic of the AUT, a single TC may contribute to several test conditions and a single
test condition may consist of multiple TCs.

d. Test Cases have tendency of inter-dependence:


In medium to large applications with complex business logic, this tendency is more visible.

The clearest area of any application where this behaviour can be observed is the interoperability
between different modules of the same or even different applications.

e. Test Cases are prone to distribution among developers (especially in a TC-driven development
environment):
An important fact about TCs is that they are not only utilized by the testers. Normally, when a bug
is being fixed, the developers indirectly use the TC to fix the issue. Similarly, where TC-driven
development is followed, TCs are directly used by the developers to build their logic and cover, in their
code, all the scenarios addressed by the TCs.

So, keeping these factors in mind, here are some tips to write test cases:

 Identify The Scope And Purpose Of Testing


The first key point before starting to write effective test cases is to identify the testable
requirements. You need to understand the purpose of testing, and you must understand the features
and the user requirements.

 Do not Assume

Do not assume the functionality and features of your software application while preparing test cases.
Stick to the specification documents.

 Implement Testing Techniques

It is not possible to check every possible condition in your software application. Testing techniques help you
select the few test cases with the maximum likelihood of finding a defect.

Boundary Value Analysis (BVA): As the name suggests, this technique tests the boundaries of a specified
range of values.

Equivalence Partitioning (EP): This technique partitions the range into parts/groups that tend to exhibit the
same behaviour.

State Transition Technique: This method is used when the software behaviour changes from one state to another
following a particular action.


Error Guessing Technique: This is guessing/anticipating errors that may arise while testing. It is not a
formal method and takes advantage of a tester's experience with the application.
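The State Transition Technique above can be sketched with a hypothetical login flow; the states, events and transition table are invented for illustration:

```python
# Valid transitions for a hypothetical login flow; any (state, event)
# pair not listed here leaves the state unchanged.
transitions = {
    ("logged_out", "valid_login"):   "logged_in",
    ("logged_out", "invalid_login"): "logged_out",
    ("logged_in",  "logout"):        "logged_out",
}

def next_state(state: str, event: str) -> str:
    return transitions.get((state, event), state)

# Walk one user journey through the state machine.
state = "logged_out"
for event in ["invalid_login", "valid_login", "logout"]:
    state = next_state(state, event)
    print(event, "->", state)
```

Test data for this technique is then just a set of (state, event, expected state) triples, one per transition worth exercising.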

 Test Cases need to have simple steps and be transparent and easy to understand.

The steps in the test cases should be detailed and to the point, so that a new tester can execute the test
case with ease. The purpose and scope of a test case should be well defined in the test case itself, i.e. test
cases should be self-explanatory. Along with the detailed steps, all pre-requisite test data should be specified
in the test case itself. Test cases should be reviewed by peer members.

 Test Cases Should Be Valid, Brief And Short


Test cases should have only necessary and valid steps. If a single test case has too many test steps to be
performed, the concentration and aim of the test case may be lost. So a single test case should have one
expected result and not try to cover too many expectations. Create test cases thinking from the end
user's point of view. Do not repeat test cases; if the same steps need to be executed, reference the
test case ID in the pre-requisite step.

 Test Cases Should Be Traceable


Each and every test case should be traceable and should be linked to a 'Requirement ID'. This helps to
check 100% coverage of the requirements and to verify that the tester is performing testing for
all requirements and that no requirement is missed. One more key advantage of linking a
'Requirement ID' to a test case is that, if any requirement gets changed, based on this mapping we
can easily do the impact analysis and change the impacted test cases accordingly.
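The impact analysis described above can be sketched as a reverse lookup over the Requirement ID link; all IDs below are invented for illustration:

```python
# Each test case is linked to the requirement it verifies (IDs invented).
tc_to_req = {
    "TC-001": "REQ-010",
    "TC-002": "REQ-010",
    "TC-003": "REQ-020",
}

# When a requirement changes, the link yields the test cases to revisit.
def impacted_test_cases(changed_req: str) -> list:
    return sorted(tc for tc, req in tc_to_req.items() if req == changed_req)

print(impacted_test_cases("REQ-010"))
```

With this mapping in place, a requirement change becomes a mechanical lookup rather than a manual hunt through the test suite.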

 Test Cases Should Be Maintainable


Test cases should be written in such a way that they are easy to maintain. Consider a scenario where,
after the test cases have been written, the requirement gets changed; the tester should then be able to
maintain the test suite effortlessly. Each test case should have a unique identification number, which
helps to link the test cases with defects and requirements.

 Define How To Perform Testing Activities


Before starting to write effective test cases, you should first define effective test scenarios, and to
define effective test scenarios you should have a good understanding of the functional requirements of
an application. You must know which operations can be used, so that the test scenarios and test
cases can be written based on these important guidelines.


 Implement Testing Techniques – Positive And Negative Test Cases


While writing test cases, apply test design techniques such as equivalence class partitioning,
boundary value analysis, and normal and abnormal conditions. Also consider negative testing, failure
conditions and error handling, which help uncover the most probable defects in the code. Don't
assume any functionality; write the test cases with reference to the requirement specification
document.
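As an illustration of boundary value analysis, the sketch below derives the classic boundary picks for a hypothetical numeric field accepting 1 to 100 (the field and its range are invented, not from the document):

```python
# Illustrative sketch: boundary value analysis for a numeric input field.
# The quantity field and its 1..100 range are made-up examples.
def boundary_values(lo, hi):
    """Classic BVA picks: just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_quantity(q, lo=1, hi=100):
    """The validation rule under test (an assumed implementation)."""
    return lo <= q <= hi

for q in boundary_values(1, 100):
    print(q, is_valid_quantity(q))
# 0 and 101 fall in invalid equivalence classes; 1, 2, 99 and 100 are valid.
```

Six values per numeric range give high defect-detection probability at the edges, where off-by-one errors cluster.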

 Preparation Of Test Data For Diversity Of Test Cases


A variety of test data (as prerequisites) is often required for executing the test cases: valid data
for positive test cases, invalid data for negative test cases, borderline data for boundary value
conditions, and illegal or abnormal data for error-handling and recovery test cases.
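A minimal sketch of such a diverse data set, grouped by category. The validation rule here (a 2-6 character uppercase alphanumeric code) is a made-up example, not a rule from the document:

```python
import re

# Hypothetical sketch: one validation rule exercised with diverse test data.
# The rule (2-6 uppercase alphanumeric characters) is invented for illustration.
CODE_RE = re.compile(r"^[A-Z0-9]{2,6}$")

test_data = {
    "valid":    ["AB12", "ZZZZZZ"],   # positive test cases
    "invalid":  ["ab12", "A B1"],     # negative test cases
    "boundary": ["AB", "A"],          # at and just below the minimum length
    "abnormal": ["", "\x00\x00"],     # error-handling / recovery cases
}

for category, values in test_data.items():
    for value in values:
        print(category, repr(value), bool(CODE_RE.match(value)))
```

Keeping the data labelled by category makes it obvious which kind of test case each value belongs to.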

 Preparation Of Non-Functional Testing Aspect


A separate set of test cases should be prepared for basic performance testing, such as multi-user
operations and capacity testing. For web application testing, also cover browser support. Test cases
for security aspects should be covered as well, for example user permissions, session management and
logging methods.

 Preparation Of Usability Testing Aspect


Test cases should cover usability aspects in terms of ease of use. They should check the overall
style and colour of the page against the signed-off mock-up designs, if any, or against the overall
style guidelines. Basic spelling, English grammar and the population of dependent drop-down list
values should also be covered under usability testing.

 After Documenting Test Cases, Review Once Again


Go back to the start and review all the TCs once more. Think rationally and try to dry-run your TCs.
Evaluate whether all the steps you have written are clearly understandable and the expected results
are in harmony with those steps.
Check that the test data specified in the TCs is feasible not only for actual testers but also
matches the real environment. Ensure that there are no dependency conflicts among TCs, and verify
that all references to other TCs/artefacts/GUIs are accurate; otherwise testers may run into serious
trouble.

 Never Forget the End User


The most important stakeholder is the 'End User', who will actually use the AUT. So never lose sight
of the end user at any stage of TC writing; in fact, the end user should not be ignored at any stage
of the SDLC. During the identification of test scenarios, never overlook cases that the user will
exercise most often, or that are business critical even if rarely used.

 Define The Test Case Framework

The test case framework can be designed depending on the system's requirements. Typically, the test
cases should cover several categories: functionality, compatibility, user interface, fault tolerance
and performance.

 Repeatable And Self-Standing

The test case should generate the same results every time, no matter who executes it.

5.2 New Feature Coverage


Keep these tips in mind when analysing your test coverage for new features.

 Establish good feature coverage. Consider the following when trying to achieve good coverage
of a feature:
o The importance of the feature and how often the end user or customer may use it.
o The commonality of the feature. If a feature includes basic functionality that is
commonly performed, then including other features in the test case will make better use
of the time spent testing the feature.
 Plan for regression testing. Keep in mind that new feature test cases may be used to perform
future regression testing after the initial release. Test cases covering functionality that will not
require testing going forward need to be identified as such, so that those tests aren't included in
future releases. Optionally, instead of writing a 'run once' test case, create a checklist to test the
functionality.
 Consider checklists. Checklists can be written to test certain types of new features, like those
that require a large amount of setup and configuration but don't need many steps to execute the
test. Checklists are also useful for testing features that require in-depth validation of voluminous
data, fields, security options, and items that are only tested once.

IN YOUR ZONE 22
Test Case Design

5.3 Test Case Format

Firstly, there are levels into which each test case will fall, in order to avoid duplication of
effort:
Level 1: At this level you write the basic test cases from the available specification and user
documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional
and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test
procedure is simply a group of small test cases, at most 10.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can
focus on testing the currently updated functionality rather than remaining busy with regression
testing.
This gives a systematic progression from no testable items to an automation suite.

5.4 Test Case Fields

Test Case – a Test Case consists of the following components which describe a Test Item under test:
(Note: Mandatory fields are marked with a red asterisk)

1. Test Case Name: Should be unique.

Use the same naming convention for all Test Cases in a project.

Include "TC" + Test Case identifier and Title – a high-level description of the functionality under
test (a short phrase describing the test scenario).

A useful way of formulating a test case title is the Action Target Scenario method, where:
Action – a verb that describes what you are doing, some good examples are: create, delete,
ensure, edit, open, populate, observe, login, etc.
Target – the focus of your test, usually a screen, object, entity, program, etc.

IN YOUR ZONE 23
Test Case Design

Scenario – The rest of what your test is about and how you distinguish multiple test cases for the
same Action and Target.
Example 1: Create – Task – title is not supplied
Create – Task – title is the maximum allowable length

The Test Case identifier should be unique not only within the concrete Test Suite which contains
that Test Case, but also within all Test Suites of the test project.
The covered Use Case ID/ User Story ID/requirement ID should be part of the Test Case identifier.
Example 2:
The "TC01.01.02.03 Create duplicate user" Test Case is part of the "TS 01.01.02 Create
user – negative approach" Test Suite.

First part, "01.01" – the unique Use Case ID/User Story ID/requirement ID covered by the Test
Suite (it is also part of the Test Suite name);
Second part, "02" – the Test Suite unique number;
Third part, "03" – the Test Case unique number within the Test Suite.

When adding a new Test Case to the Test Suite, the new Test Case unique number is the maximum
third-part number among the existing IDs, plus 1.
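The numbering rule above ("maximum existing third part plus 1") can be sketched as follows; the helper name and ID values are illustrative:

```python
# Sketch of the ID-assignment rule: the new Test Case number is the maximum
# third-part number among the suite's existing IDs, plus one.
def next_tc_id(suite_id, existing_ids):
    """existing_ids like ['TC01.01.02.01', 'TC01.01.02.03']."""
    last_parts = [int(tc.rsplit(".", 1)[1]) for tc in existing_ids]
    next_num = max(last_parts, default=0) + 1
    return f"TC{suite_id}.{next_num:02d}"

print(next_tc_id("01.01.02", ["TC01.01.02.01", "TC01.01.02.03"]))
# → TC01.01.02.04
```

Note that gaps in the sequence (e.g. a deleted TC .02) are preserved rather than re-used, which keeps old defect references valid.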

Example 3: For use case "UC 01.01 Create user" one parent Test Suite is created, TS 01.01 Create
User, with two child Test Suites:
TS 01.01.01 Create user – positive scenario, which includes:
- TC01.01.01.01 Create user – alphanumeric characters for username and password,
- TC01.01.01.02 Create user – capital letters in username and password,
- TC01.01.01.03 Create user – max length fields with mixed alphanumeric and capitals for username
and password;
TS 01.01.02 Create user – negative approach, which includes:
- TC01.01.02.01 Create user without username,
- TC01.01.02.02 Create user without password,
- TC01.01.02.03 Create duplicate user,
- TC01.01.02.04 Create user – username exceeds max length,
- TC01.01.02.05 Create user – password exceeds max length,
etc.
Note: If Test Case covers the integration between several requirements/use cases, these must be
listed in the Test Case description field and only the main Use Case ID used in Test Case name.

2. Test description (not mandatory, but strongly recommended): Describe the purpose of the test,
what test condition will be verified by the Test Case.
If Test Case covers the integration between several requirements/use cases, these will have to be
listed.

3. Environment: List specific hardware, software and network configuration that needs to be set up
for the test.

4. Preconditions: These are required when some additional preparation is necessary before
executing the Test Case: software configuration, hardware configuration and resources needed,
security access, tools.

5. Execution Steps: Detailed description of every step of execution.


Define one single action per execution step.

Format of Test Steps


Each step can be written very tersely using the following keywords:
login [as ROLE-OR-USER]
Log into the system with a given user or a user of the given type. Usually only stated
explicitly when the test case depends on the permissions of a particular role or involves a
workflow between different users.
visit LOCATION
Visit a page or screen. For web applications, LOCATION may be a hyperlink. The location
should be a well-known starting point (e.g., the Login screen), drilling down to specific pages
should be part of the test.
enter FIELD-NAME [as VALUE] [in SCREEN-LOCATION]
Fill in a named form field. VALUE can be a literal value or the name of a variable defined in
the "Test Data" section. The FIELD-NAME itself can be a variable name when the UI field for
that value is clear from context, e.g., "enter password".
enter FIELDS
Fill in all fields in a form when their values are clear from context or when their specific
values are not important in this test case.
click "LINK-LABEL" [in SCREEN-LOCATION]
Follow a labelled link or press a button. The screen location can be a predefined panel name
or English phrase. Predefined panel names are based on GUI class names, master template
names, or titles of boxes on the page.
click BUTTON-NAME [in SCREEN-LOCATION]
Press a named button. This step should always be followed by a "see" step to check the
results.
see SCREEN-OR-PAGE

The tester should see the named GUI screen or web page. The general correctness of the
page should be testable based on the feature description.
verify CONDITION
The tester should see that the condition has been satisfied. This type of step usually follows
a "see" step at the end of the test case.
verify CONTENT [is VALUE]
The tester should see the named content on the current page, the correct values should be
clear from the test data, or given explicitly. This type of step usually follows a "see" step at
the end of the test case.
perform TEST-CASE-NAME
This is like a subroutine call. The tester should perform all the steps of the named test case
and then continue on to the next step of this test case.
Every test case must include a verify step at the end so that the expected output is very clear. A test
case can have multiple verify steps in the middle or at the end. Having multiple verify steps can be
useful if you want a smaller number of long tests rather than a large number of short tests.
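Purely as an illustration, a keyword vocabulary like the one above could back a tiny step runner. The handler behaviour and the in-memory "app" state below are invented for this sketch and are not part of any real tool:

```python
# Hypothetical sketch of a minimal runner for keyword-style test steps.
# The keywords mirror the vocabulary above; the state model is invented.
def run_step(step, app):
    action, _, arg = step.partition(" ")
    if action == "login":
        # "login as ROLE-OR-USER" -> record the active role
        app["user"] = arg.removeprefix("as ").strip() or "default"
    elif action == "visit":
        # "visit LOCATION" -> navigate to a well-known starting point
        app["page"] = arg
    elif action == "enter":
        # "enter FIELD as VALUE" -> fill in a named form field
        field, _, value = arg.partition(" as ")
        app.setdefault("fields", {})[field] = value
    elif action in ("click", "see", "verify", "perform"):
        app.setdefault("log", []).append(step)  # stubbed out in this sketch
    return app

app = {}
for step in ["login as admin", "visit Login screen",
             "enter username as admin"]:
    run_step(step, app)
print(app["user"], app["page"])
```

The value of such a terse vocabulary is that manual steps map one-to-one onto automatable actions later.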

6. Expected Results*: The description of what one expects the function to do.

Expected Results are the information we use to decide whether a Test Case passed or failed (by
comparing them to Actual Results). Additionally, expected results may be added to individual steps
when needed for clarity.

The following recommendations regarding Expected Results should be considered:

6.1 Precision and clarity play a critical role in defining Expected Results.

Example:

The Expected Result states: "Verify if error message is displayed."

When executing the Test Case, what if the error message says "Please provide postcode" while it
should say "Your postcode is invalid"?

It is not strictly required to provide the exact text of an error message, because this text can
often change; so, for maintainability purposes, it is recommended to define the Expected Result as:
"Verify that the error message about an invalid postcode is displayed".

Note: If the customer requires a concrete message to be displayed and it is part of the requirements,
then it must be included in Expected Results for verification.

6.2 Number of Expected Results per Test Case:

1. Each Test Case checks only one testing idea, but two or more expected results are totally
acceptable if there is a need to perform several verifications for that testing idea.

Example: Test Idea: "Payment can be performed by MasterCard credit card".

To verify this test idea, the system should meet two expected results:

1. In the DB cc_transaction table, the value "1" is registered in the "MasterCard" column.

2. The credit card balance is reduced by an amount equal to the amount of the payment.

There are two options:

1. Split the test idea in two separate ideas and create two Test Cases.

2. Don't change the test idea and have two Expected Results in one Test Case. The Test Case would
pass if (and only if) both Actual Results match the corresponding Expected Results. In all other cases,
the Test Case would fail.
From a practical point of view, there are many situations in which the second approach saves time
and effort in Test Case creation and maintenance.
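A sketch of the second option: one test idea with two expected results, passing only if both assertions hold. The `pay_by_mastercard` function and the fake in-memory "DB" are stand-ins invented for this illustration (only the table and column names come from the example above):

```python
# Sketch: one test idea ("payment by MasterCard works") verified by two
# expected results. The payment function and "DB" are invented stand-ins.
def pay_by_mastercard(db, card, amount):
    db["cc_transaction"].append({"MasterCard": 1, "amount": amount})
    card["balance"] -= amount

db = {"cc_transaction": []}
card = {"balance": 500}
pay_by_mastercard(db, card, 120)

# Expected Result 1: a MasterCard transaction row is registered.
assert db["cc_transaction"][-1]["MasterCard"] == 1
# Expected Result 2: the card balance is reduced by the payment amount.
assert card["balance"] == 380
print("both expected results met - test passes")
```

The test passes only if both assertions hold; a failure of either one fails the whole Test Case, exactly as option 2 prescribes.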

6.3 Expected Results meet the Test Case purpose.


Example:
First Test Case: Test that Customer User is able to create an order.
Second Test Case: Test that Customer User is able to create keywords.
An example where expected results are defined for each execution step, even though they don't serve
the Test Case purpose:
TC ID: TC01.01 Verify Customer User is able to create order
Execution Steps:
1. Login as Customer User.
2. Navigate to Create Order page (http://test/CreateOrder.aspx).
3. Complete all fields with valid data. Submit data.
4. Navigate to Orders List page (http://test/Order.aspx).
5. Verify that the created order is present in the orders list.
Expected Results:
1. Successful login.
2. Create Order page is displayed.
3. Info message is displayed that the order is successfully created.
4. Orders list page is displayed.
5. Newly created order is displayed in the orders list.

TC ID: TC02.01 Verify Customer User is able to create keyword
Execution Steps:
1. Login as Customer User.
2. Navigate to Create Keyword page (http://test/CreateKeyword.aspx).
3. Complete all fields with valid data. Submit data.
4. Navigate to Keywords List page (http://test/Keywords.aspx).
5. Verify that the created keyword is present in the keywords list.
Expected Results:
1. Successful login.
2. Create Keyword page is displayed.
3. Info message is displayed that the keyword is successfully created.
4. Keywords list page is displayed.
5. Newly created keyword is displayed in the keywords list.

Expected results for the "Login" and "Navigate to..." steps are not required in the above Test
Cases, as the purpose of the tests is to verify that the user is able to successfully create
orders/keywords. Login and page display should be verified in separate Test Cases.

Good example:
TC ID: TC01.01 Verify Customer User is able to create order
Execution Steps:
1.1 Login as Customer User.
1.2 Navigate to Create Order page (http://test/CreateOrder.aspx). Complete all fields with valid
data. Submit data.
2.1 Navigate to Orders List page (http://test/Order.aspx).
2.2 Verify that the created order is present in the orders list.
Expected Results:
Info message is displayed that the order is successfully created.
Newly created order is displayed in the orders list.

TC ID: TC02.01 Verify Customer User is able to create keyword
Execution Steps:
1.1 Login as Customer User.
1.2 Navigate to Create Keyword page (http://test/CreateKeyword.aspx). Complete all fields with
valid data. Submit data.
2.1 Navigate to Keywords List page (http://test/Keywords.aspx).
2.2 Verify that the created keyword is present in the keywords list.
Expected Results:
Info message is displayed that the keyword is successfully created.
Newly created keyword is displayed in the keywords list.

7. Test Data: Test data required for executing the test case.

8. Additional Test Case Elements


Including the following content in your test cases can provide better visibility into both the
requirements and the intent of the test, helping to ensure optimal test results.

Additionally, depending on the test management tool used, a Test Case can include:
1. Assigned keywords. It is recommended that every Test Case has at least one value assigned from
the following sets of keywords:
- Complexity (indicates the size and effort of a Test Case; it can be used for test effort
estimation).

- Functionality (useful especially when similar functionality is implemented in different
application modules. If that functionality changes, the keyword helps the Test Engineer quickly
identify all affected Test Cases, and also helps decide which Test Cases are required for a specific
test execution round. Example of keyword: Advanced Search).
- Test Plan Type.
The following Test Plan types are typically used in Endava projects:
- Smoke Test Plan, which includes Test Cases commonly run on each new build to catch regressions;
they cover critical-path Test Cases that target features and functionalities that the user sees and
uses frequently; all components and all features should be briefly tested; these Test Cases are
usually automated.
- Basic Functional Test Plan, which includes Test Cases run before every release; they cover the
minimum basic functionality of the system under test.
- Detailed Test Plan, which includes Test Cases run before a major release of the system; they cover
detailed aspects of the system's functionality.
The keywords help to determine whether or not to run a Test Case at a specific test execution phase,
and to easily identify all test cases that cover a certain area.

2. Attachments: The Attachment option is used when appropriate: large information/descriptions,
requirement extracts, test data such as files to be used in import functionality verification, etc.
Adding links to each requirement being validated or tested makes it easier to find and review the
requirement and understand the intent of the test. It also establishes traceability for auditing and
internal review purposes.

3. Additional customizable fields. The additional fields are used as per project needs. Some of the
recommended ones are:
Editable upon Creation: Type (Manual|Automated); Estimated Execution Time, Priority.
Editable upon Execution: Browser; Actual Execution time.

4. Set up and clean up. Test cases should include set-up and clean-up information as needed. If
precondition fields are used, they should be concise and only include useful information that
describes the conditions that must be met before performing the test.

5. A Scope field. Use the Scope field to capture information about test boundaries, related features,
additional information about the test, and included or excluded items (for example, operating
systems, database types, client types, and release or sprint specific tests).

6. Examples and screenshots. Test cases should contain examples and screenshots, where needed,
to help testers better understand how to execute the test and what they should be seeing. It may
also be helpful to include examples of commands to run, a list of directories where files on specific
operating systems can be found, and any other examples that may aid testers.

5.5 Test Case Attributes

The designed Test Cases should strike the best possible balance between being:

- Effective: Have a high probability of detecting errors.

- Non-redundant: Be practical and have low redundancy. A given feature or functionality should not
be tested repeatedly in different Test Cases; two Test Cases should not find the same defect.

- Clear: Clear flow of events; clear correspondence between execution steps and expected results;
unambiguously defined execution steps and expected results.

- Detailed: Test Cases should contain detailed steps that are needed to test a particular function; no
missing execution steps; no unnecessary execution steps.

- Accurate: Test Cases should be free of flaws such as spelling mistakes, and should use the
system's exact functionality/GUI names.

- Short and simple language: The Test Case should be short rather than lengthy and should be written
in simple language, so that anyone can understand the scope of each Test Case.

- Evolvable: Test Cases should be well structured and maintainable; neither too simple nor too
complex; with separate Test Cases for positive and negative scenarios. The Test Case is the basic
unit of testing, and it is general practice to estimate test effort using the Test Case enumeration
technique. Generally, it is recommended to limit Test Cases to 15 execution steps.

- Complete: Test Cases should cover all the features/functionalities that have to be tested:

 Test Cases have been developed for all requirements. Test Cases cover the whole described
functionality, not just a part of it.
 Test Cases have been developed for all non-functional requirements.
 Test Cases have been developed to cover any changed requirement.
 Test Cases have been developed for all basic flows in use cases.
 Test Cases have been developed for all alternative flows in use cases, positive and negative
testing, boundary and forced error (if applicable).
 Test Cases take into account the Use Case preconditions and the include/exclude dependency
relations in Use Case diagrams.
 Test Cases have been developed to cover the dependencies between Use Cases.
 Test Cases have been developed for all major/critical issues found in previous releases and
for all issues reported by the stakeholders.

- Enable traceability: each Test Case can be traced back to a requirement/use case. References from
Test Cases to a requirement/use case are very helpful to:

 Quickly identify which coverage item(s) a Test Case is covering, for example if the execution of
the Test Case provokes a failure.
 Quickly identify requirements/use cases that are not covered with Test Cases yet.
 Quickly identify the Test Case(s) that may be affected if a coverage item, let's say a
requirement, is changed.

- Repeatable: The result of the Test Case should be always the same, no matter how many times it
has been executed before.

- Self-cleaning: returns the test environment to a clean state. For example, if the test requires a
date change on the database server, the date should be reset to the correct one after the test is
completed.

5.6 Test Case Design Styles


Test Cases can be designed in:
1. Cascading style: Test Cases built on each other. For example, the first Test Case exercises a
particular feature of the software and then leaves the system in a state such that the second Test
Case can be executed.
Example:
- Create entity Test Case
- Search entity Test Case
- Update entity Test Case
- Delete entity Test Case
Advantages: Test Cases are simpler and smaller. The output of one Test Case becomes the input of the
next. Arranging Test Cases in the right order saves time during test execution because redundancy
has been eliminated.
Disadvantages: If one Test Case fails, the subsequent tests may be invalid or blocked.

2. Independent style: each Test Case is self-contained, does not rely on any other Test Cases. A
good way to check the independence of Test Cases in a Test Suite is to change the order in which the
Test Cases are executed.
Advantages: Any number of Test Cases can be executed in any order.
Note: Independence is a characteristic of a good Test Case.
Disadvantages: Larger and more complex Test Cases, harder to design, create and maintain.

The chosen approach should fit the project needs, time constraints and the audience (e.g. team size).
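The two styles can be contrasted with a toy example; the `Repo` class and its API are invented for this sketch:

```python
# Sketch contrasting cascading vs independent Test Cases using a toy
# in-memory repository. The Repo class is invented for illustration.
class Repo:
    def __init__(self):
        self.items = {}
    def create(self, key, value):
        self.items[key] = value
    def update(self, key, value):
        self.items[key] = value
    def delete(self, key):
        del self.items[key]

# Cascading style: each test leaves state behind for the next one.
repo = Repo()
repo.create("e1", "v1")          # Create entity Test Case
assert "e1" in repo.items        # Search entity Test Case
repo.update("e1", "v2")          # Update entity Test Case
repo.delete("e1")                # Delete entity Test Case (blocked if create failed)
assert "e1" not in repo.items

# Independent style: the update test builds its own precondition.
repo2 = Repo()
repo2.create("e1", "v1")         # setup step, not the behaviour under test
repo2.update("e1", "v2")
assert repo2.items["e1"] == "v2"
```

In the cascading run a failure in `create` invalidates every later step, while the independent test pays for its isolation with extra setup code.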

5.7 Test Case Writing Style


Test Cases can be written in:
1. High-level style.
A high-level Test Case is a Test Case defining what to test in general terms, without specific values
for input data and expected results.
The documentation of a high-level Test Case includes:
- Unique identification (with a reference to the test basis)
- Description

Example:
TC01.01 Verify successful registration on Website
1. Go to the registration page on the registration Website.
2. Enter your name in the registration application.
3. Access the registration List menu option.
4. Verify your name is now in the list.
Advantages:

- takes less time to write;
- gives the tester greater flexibility in execution;
- is more appropriate when tests are executed by testers with extensive knowledge of the
application;
- is more appropriate in projects where there is no time for detailed Test Case development due to
time constraints (e.g. Agile projects).

Disadvantages:

- can sometimes make repeatability difficult;
- increases the probability of test execution drifting outside the purpose of the test;
- requires good system knowledge in order to be properly executed.

2. Detailed style (low-level)

A low-level Test Case is a Test Case with specific values defined for both input and expected result.
The documentation of a detailed (low-level) Test Case must at least include:
- Unique identification (with a reference to the test basis)
- Execution preconditions
- Execution steps: data and actions
- Expected results
- Examples or Screenshots

Example (detailed version of the Test Cases presented above in high-level style):
TC ID: TC01.01 Verify successful registration on Website
Execution Steps:
1. Launch an IE browser and access the Registration page URL, http://www.test.com.
2. Enter the name "John" in the First Name field. Hit the Tab key on your keyboard.
3. Enter the name "Doe" in the Last Name field. Hit "Enter" on your keyboard. Access the
"registration List" menu option.
Expected Results:
1. Registration page is displayed. The First Name field is highlighted by default.
2. The Last Name field is now highlighted.
3. The name John Doe is present in the list.

Advantages:

- repeatable;
- can be executed even by a tester who is just learning the application;
- makes it easier to determine pass or fail criteria;
- is easier to automate;
- is useful when developing tests for complex functional areas, such as calculation of rates by
different mathematical formulas, testing charts and graphs, data import by adding/removing specific
tags in XML files, etc.

Disadvantages:

- takes more time to write and maintain;
- repeated manual execution by the same person can lead to "robot" syndrome, restricting the
tester's ability to "think outside the box";
- carries the risk of missing some issues because the tester looks only at the Expected Results.

Depending on the size of the application being tested, and given limited schedules and budgets,
writing detailed Test Cases may be too time consuming, in which case high-level test descriptions
may be sufficient. The chosen approach should fit the project and product size, time constraints,
the audience (i.e. experienced versus non-experienced test team), etc. In any case, take into
consideration that excessive detailing of steps can make Test Case maintenance difficult, while
excessive abstraction can make Test Case execution difficult.

5.8 Reuse of Test Cases

Effective Test Case design includes Test Cases that rarely overlap, but instead provide effective
coverage with minimal duplication of effort.

Before designing Test Cases, the test analyst must analyze the test basis in order to:
- Identify any patterns of similar action, common requirements, events used by several
transactions/functionalities.
- Capture these patterns in a suite of common Test Cases, so they can be reused and recombined to
execute various functional paths, avoiding duplication of test-creation efforts.

The following are a few frequently occurring test-design patterns that are suitable for reuse:

 CRUD Pattern (Create – Read – Update – Delete):


Identify a record or field upon which to operate (based on input name and parameter info).
Generate randomized item from equivalence classes.
Verify nonexistence.

Add item.
Read and verify existence of identical unchanged data.
Modify and verify matching modified data.
Delete and verify removal of item.
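The CRUD steps above, sketched against a toy key-value store (the store, key format and record shape are invented for illustration):

```python
# The CRUD pattern, step by step, against a toy in-memory key-value store.
import random, string

store = {}
# Identify a record to operate on, with a randomized item from its class.
key = "user:" + "".join(random.choices(string.ascii_lowercase, k=6))
record = {"name": "Olga"}

assert key not in store                 # verify nonexistence
store[key] = dict(record)               # add item (copy, so record stays pristine)
assert store[key] == record             # read and verify identical unchanged data
store[key]["name"] = "Ana"              # modify...
assert store[key]["name"] == "Ana"      # ...and verify matching modified data
del store[key]                          # delete...
assert key not in store                 # ...and verify removal of item
print("CRUD pattern passed")
```

Because the pattern is generic, the same skeleton can be re-instantiated for any entity that supports create, read, update and delete.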

 Data-Type Validation Pattern


Identify item with type characteristics (for example, a data field) at an abstract level; this should
not be limited to simple data types, but should include common business data types (for example,
telephone number, address, ZIP code, customer, Social Security number, calendar date, and so on).
Enumerate the generic business rules that are associated with the type.
Define equivalence partitions and boundaries for the values for each business rule.
Select test-case values from each equivalence class.

 Input Boundary Partition Pattern


Enumerate and select an input item.
Select a "valid" equivalence partition.
Apply a lookup or random generation of a value within that partition, and use it for test.
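A minimal sketch of this pattern, assuming a made-up age field with a valid partition of 0 to 120:

```python
# Sketch of the Input Boundary Partition Pattern: select the "valid"
# partition, then draw a random value from it to use as test input.
# The age field and its partitions are invented for illustration.
import random

partitions = {
    "below": range(-5, 0),      # invalid partition
    "valid": range(0, 121),     # the "valid" equivalence partition
    "above": range(121, 200),   # invalid partition
}

# Random generation of a value within the chosen partition.
value = random.choice(list(partitions["valid"]))
assert 0 <= value <= 120
print("test input:", value)
```

Drawing a fresh value each run varies the coverage inside the partition while every drawn value remains, by construction, a member of the same equivalence class.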

 Heuristic of Error-Mode Detection Pattern


Enumerate past human-error modes.
Select a mode that has observed recurrence.
Identify a scope in which the failure mode might apply, and routinely test for that failure until you
are convinced that it is not manifested.

 File Test Pattern


Identify and define files and file semantics to be evaluated.
Enumerate failure modes for files.
Identify system response for each failure mode to verify (create an Oracle, list).
Select each failure mode, and apply it in turn to the designated file.
Verify response.

 Generic Window UI Operations


Open/Close/Maximize/Minimize.
Active/Inactive.
Validate modality.
Validate proper display and behavior when on top or behind.
Expand/Resize.
Move.

The goal of the test-design process should be to reduce the “reinvention of the wheel” by
recognizing, capturing and reusing common test issues.

Examples:
- The pagination functionality that should be checked on all pages/pop-ups where the number of
entities exceeds <n> items per page.
- Order of items in any list
- Order of buttons on any page/pop-up

Example of reused generic pagination Test Cases:

Other recommendations:
- Combine in a separate Test Suite all generic functional Test Cases that are common to the entire
application and should be run before major deliveries only.
Example: Menu options displaying, Help files content, translations.
- Combine in separate Test Suites Test Cases that test non-functional requirements: UI
requirements, security and performance.

5.9 Test Cases Prioritization

Test Case prioritization is an important consideration in creating Test Cases effectively. Its role
is to reduce the overall number of Test Cases to just the "required" ones, since it is clearly
impractical and inefficient to re-execute all Test Cases at every product change. Test Case
prioritization helps the test manager to:

- decide which Test Cases should be included in a test plan;
- decide the Test Case execution order, to ensure that the most important Test Cases are executed
first (especially useful when test time is reduced);
- increase testing effectiveness through earlier defect detection.

Before prioritizing the Test Cases, the test analyst must focus on:

- identifying the essential scenarios that must be tested in any case;
- identifying the risks/consequences of not testing some of the scenarios.

When prioritising Test Cases, the test analyst must take into consideration how critical each Test
Case is to the product, which Test Cases should be executed first, and which are less important to
execute. The Test Case prioritization criteria can differ from project to project. Re-prioritization
of Test Cases can occur for each new product release.

Experience shows that the top 10%-15% of Test Cases uncover 75%-90% of the significant defects. Risk
prioritisation is a method of choosing that most critical 10%-15% of Test Cases.
The following points should be considered:

- product risks;
- customer assigned priority to a specific requirement;
- requirement complexity;
- recently affected functionality by major faults/failures;
- Test Cases dependencies.
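As a rough illustration of risk-based selection, the criteria above can be combined into a weighted score per Test Case. The weights, factor names, and scores below are illustrative assumptions that a real project would calibrate per release, not a prescribed formula:

```python
from dataclasses import dataclass, field

# Hypothetical weights for the risk criteria listed above; calibrate per project.
WEIGHTS = {
    "product_risk": 3,
    "customer_priority": 3,
    "requirement_complexity": 2,
    "recently_affected": 2,
    "dependencies": 1,
}

@dataclass
class TestCase:
    name: str
    factors: dict = field(default_factory=dict)  # criterion -> score 0..5

    def risk_score(self):
        # Weighted sum over the criteria a Test Case is rated on.
        return sum(WEIGHTS[f] * s for f, s in self.factors.items())

def prioritize(test_cases):
    # Highest risk score first; ties keep their original order (stable sort).
    return sorted(test_cases, key=lambda tc: tc.risk_score(), reverse=True)

cases = [
    TestCase("TC_login", {"product_risk": 5, "customer_priority": 5}),
    TestCase("TC_help_text", {"requirement_complexity": 1}),
    TestCase("TC_payment", {"product_risk": 5, "recently_affected": 4}),
]
ordered = prioritize(cases)
print([tc.name for tc in ordered])  # ['TC_login', 'TC_payment', 'TC_help_text']
```

The top 10%-15% of the ranked list would then form the core regression set.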

Recommendations for Test Cases prioritization:

High priority - Allocated to all tests that must be executed in any case:

- Test Cases that check the core functionality and are needed to be executed for each build;
- Test Cases that check system critical issues revealed in previous product versions;
- Test Cases that cover highly impacted areas of functionality by latest code/environment changes;
- Test Cases that failed in the last tests execution session;
- Test Cases that cover areas where many issues are usually found (bugs clustering testing principle).

Example: If a Test Case is designed for regression testing of each release and covers critical functionality, then it must be assigned High priority.

Medium priority - Allocated to tests that are executed only when time permits:

- Test Cases that cover alternative use case paths;
- Test Cases that cover changed but rarely used functionality;
- Tests of invalid inputs and forced-error tests.

Low priority - Allocated to the tests which, even if not executed, have low impact on the product
quality; Test Cases executed only when a full regression is required:


- Test Cases that cover alternative use case paths that are rarely used in the operational system;
- Test Cases that cover functionality with minor changes that is rarely used;
- Test Cases that have historically always passed.

Prioritization is also important when designing Test Cases. It helps to focus first on the most important areas. For example, design first the system Test Cases that cover:

 High priority use cases or features;
 Software components that are currently available for testing (rather than specifying tests on components that cannot actually be tested yet);
 Features that must work properly before other features can be exercised (e.g., if login does not work, anything else that requires a logged-in user cannot be tested).

5.10 Test Cases Maintenance

Due to changes in requirements, design or implementation, Test Cases often become obsolete and out-of-date. Under the pressure of completing the testing, testers continue their tasks without ever revisiting the Test Cases. The problem is that once the Test Cases become outdated, the initial work of creating them is wasted, and additional manual tests executed without a Test Case in place cannot be repeated.
The following recommendations should be considered to solve this problem:
- As requirements change, the Test Engineers must adjust Test Cases accordingly.
- Test Cases must be modified to accommodate the additional information, details that surface
during the architecture and design phases, and sometimes even during the implementation
phase (as issues are discovered that should have been recognized earlier).
- System functionality may change or be enhanced during the development life cycle. This may affect several Test Cases, which must be redesigned to verify the new functionality. Each Test Case modified upon a change request should record in its description the source (email, meeting minutes, Use Case ID) that describes the change.
- As defects are found and corrected, Test Cases must be updated to reflect the changes and
additions to the system. Sometimes fixes of defects change the way the system works. For
example, Test Cases may have included workarounds to accommodate a major system bug. Once
the fix is done, the Test Cases must be modified to adapt to the change and to verify that the
functionality is now implemented correctly.
- When a new scenario is encountered, it must be evaluated, assigned a priority, and added to the
set of Test Cases.
Test Cases must evolve during the entire software development lifecycle.

5.11 Test Cases for Automation

It is well known that automated software testing is a good way to increase the effectiveness,
efficiency and coverage of software testing. Once automated tests are created, they can easily be
repeated and extended to perform tasks that are too complex or even impossible for manual testing.


But it is not possible to automate all testing, so it is important to determine what Test Cases should
be automated first.
How to decide which Test Cases to automate?
The decision to automate Test Cases is based on the following principles:

 Repetitive tests that run for each build.
 Frequently used or critical functionality that must always work.
 Tests that require multiple data sets.
 Tests that are complex to perform manually and take a lot of time and effort.
 Tests that run on several different hardware or software platforms and configurations.

Recommendations:

- Automation Test Cases should be written in a detailed (low-level) writing style.
- All Test Cases should have an additional field, Execution Type, with Automate and Manual options. This field should be editable only on Test Case creation.
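A sketch of how these principles might be applied mechanically when triaging a suite; the attribute names are assumptions for illustration, not fields of any specific test-management tool:

```python
# Flag a Test Case as an automation candidate if any of the criteria
# listed in this section applies to it.
def automation_candidate(tc):
    """tc is a dict of boolean attributes describing a Test Case."""
    return (
        tc.get("runs_each_build", False)
        or tc.get("critical_functionality", False)
        or tc.get("multiple_data_sets", False)
        or tc.get("complex_manual_effort", False)
        or tc.get("multi_platform", False)
    )

suite = [
    {"id": "TC1", "runs_each_build": True},
    {"id": "TC2"},  # one-off exploratory check
    {"id": "TC3", "multiple_data_sets": True},
]
for tc in suite:
    # Execution Type field with Automate/Manual options, set once at creation.
    tc["execution_type"] = "Automate" if automation_candidate(tc) else "Manual"

print([(tc["id"], tc["execution_type"]) for tc in suite])
# [('TC1', 'Automate'), ('TC2', 'Manual'), ('TC3', 'Automate')]
```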

5.12 Test Suites


One Test Case can verify only one testing idea. For each requirement/Use Case there can be a large variety of testing ideas, which are consequently implemented in many Test Cases. The collection of all Test Cases that check a specific requirement/Use Case (for example, the "UC02.06.02 Contact Details" Use Case) is called a Test Suite. Test Cases in a Test Suite should follow the correct business scenario order.

Irrespective of project size, a Test Suite must contain positive tests for all requirements/use cases.
Boundary tests and forced error tests are strongly recommended for critical functionalities. It is
recommended for a Test Suite to contain a collection of Test Cases that fully test a Requirement/Use
Case:

- Positive test
- Boundary tests
- Forced error tests
- Test Cases that cover dependencies between use cases/requirements

Note: If secondary paths are not under test case design scope, they must be included in exploratory
testing.

Format of Test Suite:

1. Test Suite Name: Should be unique.
Use the same naming convention for all Test Suites in a project.
Include "TS" + Test Suite identifier + Use Case name/description of the requirement under test.
The Test Suite identifier is a unique number that identifies the Test Suite and represents the Use Case ID/User Story ID/Requirement ID that is covered.


Example:
For “UC01.01.01 Manage Regions” Use Case “TS01.01.01 Manage Regions” Test Suite is created.

2. Test Suite description: Describe the general idea of what will be tested by this Test Suite, what
functionality will be verified.
Example: Description for Test Suite “UC02.06.07.04 Go to next/previous entity details”:
"Main Actor: Customer User.
Test that the actor is able to scroll through different entity details either to next details or to
previous details from the list with data."

This section can also include (if required) global settings, information on the system configuration to be used during testing, and the tools required to execute all Test Cases included in the Test Suite.
Example:

“Access to Nordic Media Agent: https://mediaagent.ne.cision.com/login.htm”.

Note: Link to a requirements document from Intranet can be included too if required.

6 Functional Testing Best Practices

Some best practices are provided to assist in your design of functional tests:

6.1 To Begin: User Stories

Before we start a project, we always set to work crafting the best user stories – i.e. short, simple descriptions of particular features in the app as desired and described from a user's perspective. Testing against these criteria at a later stage then becomes a much simpler and more streamlined exercise.

Use predefined criteria (like the following list) to identify test plans that are candidates for automation:
- Can you automate the entire test plan using preconfigured automation functionality?
- Do you need to rearrange the test plan to better suit automation?
- Can you reuse existing scripts, or modify scripts from sample libraries?


The best user stories are invariably the simplest, and normally take the form of a very straightforward
formula:

As a <type of user>, I want <some goal> so that <some reason>

Examples:

• As a teacher, I want reminder notifications so that I don't miss deadlines.
• As a customer, I want to review my shopping cart before purchase so that I can buy with confidence.
From here, we begin the build, and then, when the time comes, embrace rounds of functional testing at
each stage of development, always keeping those basic user stories that we started out with in mind.

6.2 Define a Common Structure and Templates for Creating Tests


Before testing begins, define templates, standards, and naming conventions for test plan documents and automation scripts. This will make it easier in the long run to correlate test plans to test scripts, to follow the logic of test steps, and to maintain test instructions.
See section 5.

6.3 Define Test Script Naming Conventions


When creating a standard template for test plan documents and test scripts, define naming conventions
that reflect the test procedures. Make sure that the names are meaningful and illustrate the logical
operations being performed. The use of naming conventions makes it easier to identify scripts, read the
script logic, and understand how scripts are organized during the maintenance phase.
For script modules, names should be logically expressive of the purpose of the module and test steps.
Additionally, the name can be correlated to the title of the corresponding test plan document. For script
variables, follow standard naming guidelines. For example, use the g_ prefix for global variables and the
str_ prefix for strings.
See section 5.

6.4 Design Modular Scripts


A module is an independent reusable test component comprised of inter-related entities. Conceptual
script modules may be defined by native functionality of a particular test tool, or by a script developer
who writes reusable test routines. Examples of modules include a login or query, or the creation of a new
Contact or Service Request.
Each automation script should consist of small logical modules rather than one continuous unstructured sequence of test steps. This approach makes scripts easier to maintain in the long run, and allows rapid creation of new scripts from a library of proven modules.
Modules also can increase the independence of a script from the test environment.
You can categorize modules broadly as RequiredToPass, Critical, and Test Case. Typical RequiredToPass
modules are setup scripts and scripts that launch the application. Failure of RequiredToPass modules
should result in the entire script being aborted. Critical modules include navigating to the client area,
inserting test data, and so on. Failure of Critical modules may affect certain test cases. Test Case modules
are where you perform the test cases. The test cases should be able to fail without affecting the execution
of the whole script. You should be able to rerecord or insert a test case without changing other modules in


the script. In addition, Test Case modules can have a specified number of iterations. For example, a test
case module to add a new account can be constructed to execute an indefinite number of times without
changing any script.
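The RequiredToPass/Critical/Test Case categories above can be sketched as a small driver. The module names and sample steps are illustrative assumptions, not the API of any real test tool:

```python
class AbortRun(Exception):
    """Raised when a RequiredToPass module fails."""

def run_suite(required_to_pass, critical, test_cases):
    results = {}
    for name, step in required_to_pass:
        if not step():
            # Failure of a RequiredToPass module aborts the entire script.
            raise AbortRun(name)
    for name, step in critical:
        # Failure of a Critical module may affect certain test cases,
        # but execution continues.
        results[name] = step()
    for name, step in test_cases:
        # A Test Case module may fail without stopping the whole script.
        try:
            results[name] = step()
        except Exception:
            results[name] = False
    return results

results = run_suite(
    required_to_pass=[("launch_app", lambda: True)],
    critical=[("navigate_to_client_area", lambda: True)],
    test_cases=[("add_new_account", lambda: True),
                ("broken_case", lambda: 1 / 0)],
)
print(results)
```

Here the failing `broken_case` module is recorded as a failure without aborting the run, while a failing `launch_app` would stop everything.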

6.5 Design Reusable Scripts


Reusability is necessary for building a library of test cases that can be shared between developers and
reused for subsequent test cycles. You can improve reusability using a variety of strategies including
script modularization, parameterization, and external definition of global functions and constants.

6.6 Make Test Scripts Independent of the Operating Environment


Develop and use strategies to create environment-independent test scripts. Design your test scripts so
that they are capable of running on disparate hardware configurations, operating systems, language
locales, database platforms, and browser versions.
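One common way to achieve this is to read everything that varies per environment from configuration rather than hard-coding it in the script. A minimal sketch, assuming configuration via environment variables (the variable names follow the g_ convention from section 6.3 and are illustrative):

```python
import os

# Environment-specific values come from configuration with sensible
# defaults, so the same script runs unchanged on different hosts,
# browsers, and locales. The names and defaults are assumptions.
g_base_url = os.environ.get("TEST_BASE_URL", "http://localhost:8080")
g_browser = os.environ.get("TEST_BROWSER", "firefox")
g_locale = os.environ.get("TEST_LOCALE", "en-US")

def environment():
    # Scripts query this single source of truth instead of embedding
    # platform-specific values in their steps.
    return {"base_url": g_base_url, "browser": g_browser, "locale": g_locale}

print(environment())
```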

6.7 Make Test Scripts Independent of Test Data


When authoring a test script, do not leave any hard-coded data values in the script. Instead, replace the
hard-coded data with variables that reference an external data source. This procedure is generally called
parameterization. Parameterizing your test scripts makes them independent of the data structure of the
application being tested. Without parameterization, scripts can stop running due to database conflicts
and dependencies.
Parameterization also allows you to switch data tables dynamically, if necessary. Store the data used by
test scripts in external data tables. Then use the data table import function within the test script to
import data. This feature can be useful for multilingual testing, allowing you to switch the data table
based on the current language environment.
NOTE: The column names and structure of the external data table must match the variable names used
in the script.
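A minimal data-driven sketch of this idea, using Python's standard csv module and an in-memory table; as the note above requires, the column names match the variable names used in the script (the function under test is a stand-in):

```python
import csv
import io

# External data table; in practice this would be a CSV file per language
# environment, switched dynamically as needed.
data_table = io.StringIO(
    "username,expected_greeting\n"
    "alice,Hello alice\n"
    "bob,Hello bob\n"
)

def greet(username):
    # Stand-in for the application behaviour being verified.
    return f"Hello {username}"

failures = []
for row in csv.DictReader(data_table):
    # The script references parameters, never hard-coded values.
    actual = greet(row["username"])
    if actual != row["expected_greeting"]:
        failures.append(row["username"])

print(failures)  # an empty list means every data-driven iteration passed
```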

6.8 Other Kinds of Tests


While functional testing acts at the interface between an application and the system and users
associated with it, there are other kinds of testing that contribute to software development and
Quality Assurance (QA).

Unit testing is performed on the smallest elements of a system, such as individual classes within an
application. Each component is tested to ensure that it properly handles input and output under
normal operation, borderline use cases, and error conditions.

Integration testing looks at a sub-division of a system, to ensure that all processes within it are
working together smoothly.

Regression testing is a two-part process, applied on fixed code. First, a confirmation test verifies the
integrity of the fix itself, then a regression test on the application as a whole confirms whether or
not the applied fix has broken any of the program's functionality.


Smoke tests may be run as a final check, when the collaboration between developers and testers
results in changes in code close to a finished product. The testing ensures that these changes have
not destabilised the overall structure of the application or caused potentially fatal errors prior to
release.

Usability testing validates each part of the software's GUI (buttons, text boxes, etc.) for their
visibility, interaction, ease of use, and for compliance with relevant standards.

Browser compatibility tests are employed on Web and mobile applications to ensure the software's
performance on various types and versions of browsers. The effects of changing server integration
and links to third-party systems may also be tested.

As functional testing is often time-consuming, a hybrid approach combining the best fit of several
relevant testing methods is wise.

6.9 Testing by Hand


Scenario testing is of great importance when it comes to manual testing, for its purpose is to
anticipate and emulate the real-world occurrences that users will face when using the app. This
helps testers to evaluate the program's real-world adaptability, as well as helping to test many
functions that are not frequently used or tested (or perhaps aren't tested thoroughly enough).

Scenario testing also involves recreating conditions for when network connections are less than
perfect, when a user only has limited battery power, when there are incoming calls, text messages
and other alerts that may pop up.

“If you are testing a mobile application that targets multiple devices, forget about emulators and simulators and get your hands on some real devices. If the test team includes more than two people, get two of each device, put them on the local wireless network and get testing.”

6.10 Automated Testing


Automation is generally the preferred option for software testing. Tests can be re-used, and scripts
written to perform repetitive tasks. The tests can cover a wider range of issues with greater
accuracy, and provide formal processes for detecting and reporting on any defects found.

Enabling automated code review for test coverage, complexity, duplication and style for virtually any programming language is also good practice (e.g. Code Climate for automated code analysis).


6.11 Clear and Accurate Reporting


The communications linking the testing and development teams should be established at the outset
of a project, with feedback and reporting in clearly defined terms which are agreed upon by all.

The testing team should also act as a mediator between the development team and the user
community, as feedback and usability issues are reported back from beta tests, and ongoing version
releases. Reports should give a feature-by-feature view of the overall health and defects of an
application that can be used as a template for its improvement.

7 Business Logic Based Test Design


A software product's business logic and functionality are often documented using use cases. A use case describes the interaction between an actor (a user of the system) and the system so that the actor can achieve the desired results.
Use cases represent an excellent input for functional Test Case design for integration, system and
acceptance testing.
The most important part of a Use Case for generating Test Cases is the flow of events. The two main parts of the flow of events are:
- the Basic flow of events, which covers what "normally" happens when the use case is performed;
- the Alternate flows of events, which cover optional or exceptional behaviour as well as variations of the normal behaviour.

Flows are structured into steps. Each step should explain what the actor does and what the system
does in response; it should also be numbered and have a title. Alternate flows always specify where
they start in the basic flow and where they go when they end.

7.1 Test Case Elaboration


The process of generating Test Cases from a fully detailed use case is described next:
1. Study the use case:
· Identify entry condition (Pre condition)
· Identify inputs required
· Identify exit conditions (Post condition)
· Identify output
· Study the normal flow
· Identify exceptions and study alternate flows
· Identify constraints on the use case
2. Analyse normal flow


· Explore normal flow into implementation detail (if required)


· Link and read along with user interface design and data model
(if available)
· Identify the sub-path(s) in the normal flow
· Identify the data set that executes a given normal flow path
3. Analyse alternate (exception) flows
· Explore each alternate (exception) flow
· Link and read along with user interface design and data model
(if required)
· Identify sub-path(s) in each alternate flow
· Identify data set that executes a given alternate flow path
4. Design Test Cases
· Define test environment set up
· Define test execution steps and input actions
· Define input data set and expected results
· Define pass/fail/partial pass criteria
· Define dependencies (pre requisite) with other Test Cases
5. Document the Test Cases
6. Walk through (dry run) the Test Cases on the application (if available)
7. Review Test Cases design for completeness and corrections
· Identify missed exceptions and paths
· Identify need for more Test Cases
· Identify defects in existing Test Cases
8. Update Test Cases design
9. Verify Test Cases design and close the review findings

As an example, see the full set of Use Case scenarios for the next diagram:


Scenario 1 Basic Flow


Scenario 2 Basic Flow ->Alternate Flow 1
Scenario 3 Basic Flow ->Alternate Flow 1->Alternate Flow 2
Scenario 4 Basic Flow ->Alternate Flow 3
Scenario 5 Basic Flow ->Alternate Flow 3->Alternate Flow 1
Scenario 6 Basic Flow ->Alternate Flow 3->Alternate Flow 1->Alternate Flow 2
Scenario 7 Basic Flow ->Alternate Flow 4
Scenario 8 Basic Flow ->Alternate Flow 3->Alternate Flow 4

Note: A use case may have dependency with another use case, which would require interface and
interaction. Ensure that this dependency and associated use case flows are captured in the Test Case
design.
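The eight scenarios above can be generated mechanically once the flow structure is encoded as a graph. A minimal sketch; the dictionary encoding is an assumption for illustration, not a standard notation:

```python
# Each flow lists the alternate flows reachable from it.
flows = {
    "Basic": ["AF1", "AF3", "AF4"],
    "AF1": ["AF2"],
    "AF2": [],
    "AF3": ["AF1", "AF4"],
    "AF4": [],
}

def scenarios(flow="Basic", path=()):
    # Depth-first enumeration: every prefix path is itself a scenario.
    path = path + (flow,)
    yield " -> ".join(path)
    for nxt in flows[flow]:
        yield from scenarios(nxt, path)

paths = list(scenarios())
for s in paths:
    print(s)
print(len(paths))  # 8, matching the scenario table above
```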

7.2 Example of Test Cases for a specific Use Case


Manage Regions
(Edit Region operation only will be studied below to simplify the example)
Actor=Global Administrator.
Basic Flow
1. Login to CisionPoint application.
This Use Case starts when the Actor logs in to the CisionPoint application.
2. Actor asks for Manage Regions functionality.
3. System verifies the User permissions.
System ensures that currently logged on user is associated with Regions function -> Manage Regions
permission.
4. System identifies all available Regions.
System displays to the user the identified list, ordered ascendingly by Region Name.
Create function is enabled and Delete function is disabled. If there are more than 20 Regions, then
the system should apply pagination.
5. Actor requests details of a Region.
User clicks on region name link to display region details.
6. System presents details of the requested Region.
Region details are opened in Create/Edit Region pop-up.
7. Actor makes necessary amendments to the Region settings.
Where necessary, the Actor updates the list of available Languages for the Region (UC01.01.03 Select available Languages for Region).
8. Actor submits changes.
9. System validates data.
System ensures that Regions settings are the valid ones.
10. System saves changes.
System persists the changes applied by the user to the Region and automatically updates the corresponding settings of all Customers belonging to the current Region and of the Customer Users under this Region.
11. System reloads the updated Regions list.
System refreshes the list of available Regions and displays it to the user in ascending alphabetical order.

Alternate Flows
1. Actor does not have the Manage Regions permission of the Regions function assigned.
After Step 2 of the Basic Flow, the system hides the Create and Delete functions and disables the Save function from Region Details.
2. Actor wants to cleanup provided settings.


After Step 6 of the Basic Flow, Actor initiates Clear action.


System re-establishes default settings of the Region for current step. The Use Case continues at Step
7 of the Basic Flow.

3. Actor cancels Edit region operation.


After Step 7 of the Basic Flow, user cancels the edit operation and the system redirects the user to
the list with Regions. The Use Case continues at Step 5 of the Basic Flow.

4. Duplicate Region Name.


System identifies that the provided Region name is a duplicate name of an existing Region. System
informs the user about this and asks him to provide a different Region name. The Use Case
continues at Step 7 of the Basic Flow.

5. Not all mandatory fields are filled in.


System asks the user to complete all mandatory fields: 'Please complete all mandatory fields.' The
Use Case continues at Step 7 of the Basic Flow.

6. Settings provided for Region are not valid.


System asks the user to review the provided settings: 'Some fields have not been filled in correctly. Please review your entries.'
Also, an error icon is displayed next to each invalid field. When the user points the mouse over the icon, the system displays a tooltip specifying the accepted characters for the current field.

Scenario 1 – Successful Edit Region operation
Scenario 2 – User with no Manage Region permission
Scenario 3 – User cleans up the provided Region details. Consider the cases when not all mandatory data are provided, invalid data are provided, or a duplicate region name is provided.
Scenario 4 – User cancels the update Region operation
Scenario 5 – User provides a duplicate region name
Scenario 6 – User does not provide all mandatory values
Scenario 7 – User provides invalid Region details

The decision table below maps each Test Case to its scenario and conditions ("+" = condition holds, "-" = it does not, "N/A" = not applicable):

| Test Case ID | Scenario/Condition | Manage Regions permission | Valid data | Unique region name | Mandatory values filled in | All fields filled in | Clear action | Cancel action | Expected Result |
|---|---|---|---|---|---|---|---|---|---|
| TC1 | Scenario 1: Successful Manage Region operation: only mandatory data | + | + | + | + | - | - | + | Region successfully edited. |
| TC2 | Scenario 1: Successful Manage Region operation: all fields filled in with data | + | + | + | + | + | - | + | Region successfully edited. |
| TC3 | Scenario 2: User with no Manage Region permission | - | N/A | N/A | N/A | N/A | N/A | N/A | User is not able to edit the region. System disables the Save functionality on the Region Details pop-up. |
| TC4 | Scenario 3: User cleans up the provided Region details, region name not unique | + | + | - | + | + | + | - | Default settings of the Region are re-established from DB. |
| TC5 | Scenario 3: User cleans up the provided Region details, invalid data in fields | + | - | + | + | + | + | - | Default settings of the Region are re-established from DB. |
| TC6 | Scenario 3: User cleans up the provided Region details, missing mandatory data | + | + | + | - | + | + | - | Default settings of the Region are re-established from DB. |
| TC7 | Scenario 4: User cancels the update Region operation | + | + | + | + | + | - | + | The system redirects the user to the list with Regions. |
| TC8 | Scenario 5: User provides duplicate region name | + | + | - | + | - | - | - | System informs the user about this and asks him to provide a different Region name. |
| TC9 | Scenario 6: User does not provide any mandatory value | + | + | + | - | - | - | - | System asks the user to complete all mandatory fields: 'Please complete all mandatory fields.' |
| TC10 | Scenario 6: User omits at least one mandatory value (separate tests will be designed to verify each mandatory value separately and random combinations of them) | + | + | + | - | - | - | - | System asks the user to complete all mandatory fields: 'Please complete all mandatory fields.' |
| TC11 | Scenario 7: User provides invalid Region details (separate tests will be developed to test one invalid value per field at a time, to verify the system detects it correctly) | + | - | + | + | + | - | - | System asks the user to review the provided settings: 'Some fields have not been filled in correctly. Please review your entries.' An error icon is displayed next to each invalid field; hovering over it shows a tooltip specifying the accepted characters for the field. |

8 Functional analysis
Test Cases are developed based on test basis analysis (requirements, business scenarios, use cases).
In order to create effective Test Cases, the Test Analyst must understand the details and the
complexity of the application.

Even when detailed requirements are available in a project, the interdependency between
requirements is not always obvious. The test analyst must analyse how any change to any part of the
application affects the rest of the application. It is not enough to create Test Cases that just verify
aspects of the change itself. An effective Test Case design must also cover all other areas affected by
this change.

Example: Consider the following requirement statement:


"The system must allow the user to edit the customer name on the customer edit page."
The customer name field and its restrictions are also documented.
Some testing steps for verifying that the requirement has been correctly implemented are:
1. Verify that the system allows the user to edit the customer name on the customer edit page, by clicking on the customer name field and typing a new customer name.
2. Try all positive and negative combinations, e.g., representative test data from all valid equivalence classes and all valid boundaries. Test a subset of the combinations and variations possible within the invalid data equivalence classes and invalid boundaries.
3. Run a SQL query verifying that the update is saved correctly in the appropriate table(s).
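Step 3 can be sketched with an in-memory SQLite database; the table and column names are assumptions for illustration, not the application's actual schema:

```python
import sqlite3

# Verify via SQL that the edited customer name was persisted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme Ltd')")

# ... the UI edit would happen here; we simulate its effect:
conn.execute("UPDATE customer SET name = ? WHERE id = ?", ("Acme Corp", 1))
conn.commit()

saved = conn.execute("SELECT name FROM customer WHERE id = 1").fetchone()[0]
print(saved == "Acme Corp")  # True: the update reached the database
```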


The above steps comprise a good basic test. However, something is missing in order to fully verify this requirement. The question that needs to be answered is: how is the system otherwise affected when the customer name is changed? Is there another screen, functionality, or path that uses or depends upon the customer name? If so, it will be necessary to determine how those other parts of the application are affected.

Some examples:
1. Verify that the "Create Customer user" functionality in the Customer Users module is now using
this changed customer name.
- Add a customer user record, and verify that the new record has been saved using the new
customer name.
- Perform any other possible functions making use of the changed customer name, to verify that it
does not adversely affect any other previously working functionality.

Analysis and testing must continue until all affected areas have been identified and tested. After the
functional tests have been defined and numerous testing paths through the application have been
derived, additional test design techniques must be applied to narrow down the inputs for the
functional steps to be executed during testing.

9 User Interface Test Design


All interactive applications including web applications provide user interfaces in which a user
interacts with a system to perform a desired function. User interface based testing involves
functional and interface testing of the application through the user interface.
The following procedure can be used for user interface test design:
1. Study the user interfaces (web pages) and the user interface navigation diagrams (page flow
diagrams)
2. For each user interface (web page)
· Identify the data fields (input/output)
· Identify the navigation/input actions required
· For each navigation/input action, define the expected output.
3. Develop specific/alternate user interaction dialog paths
· For each user interface
· Across related user interfaces
· Across unrelated user interfaces (if required)
4. Identify data set and input actions that would activate a specific user interface dialog
5. Design Test Case(s)
· Define test environment set up
· Define test procedure (test steps) of the test detailing navigation and input actions


· Define dependencies with other Test Cases - pre requisite for the Test Case
· Define input data (if any)
· Define output of the Test Case
· Define pass/fail/partial pass criteria
6. Document the Test Case
7. Walk through (dry run) the Test Case on the application
8. Review Test Case design to identify
· Missed conditions and paths
· Need for more Test Cases
· Defects in existing Test Cases
9. Update the Test Case design
10. Verify the Test Case design and close the review findings

Test Case(s) pertaining to a specific user interface and user interface path can be grouped together
to form a Test Suite.
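Grouping Test Cases by the user interface they exercise lends itself to a page-object style of organization. A minimal sketch; the page class, its fields, and the returned page names are illustrative assumptions (a real project would back them with a UI automation tool):

```python
class LoginPage:
    """Models one user interface: its data fields and input actions."""

    def __init__(self):
        self.fields = {"username": "", "password": ""}

    def fill(self, **values):
        # Input actions on the page's data fields.
        self.fields.update(values)
        return self

    def submit(self):
        # Navigation action: each input combination has an expected output.
        if self.fields["username"] and self.fields["password"]:
            return "DashboardPage"
        return "LoginPage: error 'Please complete all mandatory fields.'"

# One Test Case per navigation/input combination, grouped into a suite:
assert LoginPage().fill(username="admin", password="s3cret").submit() == "DashboardPage"
assert "mandatory" in LoginPage().fill(username="admin").submit()
print("UI suite passed")
```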


10 Bibliography
1. http://www.testingeducation.org/BBST/BBST--IntroductiontoTestDesign.html
2. http://www.slideshare.net/warsha.agarwal/test-case-writing-best-practices-presentation-945906
3. http://www.slideshare.net/guru__123/test-case-training
4. http://www.onestoptesting.com/equivalence-partitioning/
5. http://www.onestoptesting.com/test-cases/designing.asp
6. http://www.onestoptesting.com/boundary-value-analysis/
7. http://softestserv.ca/RBT_Cause-Effect_Graphing2.pdf
8. http://istqb.org/download/attachments/2326555/ISTQB+Glossary+of+Testing+Terms+2+1.pdf
9. http://msdn.microsoft.com/en-us/library/cc514239.aspx
10. Software testing. An ISTQB-ISEB Foundation Guide. Second Edition, 2010.
11. BS ISO/IEC 27001:2005


11 Glossary

Test objectives - List of all the business processes that the application is required to support; list of
standards for which there is required compliance; list of non-functional requirements: usability levels,
performance indicators, security aspects, etc.
Test condition - an item or event of the system that could be verified by one or more Test Cases, e.g.
a function, transaction, feature.
Test analysis and design – the process of identifying the test conditions for each test objective and
creation of Test Cases that exercise the identified test conditions.
Test basis – All documents from which the requirements of a component or system can be inferred;
the documentation on which test cases are based. May consist of functional specifications, user
requirements, business scenarios, use cases.

Test Case – a set of test inputs, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or verify compliance with a specific requirement. A test case has components that describe an input, action or event and an expected response, to determine if a feature of an application is working correctly.
Test Suite – a collection of one or more Test Cases for the software under test, where the post-condition of one test is often used as the precondition for the next one.
Test data - Data that exists (for example, in a database) before a test is executed, and that
affects or is affected by the component or system under test.
Test item - The individual element to be tested. There usually is one test object and many test
items.
Effective Test Case - Test Case designed to catch specific faults and that has a good possibility of
revealing a defect.
Test Case design technique – a method used to derive or select a good set of tests from the total
number of all possible tests for a given system.
Use case - describes a sequence of actions performed by a system to provide an observable result of
value to a person or another system using the product under development.
Use case scenario - an instance of a use case, or a complete "path" through the use case.
