Software Testing and Automation UNIT 3

The document outlines the principles and methodologies of software testing, including test objective identification, design factors, and various testing techniques such as boundary value and equivalence class testing. It emphasizes the importance of effective test case design, requirement identification, and modeling test results to ensure software quality and reliability. Additionally, it discusses the roles of manual and automation testing in achieving accurate results and the significance of decision tables in managing test conditions.


UNIT III TEST DESIGN AND EXECUTION

Test Objective Identification, Test Design Factors, Requirement identification, Testable


Requirements, Modeling a Test Design Process, Modeling Test Results, Boundary Value Testing,
Equivalence Class Testing, Path Testing, Data Flow Testing, Test Design Preparedness Metrics,
Test Case Design Effectiveness, Model-Driven Test Design, Test Procedures, Test Case
Organization and Tracking, Bug Reporting, Bug Life Cycle.

Test Objective Identification

Software testing has different goals and objectives. The major objectives of software testing are as
follows:

 Finding defects introduced by the programmer while developing the software.
 Gaining confidence in, and providing information about, the level of quality.
 Preventing defects.
 Making sure that the end result meets the business and user requirements.
 Ensuring that it satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
 Gaining the confidence of the customers by providing them a quality product.

Software testing helps in validating the software application or product against business and user
requirements. It is very important to have good test coverage in order to test the software application
completely and to make sure that it performs well and as per the specifications.

While determining the test coverage, the test cases should be designed with the maximum
likelihood of finding errors or bugs. The test cases should be very effective. This objective can
be measured by the number of defects reported per test case: the higher the number of defects
reported, the more effective the test cases.

Once the delivery is made to the end users or the customers, they should be able to operate it without
any complaints. In order to make this happen, the tester should know how the customers are going
to use this product, and accordingly write down the test scenarios and design the test
cases. This helps a lot in fulfilling all of the customer's requirements.
Software testing makes sure that the testing is done properly and hence the system is ready for
use. Good coverage means that the testing covers the various areas, such as the functionality
of the application; the compatibility of the application with the OS, hardware, and different types of
browsers; performance testing to measure the performance of the application; and load testing to make sure
that the system is reliable, does not crash, and has no blocking issues. It also
determines that the application can be deployed easily to the machine without any resistance,
so the application is easy to install, learn, and use.

Test Design Factors

For designing Test Cases the following factors are considered:

 Correctness
 Negative
 User Interface
 Usability
 Performance
 Security
 Integration
 Reliability
 Compatibility

Correctness : Correctness is the minimum requirement of software, the essential purpose of testing.
The tester may or may not know the inside details of the software module under test e.g. control flow,
data flow etc.

Negative : This factor checks what the product is not supposed to do.

User Interface : In UI testing we check the user interfaces. For example in a web page we may check
for a button. In this we check for button size and shape. We can also check the navigation links.

Usability : Usability testing measures the suitability of the software for its users, and is directed at
measuring the following factors with which specified users can achieve specified goals in particular
environments.

1. Effectiveness : The capability of the software product to enable users to achieve specified goals
with accuracy and completeness in a specified context of use.
2. Efficiency : The capability of the product to enable users to expend appropriate amounts of
resources in relation to the effectiveness achieved in a specified context of use.

Performance : In software engineering, performance testing is testing that is performed from one
perspective to determine how fast some aspect of a system performs under a particular workload.

Performance testing can serve various purposes. It can demonstrate that the system meets performance
criteria.

1.Load Testing: This is the simplest form of performance testing. A load test is usually conducted to
understand the behavior of the application under a specific expected load.

2.Stress Testing: Stress testing focuses on the ability of a system to handle loads beyond maximum
capacity. System performance should degrade slowly and predictably without failure as stress levels
are increased.

3. Volume Testing: Volume testing belongs to the group of non-functional tests. Volume
testing refers to testing a software application with a certain volume of data. In generic
terms, this volume can be the database size, or it could be the size of an interface file that is the subject of volume
testing.

Security : Security testing is the process to determine that an information system protects data and maintains functionality
as intended. The basic security concepts that need to be covered by security testing are the following:

1. Confidentiality : A security measure which protects against the disclosure of information to parties
other than the intended recipient.

2. Integrity : A measure intended to allow the receiver to determine that the information which it
receives has not been altered in transit by anyone other than the originator.

3. Authentication : A measure designed to establish the validity of a transmission, message, or
originator. It allows a receiver to have confidence that the information it receives originated from a
specific known source.

4. Authorization : The process of determining that a requester is allowed to receive a service or perform
an operation.

Integration : Integration testing is a logical extension of unit testing. In its simplest form, two units
that have already been tested are combined into a component and the interface between them is tested.
Reliability : Reliability testing monitors a statistical measure of software maturity over time and
compares it to a desired reliability goal.

Compatibility : Compatibility testing is a part of a software's non-functional tests. This testing is
conducted on the application to evaluate the application's compatibility with the computing
environment. Browser compatibility testing can be more appropriately referred to as user experience
testing. It requires that the web application be tested on various web browsers to ensure the
following:

 Users have the same visual experience irrespective of the browsers through which they view
the web application.
 In terms of functionality, the application must behave and respond the same across various
browsers.

Requirement Identification

Software requirement means a requirement that is needed by the software to increase the quality of the software
product. These requirements are generally expectations of the user from the software product that are
important and need to be fulfilled by the software. Analysis means to examine something in an organized
and specific manner to know complete details about it.

Therefore, software requirement analysis simply means completely studying, analyzing, and describing
the software requirements so that requirements that are genuine and needed can be fulfilled to solve the
problem. There are several activities involved in analyzing software requirements. Some of them are
given below:
Problem Recognition :

The main aim of requirement analysis is to fully understand the main objective of a requirement, which
includes why it is needed, whether it adds value to the product, whether it will be beneficial, whether it
increases the quality of the project, and whether it will have any other effect. All these points are fully
recognized in problem recognition so that the requirements that are essential can be fulfilled to solve
business problems.

Evaluation and Synthesis :

Evaluation means a judgement about whether something is worthwhile or not, and synthesis means to create
or form something. Some tasks that are important in the evaluation and synthesis of
software requirements are given here:

 To define all functions of the software that are necessary.

 To define all data objects that are present externally and are easily observable.

 To evaluate whether the flow of data is worthwhile or not.

 To fully understand the overall behavior of the system, that is, the overall working of the system.

 To identify and discover the constraints that are designed.

 To define and establish the character of the system interface, to fully understand how the system interacts with two or more components or with one another.

Modeling :

After the complete gathering of information from the above tasks, functional and behavioral models are
established after checking the function and behavior of the system using a domain model, which is also
known as the conceptual model.

Specification :

The software requirement specification (SRS), which specifies each requirement, whether it is
functional or non-functional, should be developed.

Review :

After developing the SRS, it must be reviewed to check whether it can be improved, and it must
be refined to make it better and increase its quality.


Testable Requirements

A Decision Table is a table that shows the relationship between inputs, rules, cases, and test
conditions. It is a very useful tool for both complicated software testing and requirements
management. The decision table allows testers to examine all conceivable combinations of
requirements for testing and to immediately discover any conditions that were overlooked.
True (T) and False (F) values are used to signify the criteria.

What is Decision Table Testing?

Decision table testing is a type of software testing that examines how a system responds to various
input combinations. It is a methodical approach in which the various input combinations and
the accompanying system behavior (output) are tabulated. That is why it is also known as a Cause-
Effect table: it captures both causes and effects for improved test coverage.

Example 1 − How to Create a Decision Table for a Login Screen

Let's make a decision table for a login screen with E-mail and Password input boxes.

The condition is simple − The user will be routed to the homepage if they give the right username and
password. An error warning will appear if any of the inputs are incorrect.

Legend

T - The login and password are correct.

F - Incorrect login or password

E - An error message appears.

H - The home screen appears.


Interpretation

Case 1 − Both the username and password were incorrect. An error message is displayed to the user.

Case 2 − The username was right; however, the password was incorrect. An error

message is displayed to the user.

Case 3 − Although the username was incorrect, the password was accurate. An error message is

displayed to the user.

Case 4 − The user's username and password were both accurate, and the user went to the homepage.

When converting this to test cases, we may generate one positive situation.

Enter the right username and password and click Login; the intended consequence is that the user will

be sent to the homepage.

And the negative situations below.

 If the user types in the wrong username and password and then clicks Login, the user should receive an error message.

 When the user provides the proper username and an incorrect password and clicks Login, the user should see an error message.

 If the user types the wrong username and the correct password and then clicks Login, the user should receive an error message.
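The four decision-table cases above can be turned into a table-driven test. A minimal Python sketch, assuming a hypothetical `login` function and made-up credentials:

```python
VALID_EMAIL = "user@example.com"   # illustrative credentials, not from the text
VALID_PASSWORD = "s3cret"

def login(email: str, password: str) -> str:
    """Returns 'H' (home screen) only when both inputs are correct, else 'E' (error)."""
    if email == VALID_EMAIL and password == VALID_PASSWORD:
        return "H"
    return "E"

# Decision table rows: (email correct?, password correct?, expected outcome)
decision_table = [
    (False, False, "E"),  # Case 1: both wrong -> error
    (True,  False, "E"),  # Case 2: password wrong -> error
    (False, True,  "E"),  # Case 3: username wrong -> error
    (True,  True,  "H"),  # Case 4: both right -> home screen
]

for email_ok, password_ok, expected in decision_table:
    email = VALID_EMAIL if email_ok else "wrong@example.com"
    password = VALID_PASSWORD if password_ok else "wrongpass"
    assert login(email, password) == expected
print("all 4 decision-table cases pass")
```

Each row of the table becomes one test case, so the decision table doubles as the test data.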

Modeling a test design process

The design phase of software development deals with transforming the customer requirements, as

described in the SRS document, into a form implementable using a programming language. The

software design process can be divided into the following three levels, or phases, of design:

 Interface Design
 Architectural Design
 Detailed Design
Elements of a System:

Architecture – This is the conceptual model that defines the structure, behavior, and views of a
system. We can use flowcharts to represent and illustrate the architecture.

Modules – These are components that handle one specific task in a system. A combination of the
modules makes up the system.

Components – This provides a particular function or group of related functions. They are made up of
modules.

Interfaces – This is the shared boundary across which the components of a system exchange
information and relate.

Data – This is the management of the information and data flow.


Interface Design: Interface design is the specification of the interaction between a system and its
environment. This phase proceeds at a high level of abstraction with respect to the inner workings of
the system; i.e., during interface design, the internals of the system are completely ignored and the
system is treated as a black box. Attention is focused on the dialogue between the target system and
the users, devices, and other systems with which it interacts. The design problem statement produced
during the problem analysis step should identify the people, other systems, and devices, which are
collectively called agents. Interface design should include the following details:

 Precise description of events in the environment, or messages from agents to which the
system must respond.
 Precise description of the events or messages that the system must produce.
 Specification of the data, and the formats of the data coming into and going out of the system.
 Specification of the ordering and timing relationships between incoming events or messages,
and outgoing events or outputs.

Architectural Design: Architectural design is the specification of the major components of a system,
their responsibilities, properties, interfaces, and the relationships and interactions between them. In
architectural design, the overall structure of the system is chosen, but the internal details of major
components are ignored. Issues in architectural design include:

 Gross decomposition of the systems into major components.


 Allocation of functional responsibilities to components.
 Component Interfaces
 Component scaling and performance properties, resource consumption properties, reliability
properties, and so forth.
 Communication and interaction between components.

The architectural design adds important details ignored during the interface design. Design of the
internals of the major components is ignored until the last phase of the design.

Detailed Design: Detailed design is the specification of the internal elements of all major system components:
their properties, relationships, processing, and often their algorithms and data structures. The
detailed design may include:

 Decomposition of major system components into program units.


 Allocation of functional responsibilities to units.
 User interfaces
 Unit states and state changes
 Data and control interaction between units
 Data packaging and implementation, including issues of scope and visibility of program
elements
 Algorithms and data structures

Modeling Test Results

Test results are the outcome of the whole software testing life cycle. The results thus
produced offer an insight into the deliverables of a software project and are significant in representing
the status of the project to the stakeholders.

A few concepts:

Testing: It is the process of identifying whether a bug/error hides within a project, and also of assessing the
impact of the observation. It forms an important activity of Quality Assurance.

Debugging: This method involves the identification and then correction of bugs/errors. When developers
come across an error in the code, they resort to debugging. Debugging is thus a part of unit testing.

Test Case: A test case is a document that consists of test data, preconditions, postconditions, and
expected results, developed for a specific test scenario and intended to serve a
particular purpose.

Test Suite: It is a collection of test cases aimed at testing a software application to check that
the application adheres to the requirement specifications. It basically consists of a detailed set of
instructions to attain a common goal.

There are two broad categories/types of testing, which help obtain the test results in the
most appropriate way. They are as follows:
Manual Testing - It is the process of testing performed by a group of testers who examine the
code for the presence of bugs. The testing is performed without the use of any tool. The tester tests
the application just as an end user would, in order to find defects, if any.

Automation Testing- Automating the test process is done with the help of a script or tool. A piece of
code is used to detect a bug/error.

Considering the aforementioned categories of testing, how should one decide which type of testing
to adopt to attain correct results? The following key points must be considered
while deciding upon a specific type of testing:

 Automation Testing is a life saver when the project under consideration is quite large and
complex in nature.
 When we need to repetitively test some part of the code, very often.
 When the requirements are quite stable, that is, requirements are not prone to change.
 When the application needs to go through load and performance tests with many virtual users
involved in it.

Boundary Value Testing

Boundary value analysis is one of the most widely used test case design techniques for black box testing. It is
used to test boundary values, because the input values near the boundary have higher chances of error.

Whenever we do testing by boundary value analysis, the tester focuses on whether the software
produces the correct output when a boundary value is entered.

Boundary values are those that form the upper and lower limits of a variable. Assume that age is a
variable of some function, and its minimum value is 18 and its maximum value is 30; both 18 and 30
will be considered boundary values.

The basic assumption of boundary value analysis is that the test cases created using boundary
values are the most likely to cause an error.

Here, 18 and 30 are the boundary values; that is why the tester pays more attention to these values, but
this doesn't mean that middle values like 19, 20, 21, 27, and 29 are ignored. Test cases are developed
for each and every value of the range.
Testing of boundary values is done by making valid and invalid partitions. Invalid partitions are tested
because testing the output under adverse conditions is also essential.

Let's understand this via a practical example:

Imagine there is a function that accepts a number between 18 and 30, where 18 is the minimum and 30
is the maximum value of the valid partition; the other values of this partition are 19, 20, 21, 22, 23, 24,
25, 26, 27, 28, and 29. The invalid partitions consist of the numbers less than 18, such as 12,
14, 15, 16, and 17, and the numbers greater than 30, such as 31, 32, 34, 36, and 40. The tester develops test cases for both
valid and invalid partitions to capture the behavior of the system on different input conditions.
The software system passes the test if it accepts a valid number and gives the desired
output; if it does not, the test fails. In the other scenario, the software system should not accept
invalid numbers, and if the entered number is invalid, it should display an error message.
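The age example above can be sketched as a boundary value test in Python (the validator `accepts_age` is illustrative, written here only to exercise the boundaries):

```python
def accepts_age(age: int) -> bool:
    """Valid partition for the example: 18..30 inclusive (boundaries 18 and 30)."""
    return 18 <= age <= 30

# Boundary value analysis concentrates on values at and just beyond each boundary:
# just below min, min, just above min, just below max, max, just above max.
bva_cases = {17: False, 18: True, 19: True, 29: True, 30: True, 31: False}

for value, expected in bva_cases.items():
    assert accepts_age(value) == expected
print("all boundary cases behave as expected")
```

Six values cover both boundaries and their neighbours, which is far fewer than testing the whole 18..30 range plus every invalid number.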

Advantages of Boundary Value Analysis

1. Using BVA reduces the number of test cases required to cover the input domain by only testing a
few values at each boundary instead of all possible ones. This saves time and resources and makes test
cases more manageable and maintainable.

2. BVA can also help you find deviation errors, overflow errors, or boundary condition errors
that you would otherwise miss. This improves overall software quality.

3. BVA can find bugs that can affect application functionality, performance, or security, thus
improving the quality and reliability of your software.

4. BVA ensures that the software can process the input values correctly.

5. Errors can be detected early in the software development cycle. This reduces the cost of fixing
errors later.

Disadvantages of Boundary Value Analysis

1. The success of testing using this technique depends on the equivalence classes identified. This also
depends on the tester’s experience and application knowledge. Thus, incorrect identification of
equivalence classes leads to incorrect limit testing. Equivalence classes refer to a software testing
technique where input data is divided into groups or sets that are expected to exhibit similar behavior
in the software system being tested. Each group is known as an equivalence class.

2. Applications with open or missing one-dimensional boundaries are unsuitable for this technique.
Other black-box techniques, such as “domain analysis”, are used in such cases.

3. BVA may fail to identify defects that occur within the boundaries themselves. This can lead to false
negatives and prevent developers from detecting important bugs early in development.

4. While BVA may be effective at identifying potential edge cases, testing all possible boundary values in a
given system may not be practical or feasible in every case.

Equivalence Class Testing

Equivalence class testing is better known as Equivalence Class Partitioning or Equivalence
Partitioning. This is a renowned testing approach among software testing techniques in the
market; it allows the testing team to develop and partition the input data for analysis and testing,
and based on that the inputs are partitioned and divided into a number of equivalence
classes for testing.

The equivalence classes so divided perform the same operation and produce the same characteristics
or behavior for the inputs provided.

The test cases are created on the basis of the different attributes of the classes, and one input from
each class is used for the execution of test cases, validating the software functions and, moreover,
validating the working principles of the software product for the inputs given for the
respective classes.

It is also referred to as a logical step in the functional testing model approach that enhances the quality of
test classes by removing any redundancy or faults that can exist in the testing approach.

Types of Equivalence Class Testing

Weak Normal Equivalence Class Testing: This type of testing uses only a single value from each equivalence
class in the test cases. The word "weak" signifies a single-fault assumption; in each testing scenario, there is only
one element per class. The tester identifies the values in a systematic way.

Strong Normal Equivalence Class Testing: This type of testing is associated with a multiple-fault assumption;
test cases are required for each element from each equivalence class, and the testing team covers the whole
set of equivalence classes by using every possible combination of inputs.

Weak Robust Equivalence Class Testing: This type of testing assumes a single fault, as it is weak, but it also
includes invalid values, for which the expected output cannot always be defined. The testers usually spend a
large amount of time defining the expected output for such test cases. The testing team mainly focuses on
testing the test cases for invalid values.

Strong Robust Equivalence Class Testing: This form of class testing is partly redundant. Multiple fault
assumptions are present, and the equivalence classes are measured in terms of valid and invalid inputs for
test cases. However, it is not feasible for the testing team to reduce the redundancy.

Examples of Equivalence Partitioning technique

Assume that a function of a software application accepts a particular number of digits, neither
more nor fewer than that particular number. For example, an OTP number contains only six
digits; fewer or more than six digits will not be accepted, and the application will redirect the user to the
error page.
Let's see one more example.

A function of the software application accepts a 10 digit mobile number.


In both examples, we can see that there is a partition into valid and invalid partitions. On
applying a valid value, such as an OTP of six digits in the first example or a mobile number of 10 digits in
the second example, both valid partitions behave the same, i.e., the user is redirected to the next page.

The other two partitions contain invalid values, such as 5 or fewer and 7 or more digits in
the first example, and 9 or fewer and 11 or more digits in the second example; on
applying these invalid values, both invalid partitions behave the same, i.e., the user is redirected to the error page.

We can see in the examples that there are only three test cases for each example, and that is the
principle of equivalence partitioning, which states that this method is intended to reduce the number of
test cases.
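The three-partition OTP example can be written as exactly three test cases, one per partition. A minimal Python sketch (the validator `accepts_otp` is illustrative):

```python
def accepts_otp(otp: str) -> bool:
    """Valid partition from the example above: exactly six digits."""
    return len(otp) == 6 and otp.isdigit()

# One representative per partition is enough under equivalence partitioning:
assert accepts_otp("123456") is True    # valid partition: exactly 6 digits
assert accepts_otp("12345") is False    # invalid partition: fewer than 6 digits
assert accepts_otp("1234567") is False  # invalid partition: more than 6 digits
print("three partitions, three test cases")
```

The same pattern applies to the 10-digit mobile number example, again yielding three test cases.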

Advantages:

 Equivalence class testing helps reduce the number of test cases, without compromising the
test coverage.
 Reduces the overall test execution time as it minimizes the set of test data.
 It can be applied to all levels of testing, such as unit testing, integration testing, system
testing, etc.
 Enables the testers to focus on smaller data sets, which increases the probability of uncovering
more defects in the software product.
 It is used in cases where performing exhaustive testing is difficult but at the same time
maintaining good coverage is required.

Disadvantages:

 It does not consider the conditions for boundary value.


 The identification of equivalence classes relies heavily on the expertise of testers.
 Testers might assume that the output for all input data sets is correct, which can become a
great hurdle in testing.

Path Testing

Path Testing is a white-box testing technique based on a program's or module's control structure. A
control flow graph is created from this structure, and the many possible paths in the graph are then
tested.

The approach of identifying paths in the control flow graph that give a basis set of execution
paths through the program or module is known as basis path testing.
Because this testing depends on the program's control structure, it necessitates a thorough
understanding of the program's structure. Four steps are followed to create test cases using this
technique −

 Create a control flow graph.
 Calculate the graph's cyclomatic complexity.
 Identify the independent paths.
 Create test cases based on the independent paths.

Control Flow Graph

A control flow graph (or simply flow graph) is a directed graph that depicts a program's or module's
control structure. A control flow graph G = (V, E) is made up of V nodes/vertices and E edges. A
control flow graph can also include the following −

 A node with multiple arrows entering it is known as a junction node.


 A node having more than one arrow leaving it is called a decision node.
 The area enclosed by edges and nodes is referred to as a region (the area outside the graph is
also counted as a region).
Below are the notations utilized while constructing a flow graph; each of the following constructs has a standard flow-graph notation −

 Sequential statements
 If – Then – Else
 Do – While
 While – Do
 Switch – Case
Cyclomatic Complexity – The cyclomatic complexity V(G) is a measure of the logical
complexity of a program. It can be calculated using three different formulae:

Formula based on edges and nodes :

V(G) = e - n + 2*P

where e is the number of edges, n is the number of nodes, and P is the number of connected components.
For example, consider the first graph given above, where e = 4, n = 4 and P = 1. So,

Cyclomatic complexity V(G) = 4 - 4 + 2*1 = 2

Formula based on Decision Nodes :

V(G) = d + P

where d is the number of decision nodes and P is the number of connected components.
For example, consider the first graph given above, where d = 1 and P = 1. So,

Cyclomatic complexity V(G) = 1 + 1 = 2

Formula based on Regions :

V(G) = number of regions in the graph

For example, consider the first graph given above:

Cyclomatic complexity V(G) = 1 (for Region 1) + 1 (for Region 2) = 2


Hence, using all three of the above formulae, the cyclomatic complexity obtained remains the same. All
three formulae can be used to compute and verify the cyclomatic complexity of the flow graph.
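The first two formulae can be checked in code. A small Python sketch using the example values from the text (e = 4, n = 4, P = 1 and d = 1, P = 1); the function names are illustrative:

```python
def cyclomatic_complexity(num_edges: int, num_nodes: int, num_components: int = 1) -> int:
    """Edges-and-nodes formula: V(G) = e - n + 2*P."""
    return num_edges - num_nodes + 2 * num_components

def cyclomatic_from_decisions(num_decision_nodes: int, num_components: int = 1) -> int:
    """Decision-nodes formula: V(G) = d + P."""
    return num_decision_nodes + num_components

# Example values from the text: e = 4, n = 4, P = 1 and d = 1, P = 1.
print(cyclomatic_complexity(4, 4, 1))    # 2
print(cyclomatic_from_decisions(1, 1))   # 2
```

Both formulae agree on V(G) = 2 for the example graph, matching the region count as well.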
Note –

For one function [e.g. Main( ) or Factorial( )], only one flow graph is constructed. If there are multiple
functions in a program, then a separate flow graph is constructed for each one of them. Also, in
the cyclomatic complexity formula, the value of P is set depending on the number of graphs present
in total.

If a decision node has exactly two arrows leaving it, then it is counted as one decision node. However,
if there are more than 2 arrows leaving a decision node, it is computed using this formula :

d=k-1

Here, k is number of arrows leaving the decision node.

Independent Paths : An independent path in the control flow graph is one which introduces at least
one new edge that has not been traversed before the path is defined. The cyclomatic complexity gives
the number of independent paths present in a flow graph. This is because the cyclomatic complexity is
used as an upper bound for the number of tests that should be executed in order to make sure that all
the statements in the program have been executed at least once. Considering the first graph given above,
the number of independent paths would be 2, because the number of independent paths is equal to the
cyclomatic complexity. So, the independent paths in the first graph given above are:

Path 1:

A -> B

Path 2:

C -> D

Note – Independent paths are not unique. In other words, if for a graph the cyclomatic complexity
comes out to be N, then it is possible to obtain two different sets of paths which are
independent in nature.

Design Test Cases : Finally, after obtaining the independent paths, test cases
can be designed where each test case represents one or more independent paths.

Advantages : Basis path testing is applicable in the following cases:

More Coverage – Basis path testing provides the best code coverage as it aims to achieve maximum
logic coverage instead of maximum path coverage. This results in an overall thorough testing of the
code.

Maintenance Testing – When software is modified, the changes made to it must still be tested, which
in turn requires path testing.

Unit Testing – When developers write code, they first test the structure of the program or module
themselves. This is why basis path testing requires sufficient knowledge of the structure of the
code.

Integration Testing – When one module calls other modules, there are high chances of interface
errors. To catch such errors, path testing is performed to test all the paths across the interfaces
of the modules.

Testing Effort – Since the basis path testing technique takes into account the complexity of the
software (i.e., program or module) while computing the cyclomatic complexity, therefore it is
intuitive to note that testing effort in case of basis path testing is directly proportional to the
complexity of the software or program.

Data Flow Testing

Data Flow Testing is a type of structural testing. It is a method that is used to find the test paths of a
program according to the locations of definitions and uses of variables in the program. It has nothing
to do with data flow diagrams.

It is concerned with:

Statements where variables receive values,

Statements where these values are used or referenced.

To illustrate the approach of data flow testing, assume that each statement in the program is
assigned a unique statement number. For a statement numbered S:

DEF(S) = {X | statement S contains the definition of X}

USE(S) = {X | statement S contains the use of X}

If a statement is a loop or an if condition, then its DEF set is empty and its USE set is based on
the condition of statement S.
Data Flow Testing uses the control flow graph to find the situations that can interrupt the flow of the
program.

Define/reference anomalies in the flow of data are detected by examining the associations between
values and variables. These anomalies are:

 A variable is defined but not used or referenced,


 A variable is used but never defined,
 A variable is defined twice before it is used

Advantages of Data Flow Testing:

Data Flow Testing is used to find the following issues-

 To find a variable that is used but never defined,


 To find a variable that is defined but never used,
To find a variable that is defined multiple times before it is used,
 Deallocating a variable before it is used.

Disadvantages of Data Flow Testing

 Time consuming and costly process


 Requires knowledge of programming languages

Example:

1. read x, y;

2. if(x>y)

3. a = x+1

else

4. a = y-1

5. print a;

Control flow graph of above example:


Define/use of variables of above example:
Variable Defined at node Used at node

x 1 2, 3

y 1 2, 4

a 3, 4 5
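The define/use table above can be derived mechanically once the DEF and USE sets are written down. Below is a minimal Python sketch, with the sets entered by hand for the five statements of the example; the dictionary representation is an illustrative assumption, not a standard notation.

```python
# DEF(S) and USE(S) for the five-statement example above:
# 1. read x, y;  2. if(x>y)  3. a = x+1  4. a = y-1  5. print a;
DEF = {1: {"x", "y"}, 2: set(), 3: {"a"}, 4: {"a"}, 5: set()}
USE = {1: set(), 2: {"x", "y"}, 3: {"x"}, 4: {"y"}, 5: {"a"}}

def def_use_table(DEF, USE):
    """Invert per-statement DEF/USE sets into a per-variable table."""
    table = {}
    for s in DEF:
        for v in DEF[s]:
            table.setdefault(v, {"defined": [], "used": []})["defined"].append(s)
        for v in USE[s]:
            table.setdefault(v, {"defined": [], "used": []})["used"].append(s)
    return table

t = def_use_table(DEF, USE)
print(sorted(t["a"]["defined"]))  # [3, 4]
print(sorted(t["a"]["used"]))     # [5]
```

The output matches the table in the text: a is defined at nodes 3 and 4 and used at node 5.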

Test Design Preparedness Metrics

The following metrics can be used to represent the level of preparedness of test design :

1. Preparation Status of Test Cases (PST):

A test case can go through a number of phases or states, such as draft and review, before it is released
as a valid and useful test case.

Thus it is useful to periodically monitor the progress of test design by counting the test cases lying in
different states of design – create, draft, review, released and deleted.

It is expected that all the planned test cases that are created for a particular project eventually move to
the released state before the start of test execution.

2. Average Time Spent (ATS) in Test Case Design :

It is useful to know the amount of time it takes for a test case to move from its initial conception, that
is, create state, to when it is considered to be usable, that is, released state.
This metric is useful in allocating time to the test preparation activity in a subsequent test project.
Hence it is useful in test planning.

3. Number of Available Test (NAT) Cases :

This is the number of test cases in the released state from the existing projects.

Some of these test cases are selected for regression testing in the current test project.

4.Number of Planned Test (NPT) Cases :

This is the number of test cases that are in the test suite and ready for execution at the start of system
testing.

This metric is useful in scheduling test execution. As testing continues, new, unplanned test cases may
be required to be designed.

A large number of new test cases compared to NPT suggests that initial planning was not accurate.

5. Coverage of a Test Suite (CTS) :

This metric gives the fraction of all requirements covered by a selected number of test cases or a
complete test suite.
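As a small illustration of CTS, the fraction can be computed from a mapping of test cases to the requirements they cover. The requirement IDs and field names below are hypothetical, purely for the sketch.

```python
# Sketch: coverage of a test suite (CTS) as the fraction of all
# requirements covered by the selected test cases.
def coverage(requirements, test_cases):
    covered = set()
    for tc in test_cases:
        covered |= tc["covers"]
    return len(covered & set(requirements)) / len(requirements)

reqs = ["R1", "R2", "R3", "R4"]
suite = [{"id": "TC1", "covers": {"R1", "R2"}},
         {"id": "TC2", "covers": {"R2", "R3"}}]
print(coverage(reqs, suite))  # 3 of 4 requirements covered -> 0.75
```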

Test Case Design Effectiveness

The objectives of the test case design effectiveness metric are to

(i) Measure the “defect revealing ability” of the test suite and

(ii) Use the metric to improve the test design process.

During system level testing, defects are revealed due to the execution of planned test cases.

In addition to these defects, new defects are found during testing for which no test cases have been
planned.

For these new defects, new test cases are designed; these are called test case escapes (TCE). Test
escapes occur because of deficiencies in the test design process: the test engineers get new ideas
while executing the planned test cases.

A metric commonly used in the industry to measure test case design effectiveness is the test case
design yield (TCDY), defined as
TCDY = (NPT / (NPT + TCE)) * 100%

where NPT is the number of planned test cases and TCE is the number of test case escapes.

The TCDY is also used to measure the effectiveness of a particular testing phase.
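The TCDY formula can be sketched directly; the counts used below are hypothetical example numbers.

```python
# Sketch: test case design yield (TCDY) as defined above.
def tcdy(planned, escaped):
    """planned = NPT (planned test cases), escaped = TCE (test case escapes)."""
    return planned / (planned + escaped) * 100

# Hypothetical figures: 180 planned test cases, 20 escapes during execution.
print(tcdy(180, 20))  # 90.0 (percent)
```

A falling TCDY across projects would indicate that more and more defects are being found outside the planned test cases, pointing at gaps in the test design process.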

Model-Driven Test Design (MDTD)

MDTD is built on the idea that designers will become more effective and efficient if they can raise
the level of abstraction. This approach breaks testing down into a series of small tasks that
simplify test generation. Test designers then isolate their tasks and work at a higher level of
abstraction, using mathematical engineering structures to design test values independently of the
details of the software or design artifacts, test automation, and test execution.

Different phases in MDTD

MDTD can be done in four different phases. Each type of activity requires different skills,
background knowledge, education, and training, so it is better to use different sets of people
depending on the situation.

Test Design — This can be done either criteria-based, where test values are designed to satisfy
coverage criteria or other engineering goals, or human-based, where test values are designed from
domain knowledge of the program and human knowledge of testing, which is comparatively harder. This
is the most technical part of the MDTD process, so it is better to use experienced developers in
this phase.

Test Automation — This involves embedding test values into scripts. Test cases are defined based on
the test requirements, and test values are chosen so that a larger part of the application can be
covered with fewer test cases. Not much domain knowledge is needed in this phase; however,
technically skilled people are required.

Test Execution — The test engineer runs the tests and records the results in this activity. Unlike
the previous activities, test execution does not require a high skill set in terms of technical
knowledge, logical thinking, or domain knowledge. Since this phase is considered comparatively low
risk, junior or intern engineers can be assigned to execute the process, with a focus on monitoring
and log-collecting activities based on automation tools.

Test Evaluation — The process of evaluating the results and reporting to developers. This phase is
comparatively harder, and the evaluator is expected to have knowledge of the domain, testing, user
interfaces, and psychology.

The diagrams below show the steps and activities involved in MDTD.
Steps in MDTD

Activities in MDTD
The test automation process makes regression testing quicker than manual testing, and it also avoids
the chance of missing previously passed test cases in the current testing cycle. MDTD defines a
simple framework to automate the testing process in a structured manner. This is a brief
introduction intended to give a rough idea of MDTD.

Test Procedures

The steps of the test procedure are explained below.

Step-1: Assess Development Plan and Status –

This initiative may be a prerequisite to putting together the Verification, Validation, and Testing
Plan used to evaluate the implemented software solution. During this step, testers challenge the
completeness and correctness of the development plan. Based on the extensiveness and completeness of
the Project Plan, testers can estimate the quantity of resources they will need to test the
implemented software solution.

Step-2: Develop the Test Plan –

Forming a plan for testing follows the same pattern as any software planning process. The structure
of all plans should be the same, but the content will vary based on the degree of risk testers
perceive as being associated with the software being developed.

Step-3: Test Software Requirements –

Incomplete, inaccurate, or inconsistent requirements cause most software failures. The inability to
get requirements right during the requirements-gathering phase can also increase the cost of
implementation significantly. Testers, through verification, must determine that the requirements
are accurate and complete and that they do not conflict with one another.

Step-4: Test Software Design –

This step tests both external and internal design, primarily through verification techniques. The
testers are concerned with whether the design will achieve the objectives of the requirements, as
well as with the design being effective and efficient on the designated hardware.

Step-5: Build Phase Testing –

The method chosen to build the software from the internal design document will determine the type
and extensiveness of testing needed. As construction becomes more automated, less testing will be
required during this phase. However, if the software is built using a waterfall process, it is
subject to error and should be verified. Experience has shown that it is significantly cheaper to
identify defects during the development phase than through dynamic testing during the test execution
step.
Step-6: Execute and Record Result –

This involves testing the code in its dynamic state. The approach, methods, and tools laid out in
the test plan will be used to validate that the executable code actually meets the stated software
requirements and the structural specifications of the design.

Step-7: Acceptance Test –

Acceptance testing enables users to gauge the applicability and usefulness of the software in
performing their day-to-day job functions. It tests what the user believes the software should do,
as opposed to what the documented requirements state it should do.

Step-8: Report Test Results –

Test reporting is a continuous process. It may be both oral and written. It is important that
defects and concerns be reported to the appropriate parties as early as possible, so that
corrections can be made at the lowest possible cost.

Step-9: The Software Installation –

Once the test team has confirmed that the software is ready for production use, the ability to
execute the software in the production environment should be tested. This tests the interface to
operating software, related software, and operating procedures.

Step-10: Test Software Changes –

While this is shown as Step 10, in the context of performing maintenance after the software is
implemented, the concept is also applicable to changes throughout the implementation process.
Whenever the requirements change, the test plan must change, and the impact of that change on the
software systems must be tested and evaluated.

Step-11: Evaluate Test Effectiveness –

Testing improvement can best be achieved by evaluating the effectiveness of testing at the end of
every software test assignment. While this assessment is primarily performed by testers, it should
involve developers, users of the software, and quality assurance professionals if that function
exists within the IT organization.
Test Case Organization and Tracking

One consideration that you should take into account when creating the test case documentation is how
the information will be organized and tracked. Think about the questions that a tester or the test team
should be able to answer:

Which test cases do you plan to run?

How many test cases do you plan to run? How long will it take to run them?

Can you pick and choose test suites (groups of related test cases) to run on particular features or areas
of the software?

When you run the cases, will you be able to record which ones pass and which ones fail?

Of the ones that failed, which ones also failed the last time you ran them?

What percentage of the cases passed the last time you ran them?

These are examples of important questions that might be asked over the course of a typical project.
Some sort of process needs to be in place that allows you to manage your test cases and track the
results of running them. There are essentially four possible systems:

In your head. Don't even consider this one, even for the simplest projects, unless you're testing software
for your own personal use and have no reason to track your testing. You just can't do it.

Paper/documents. It's possible to manage the test cases for very small projects on paper. Tables and
charts of checklists have been used effectively. They're obviously a weak method for organizing and
searching the data, but they do offer one very important positive: a written checklist that includes
a tester's initials or signature denoting that tests were run is excellent proof in a court of law
that testing was performed.

Spreadsheet. A popular and very workable method of tracking test cases is to use a spreadsheet. By
keeping all the details of the test cases in one place, a spreadsheet can provide an at-a-glance
view of your testing status. Spreadsheets are easy to use, relatively easy to set up, and provide
good tracking and proof of testing.
Test case management tool. The ideal method for tracking test cases is to use a test case management
tool: a database programmed specifically to handle test cases. Many commercially available
applications are set up to perform just this specific task.

The important thing to remember is that the number of test cases can easily be in the thousands and
without a means to manage them, you and the other testers could quickly be lost in a sea of
documentation. You need to know, at a glance, the answer to fundamental questions such as, "What
will I be testing tomorrow, and how many test cases will I need to run?"
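The questions above can be answered from even a very simple tracking structure. The sketch below models spreadsheet-style tracking in Python; the field names and case IDs are hypothetical, and a real test case management tool would store far more detail.

```python
# Sketch: minimal spreadsheet-style test-case tracking (hypothetical fields).
cases = [
    {"id": "TC-001", "suite": "login",  "last_result": "pass", "result": "pass"},
    {"id": "TC-002", "suite": "login",  "last_result": "fail", "result": "fail"},
    {"id": "TC-003", "suite": "report", "last_result": "pass", "result": "fail"},
]

planned = len(cases)                                       # how many cases to run
passed = [c for c in cases if c["result"] == "pass"]       # which ones passed
failed_again = [c["id"] for c in cases                     # failed now AND last run
                if c["result"] == "fail" and c["last_result"] == "fail"]

print(planned)        # 3
print(len(passed))    # 1
print(failed_again)   # ['TC-002']
```

Grouping by the `suite` field would likewise answer the question of picking test suites for particular features.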

Bug Reporting

As a tester, our basic aim is to uncover bugs. Whenever a new build is handed to the testers for
testing, the primary objective of each and every tester is to find as many bugs as possible, from
every corner of the application, so as to make the application bug-free. To accomplish this task, we
perform many different testing techniques: GUI checks, verification checks, integration tests,
functional checks, security checks, and many more, which make us drill deep into the application.

It is a well-known fact that bug awareness is of no use unless it is well documented, which is where
bug reports come in. Bug reports play an important role at various stages of the Software
Development Life Cycle (SDLC), as they are referenced by testers, developers, and managers, which
makes them important artifacts.
Once the bugs are reported by the testers and submitted to the developers to work on, a cold war may
arise between the two sides regarding the reported bug. The best tester is not the one who finds the
most bugs, but the one who gets the most bugs fixed.

Bug Reporting: An Art

The first aim of the bug report is to let the programmer know where and how the code failed in a
module of an application. The bug report gives the programmer a detailed description and steps so
that the programmer can verify whether the bug is valid or invalid. If the development team is not
able to replicate the reported bug using the bug report, the report may come back from the
development team marked as not a bug, cannot reproduce, or with other reasons.

Hence it is very important that the bug report be prepared by the testers with utmost proficiency.
It should describe the following three things:

What we did

 Module, Page/Window names, which we navigate to.


 All the test data entered and selected.
 Buttons and the proper order of clicking.

What we saw

 GUI flaws.
 Missing or no validation messages.
 Incorrect Messages.
 Incorrect navigation pages.

What we expected to see

 GUI flaws: Give proper screenshots with highlight.


 Correct language or message.
 Correct validation messages.
 Mention the actual navigation page.
Some tips for effective bug reporting

Bug description should be clearly identifiable

A bug description is a short summary that briefly states what exactly the problem is. Producing it
may take a few attempts, but it should be clear and meaningful.

Bug should be reported after building a proper context

Pre-conditions for reproducing the bug should be properly defined in the bug report, so as to reach the
exact point, where the bug can be reproduced.

Steps should be clear with short and meaningful sentences

Nobody wants to study an entire paragraph or a long sentence. The sentences should be short, clear,
and easy to understand. Make your report step-wise by numbering the steps (1, 2, 3, ...).

Cite examples wherever necessary

Instead of writing an ambiguous statement like "enter invalid data", try to mention the real
data/value entered. For example: enter the phone number as 12365478!A.
Give references to specifications

It is always suggested to mention the section and page number for reference whenever a bug arises
that contradicts the SRS or any functional document.

Avoid passing any kind of judgment in the bug description

A tester should always be polite and should not be judgmental, so as to keep the bug report
objective and meaningful and to avoid controversies between testers and developers.

Assign severity and priority

It is the role of the tester to assign the proper severity and priority to the bugs. Severity is the state or
quality of being severe, which defines the importance of the bug from the functional point of view.
Priority defines when and how fast the bug should be fixed.

Severity Levels can be:

 Show Stopper
 High
 Medium
 Low

Priority Levels can be:

 High
 Medium
 Low

Provide screenshots

This is the best approach to make everyone understand the bug more clearly. For any error found,
Server error, GUI issue, message prompts or any other, screenshots should be saved and attached in the
bug report.

Bug Life Cycle

A defect is an error or bug in an application that is introduced during the design or building of
software and that causes the software to show abnormal behavior during use. It is therefore one of
the important responsibilities of the tester to find as many defects as possible, to ensure that the
quality of the product is not affected, that the end product perfectly fulfils all the requirements
for which it has been designed, and that it provides the required services to the end user.
What is Defect Life Cycle?

The Defect Life Cycle is the life cycle of a defect or bug, covering the specific set of states the
defect goes through in its entire life. The life cycle spans the entire sequence of states, starting
from a newly detected defect through to the closing of that defect by the tester. It is also called
the Bug Life Cycle.

1. New: When any new defect is identified by the tester, it falls into the 'New' state. It is the
first state of the Bug Life Cycle. The tester provides a proper defect document to the development
team so that they can refer to it and fix the bug accordingly.

2. Assigned: A defect in the 'New' state is approved and assigned to the development team to work on
and resolve. When the defect is assigned to the developer team, the status of the bug changes to
'Assigned'.

3. Open: In the 'Open' state, the developer team is addressing the defect and working on a fix. If,
for some specific reason, the developer team feels that the defect is not appropriate, it is
transferred to either the 'Rejected' or the 'Deferred' state.

4. Fixed: After making the necessary code changes and fixing the identified bug, the developer team
marks the state as 'Fixed'.
5. Pending Retest: Once the fixing of the defect is completed, the developer team passes the new
code to the testing team for retesting. While the code/application is pending retest on the tester's
side, the status is set to 'Pending Retest'.

6. Retest: At this stage, the tester starts retesting the defect to check whether it has been fixed
by the developer, and the status is marked as 'Retesting'.

7. Reopen: If, after 'Retesting', the test team finds that the bug persists even though the
developer team has fixed it, the status of the bug is changed back to 'Reopened'. The bug goes to
the 'Open' state once again and passes through the life cycle again, i.e., it goes for re-fixing by
the developer team.

8. Verified: The tester re-tests the bug after it has been fixed by the developer team, and if the
tester does not find any defect, the bug is considered fixed and the status 'Verified' is assigned.

9. Closed: This is the final state of the defect life cycle. After the developer team fixes the
defect, if testing confirms that the bug has been resolved and does not persist, the defect is
marked as 'Closed'.
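The states above form a small state machine, and the allowed transitions can be sketched as a transition table. This is an illustrative sketch of the cycle as described here; real bug trackers define their own states and transitions.

```python
# Sketch: the defect life cycle as a state-transition table, following the
# nine states described above (plus the Rejected/Deferred side states).
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Deferred"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Reopened": {"Open"},          # the bug goes through the cycle again
    "Verified": {"Closed"},
    "Closed": set(), "Rejected": set(), "Deferred": set(),
}

def advance(state, new_state):
    """Move a defect to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Happy path: a defect travels from New all the way to Closed.
s = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending Retest",
            "Retest", "Verified", "Closed"]:
    s = advance(s, nxt)
print(s)  # Closed
```

Encoding the life cycle this way makes illegal status changes (e.g., jumping straight from 'New' to 'Closed') fail loudly instead of silently corrupting the defect record.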

Benefits of Defect Lifecycle

 Deliver High-Quality Product


 Improve Return on Investment (ROI) by Reducing the Cost of Development
 Better Communication, Teamwork, and Connectivity
 Detect Issues Earlier and Understand Defect Trends
 Better Service and Customer Satisfaction

Limitations in Defect Lifecycle

 Variations of the Bug Life Cycle


 No Control on Test Environment
