ST Unit - 3
Given this approach, some of the issues that remain open are the following:
Are the three values adequate to show that the module meets its specification
when the tests are run? Should additional or fewer values be used to make the
most effective use of resources?
Are there any input values, other than those selected, more
likely to reveal defects? For example, should positive
integers at the beginning or end of the domain be
specifically selected as inputs?
Should any values outside the valid domain be used as test
inputs? For example, should test data include floating point
values, negative values, or integer values greater than 100?
Use of random test inputs may save some time and effort,
but selecting test inputs randomly has very little chance of
producing an effective set of test data.
Random Testing Steps:
Random inputs are identified to be evaluated against the
system.
Test inputs are selected independently from the test domain.
Tests are executed using those random inputs.
Record the results and compare them against the expected
outcomes.
Reproduce/replicate the issue, raise defects, fix and
retest.
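As a rough sketch of these steps in Python, the fragment below generates random integer inputs for a hypothetical compute_discount function and compares its output against an oracle derived from an assumed specification. The function, the input range 0-200, and the deliberate boundary defect are all assumptions for illustration; the defect at the boundary value 100 may or may not be hit, which echoes the earlier question about selecting values at the edges of the domain.

import random

def compute_discount(amount):
    # Hypothetical implementation under test (assumed), with a deliberate defect at the boundary.
    return amount * 0.9 if amount >= 100 else amount

def specified_discount(amount):
    # Oracle from an assumed specification: only amounts strictly above 100 get 10% off.
    return amount * 0.9 if amount > 100 else amount

def random_test(runs=1000, seed=42):
    random.seed(seed)                              # fixed seed so any failure can be reproduced
    failures = []
    for _ in range(runs):
        value = random.randint(0, 200)             # select an input independently from the domain
        actual = compute_discount(value)           # execute the test
        expected = specified_discount(value)       # expected outcome from the specification
        if actual != expected:                     # record results and compare
            failures.append((value, expected, actual))
    return failures

for value, expected, actual in random_test():
    print(f"input={value} expected={expected} actual={actual}")

The fixed seed is there so that any failing input can be reproduced and retested, as the last step above requires.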
Random testing is a testing technique in which programs are
tested by generating random, independent inputs. The
generated outputs are compared with the software
specification to verify whether the results are correct. Random
testing has some strengths and weaknesses.
The strengths of random testing are:
It is inexpensive to use
It does not have any bias
The bugs are found very easily and quickly
If the technique is used properly, it will find bugs.
The weaknesses of this testing are:
It is capable of finding only basic bugs
It is imprecise when specifications are imprecise.
This technique compares poorly with other techniques
in finding bugs.
This technique will create a problem for continuous
integration if different inputs are randomly selected on
each test.
Some think that white box testing is better than this
random testing technique
Random Testing Characteristics
It is performed when defects in a software application are
not identified at regular intervals.
Random input is used to test the system performance and
its reliability.
It saves time and effort compared to actual tests.
Other testing methods are not used.
A common example of random testing is the use of
random integers to test a software function that returns
results based on those integers, specifically when
dealing with integers or other types of variables. The
testing is random only in the set of inputs that is used;
in other words, testers are bound to choose a finite set of
integers rather than the infinite set.
Requirements Based Testing
What is Requirements Based Testing?
The process of requirements based testing deals
with validating whether the requirements are
complete, consistent, unambiguous, and
logically connected. With such requirements, we can
proceed to develop test cases to ensure that the test
cases fulfil all the requirements.
Testing in this technique revolves around
requirements. The strategy of requirements based
testing is to integrate testing throughout the life
cycle of the software development process, to
assure the quality of the requirements specification. The
aim is defect prevention rather than defect detection.
Taking requirements based testing into account,
testing is divided into the following activities:
Stages in Requirements based Testing:
Define Test Completion Criteria: Testing should be defined in
quantifiable terms. The goal is considered to be achieved only when
test coverage is 100% (a small sketch of this idea follows this list).
Design Test Cases: Test cases must be in accordance with the
requirements specification.
Build Test Cases: Join the logical parts together to form/build test
cases.
Execute Test Cases: Execute the test cases to evaluate the results.
Verify Test Results: Check whether the actual results deviate from the
expected ones.
Manage Test Library: The test manager is responsible for monitoring
test case execution, that is, which tests passed or failed, and for ascertaining
whether all tests have been successfully performed.
Track and Manage Defects: Any defect detected during the testing
process goes through the defect life cycle and is tracked to
resolution. Defect statistics are maintained, which give the
overall status of the project.
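As a minimal sketch of the completion criterion and test-library bookkeeping described above, the Python fragment below maps test cases to the requirements they cover and reports whether the 100% coverage goal is met. The requirement IDs, test case IDs, and statuses are all hypothetical.

# Hypothetical requirement-to-test-case traceability matrix.
requirements = ["REQ-01", "REQ-02", "REQ-03"]
test_cases = {
    "TC-001": {"covers": ["REQ-01"], "status": "passed"},
    "TC-002": {"covers": ["REQ-02"], "status": "failed"},
}

covered = {req for tc in test_cases.values() for req in tc["covers"]}
uncovered = [req for req in requirements if req not in covered]

coverage = 100 * len(covered) / len(requirements)
print(f"Requirement coverage: {coverage:.0f}%")        # completion criterion is 100%
print("Uncovered requirements:", uncovered)            # here: ['REQ-03']
print("Failed tests:", [tc for tc, info in test_cases.items()
                        if info["status"] == "failed"])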
Why Requirements are Critical:
Various studies have shown that software projects fail due to
the following reasons:
Incomplete requirements and specifications.
Frequent changes in requirements and specifications.
Lack of user input to requirements.
So the requirements based testing process addresses each of
the above issues as follows:
The requirements based testing process starts at a very
early phase of software development, as correcting
issues/errors is easier at this phase.
It begins at the requirements phase, as the
occurrence of bugs has its roots here.
It aims at improving the quality of requirements, since insufficient
requirements lead to failed projects.
Requirements Testing process:
Testing must be carried out in a timely manner.
Testing process should add value to the software life cycle,
hence it needs to be effective.
Testing the system exhaustively is impossible hence the
testing process needs to be efficient as well.
Testing must provide the overall status of the project,
hence it should be manageable.
Cause-and-effect graphing
Cause Effect Graph is a black box
testing technique that aids in choosing test cases
by logically relating causes (inputs) to effects
(outputs). The cause-and-effect diagram is also known as
the Ishikawa diagram, as it was invented by Kaoru Ishikawa, or the
fishbone diagram because of the way it looks.
A cause-effect graph underlines the relationship
between a given result and all the factors affecting that
result. It is used to write dynamic test cases.
A “Cause” stands for a separate input condition that
brings about an internal change in the system. An
“Effect” represents an output condition, a system
transformation, or a state resulting from a combination of
causes.
Dynamic test cases are used when the code works
dynamically based on user input. For example, when using
an email account, on entering a valid email the system accepts it,
but when you enter an invalid email it throws an error
message. In this technique, the input conditions are
assigned to causes and the results of these input
conditions to effects.
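The email scenario above can be expressed as a single cause and effect. The sketch below is a simplified Python illustration; the validation rule shown is an assumption for the example, not a real email specification.

def accept_email(address):
    # Cause: the address contains '@' and a dot in the domain part (assumed rule).
    # Effect: the system accepts it; otherwise it throws an error message.
    cause = "@" in address and "." in address.split("@")[-1]
    if cause:
        return "accepted"
    raise ValueError("Invalid email address")

print(accept_email("user@example.com"))    # effect: accepted
try:
    accept_email("user.example.com")
except ValueError as err:
    print(err)                             # effect: error message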
The Cause-Effect graph technique is based on a collection of
requirements and is used to determine the minimum possible number
of test cases that can cover the maximum test area of the software.
The main advantage of cause-effect graph testing is that it
reduces test execution time and cost.
The Cause-Effect Diagram can be used in these
circumstances:
1. To determine the current problem so that the right decision can
be taken quickly.
2. To relate the interactions of the system with the factors
affecting a particular process or effect.
3. To recognize the probable root causes, i.e. the cause of a specific
effect, problem, or outcome.
Benefits of making a Cause-Effect Diagram
It identifies the areas where data should be collected for further
study.
It encourages team participation and makes use of the team's
knowledge of the process.
It uses an orderly, easy-to-read format to diagram
cause-and-effect relationships.
It points out probable causes of variation in a process.
It increases knowledge of the process by helping everyone
learn more about the factors at work and how they relate.
It helps us determine the root causes of a problem or
quality characteristic using a structured approach.
This technique aims to reduce the number of test cases but
still covers all necessary test cases with maximum coverage to
achieve the desired application quality.
Cause-Effect graph technique converts the requirements
specification into a logical relationship between the input and
output conditions by using logical operators like AND, OR and
NOT.
Assume that each node has the value 0 or 1, where
0 shows the ‘absent’ state and 1 shows the ‘present’
state. The identity function states that if c1 = 1 then e1 = 1,
and if c1 = 0 then e1 = 0.
The NOT function states that if c1 = 1 then e1 = 0, and vice
versa. Likewise, the OR function states that if c1 or c2 or c3
= 1 then e1 = 1, else e1 = 0. The AND function states that if
both c1 and c2 = 1 then e1 = 1, else e1 = 0.
The AND and OR functions are permitted to have any
number of inputs.
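As a minimal sketch, assuming just two causes c1 and c2 and one effect per operator, the Python fragment below encodes these functions and enumerates every cause combination, which is essentially the decision table from which test cases are derived.

from itertools import product

# Each cause is 0 (absent) or 1 (present); each effect is defined by a logical operator.
def identity(c1):   return c1
def not_(c1):       return 0 if c1 else 1
def or_(*causes):   return 1 if any(causes) else 0
def and_(*causes):  return 1 if all(causes) else 0

# Decision table for two causes: every combination is a candidate test case.
print("c1 c2 | NOT(c1) OR AND")
for c1, c2 in product((0, 1), repeat=2):
    print(f" {c1}  {c2} |    {not_(c1)}     {or_(c1, c2)}   {and_(c1, c2)}")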
Compatibility Testing
What is Compatibility?
Compatibility is nothing but the capability of existing or
living together. In everyday life, oil is not compatible with
water, but milk can easily be combined with water.
What is Compatibility Testing?
Compatibility Testing is a type of Software testing to
check whether your software is capable of running on
different hardware, operating systems, applications,
network environments or Mobile devices.
Compatibility testing is a non-functional testing
conducted on the application to evaluate the
application's compatibility within different
environments. It can be of two types - forward
compatibility testing and backward compatibility
testing.
Operating system Compatibility Testing - Linux , Mac
OS, Windows
Database Compatibility Testing - Oracle, SQL Server
Browser Compatibility Testing - IE , Chrome, Firefox
Other System Software - Web server, networking/
messaging tool, etc.
What is Compatibility testing?
Compatibility testing is a non-functional testing method
primarily done to ensure customer satisfaction. This
testing process will ensure that the software is
compatible across operating systems, hardware
platforms, web browsers, etc.
The testing also works as validation for compatibility
requirements that have been set at the planning stage of
the software.
The process helps in developing software that has the
ability to work seamlessly across platforms and
hardware without any trouble.
In today’s competitive world, it is important that the
software or products released to buyers reflect true
value for the amount they pay to buy or use the product.
Thorough testing of the products helps create quality
products that provide value for money. Various software
tests are performed at different stages of software
development and testing is also conducted on the finished
product, prior to its release.
This testing is done to ensure a competitive edge in terms
of quality, compatibility, cost, and delivery for the end
product before it is delivered.
Compatibility testing helps ensure complete customer
satisfaction as it checks whether the application performs
or operates as expected for all the intended users across
multiple platforms.
This non-functional testing is performed to ensure
compatibility of a system, application, or website built with
various other objects such as other web browsers,
databases, hardware platforms, users, operating systems,
mobile devices & networks etc.
It is conducted on the application to evaluate the
application’s compatibility with different environments. It
can be performed either through automation tools or it can
be conducted manually.
Need for Compatibility Testing:
Software applications released should be of high quality
and compatible with all hardware, software, operating systems,
platforms, etc.,
which is achieved by opting for compatibility testing.
Compatibility can be ensured by adopting
compatibility testing, which detects any errors before
the product is delivered to the end user.
This testing establishes or confirms that the product
meets all the requirements set and agreed upon by both
the developer and the end user.
This stable or quality product in turn improves the
reputation of the firm and propels the company to
success. It is also true that quality products improve sales
and marketing efforts and bring delight to the customer.
Moreover, an efficient compatibility test effort ensures real
compatibility among different computing environments.
In addition, truly dynamic compatibility testing also confirms
the workability and stability of the software, which is of much
importance before its release.
Types of Compatibility Testing
#1) Forward Compatibility Testing: This type of testing
verifies that the software is compatible with newer or
upcoming versions, and is thus named forward
compatible.
#2) Backward Compatibility Testing: This checks whether a
mobile app developed for the latest version of
an environment also works perfectly with the older
version. The behaviour of the new hardware/software
is matched against the behaviour of the old
hardware/software.
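One way to think about a backward compatibility check is to run the same inputs against the old and the new version and compare their behaviour. The sketch below does this for two hypothetical versions of a parsing function; the function names and the legacy inputs are assumptions for illustration only.

def parse_amount_v1(text):
    # Old version (hypothetical): accepts plain integers only.
    return int(text)

def parse_amount_v2(text):
    # New version (hypothetical): also accepts thousands separators.
    return int(text.replace(",", ""))

# Backward compatibility: inputs the old version accepted must behave the same.
legacy_inputs = ["10", "250", "999"]
for text in legacy_inputs:
    assert parse_amount_v2(text) == parse_amount_v1(text), f"regression for {text!r}"
print("New version matches old behaviour on legacy inputs")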
Compatibility testing can be performed on operating
systems, databases, system software, browsers, and mobile
applications. Mobile app testing is performed across
various platforms, devices, and networks.
Process of Compatibility Testing
The compatibility test is conducted under different hardware
and software application conditions, where the computing
environment is important, as the software product created
must work in a real-time environment without any errors or
bugs.
Some of the main computing environments are the operating
systems, hardware peripherals, browsers, database content,
computing capacity, and other related system software if any.
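In practice the environments listed above are often organised as a test matrix. The following Python sketch, with assumed environment names, simply enumerates the combinations a compatibility test run would have to cover; real projects derive this matrix from the compatibility requirements agreed at the planning stage.

from itertools import product

# Assumed environment matrix for illustration.
operating_systems = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox", "Edge"]
databases = ["Oracle", "SQL Server"]

combinations = list(product(operating_systems, browsers, databases))
print(f"{len(combinations)} environment combinations to cover")
for os_name, browser, db in combinations:
    print(f"- {os_name} / {browser} / {db}")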
User Documentation Testing
Documentation for Software testing helps in
estimating the testing effort required, test coverage,
requirement tracking/tracing, etc. This section
includes the description of some commonly used
documented artifacts related to Software
development and testing, such as:
Test Plan
Requirements
Test Cases
USER DOCUMENTATION TESTING