Quality Control ::
Dynamic Testing Techniques
Levels of Testing
Type of testing            Performed by
o Unit (module) testing    Programmer
o Integration testing      Development team
o Function testing         Independent test group
o System testing           Independent test group
o Acceptance testing       Customer
Dynamic Testing
Dynamic testing is a software testing method used to examine
the dynamic behaviour of software code.
Its main purpose is to test software behaviour with dynamic
variables, i.e. variables that are not constant, and to find
weak areas in the software's runtime environment.
The code must be executed in order to test this dynamic
behaviour.
Dynamic Testing
The main aim of dynamic tests is to ensure that the software
works properly during and after installation, giving a stable
application without any major flaws.
Dynamic testing also helps ensure the consistency of the
software.
Types of Dynamic Testing
Dynamic Testing is classified into two categories.
White Box Testing
Black Box Testing
White Box Testing
White box testing is a detailed examination of the internal
structure and logic of the code.
In white box testing, you create test cases by looking at the
code to detect any potential failure scenarios.
White box testing is also known as glass box testing or open
box testing.
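A minimal sketch, assuming a hypothetical classify_triangle
function: the test cases are derived by reading the code so
that every branch is exercised.

```python
# Hypothetical function under test; its internal branches drive test design.
def classify_triangle(a: int, b: int, c: int) -> str:
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box test cases: one per branch, chosen by inspecting the code.
assert classify_triangle(1, 2, 10) == "not a triangle"  # first guard true
assert classify_triangle(3, 3, 3) == "equilateral"      # second branch
assert classify_triangle(3, 3, 5) == "isosceles"        # third branch
assert classify_triangle(3, 4, 5) == "scalene"          # fall-through
```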
Black Box Testing
A method that examines the functionality of an application
without looking at its internal structure.
The tester never examines the programming code and needs no
knowledge of the program beyond its software requirements
specification (SRS).
You can begin planning for black box testing soon after
the requirements and the functional specifications are
available.
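A minimal sketch, assuming a hypothetical username rule as the
specification; the stand-in implementation exists only so the
example runs, since a black box tester would never see it.

```python
import re

# Stand-in implementation so the sketch runs; in real black-box testing
# the tester sees only the specification quoted in the docstring.
def is_valid_username(name: str) -> bool:
    """Spec: accept 3-20 alphanumeric characters, reject everything else."""
    return re.fullmatch(r"[A-Za-z0-9]{3,20}", name) is not None

# Test cases derived purely from the specification, not the code.
assert is_valid_username("bob")            # minimum legal length
assert not is_valid_username("ab")         # below minimum
assert is_valid_username("a" * 20)         # maximum legal length
assert not is_valid_username("a" * 21)     # above maximum
assert not is_valid_username("bad name!")  # illegal characters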
Black Box Testing
Black box testing is classified into two types.
Functional Testing
Non-Functional Testing
Functional Testing
Functional testing is a process of attempting to find
discrepancies between the program and its external
specification.
It is performed by executing the functional test cases written
by the QA team.
The system is tested by providing input, verifying the output,
and comparing the actual results with the expected results.
Test cases are derived from the system's functional
specification.
All black box test-case design methods apply: equivalence
partitioning, boundary-value analysis, cause-effect graphing,
and error guessing.
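A minimal sketch of the first two methods, assuming a
hypothetical discount specification (0% below 100, 5% from 100
to 499, 10% from 500 up); the implementation is a stand-in so
the example runs.

```python
# Stand-in for the system under test, per the assumed specification.
def discount_rate(total: float) -> float:
    if total < 100:
        return 0.0
    if total < 500:
        return 0.05
    return 0.10

# Equivalence partitioning: one representative value per partition.
assert discount_rate(50) == 0.0
assert discount_rate(250) == 0.05
assert discount_rate(900) == 0.10

# Boundary-value analysis: values on and adjacent to each boundary.
assert discount_rate(99) == 0.0
assert discount_rate(100) == 0.05
assert discount_rate(499) == 0.05
assert discount_rate(500) == 0.10
```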
Levels of Functional Testing
Unit Testing – A unit is a small, testable piece of code. Unit
testing is performed on individual units of the software, by
developers.
Integration Testing – Integration testing is performed after
unit testing, by combining the individually tested units; it is
performed by either developers or testers (both levels are
sketched below).
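A minimal sketch of both levels using Python's unittest;
parse_price and total are hypothetical units.

```python
import unittest

# Hypothetical units under test.
def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$"))

def total(prices: list[str]) -> float:
    return sum(parse_price(p) for p in prices)

class UnitTests(unittest.TestCase):
    # Unit test: one small testable unit, exercised in isolation.
    def test_parse_price(self):
        self.assertEqual(parse_price(" $4.50 "), 4.50)

class IntegrationTests(unittest.TestCase):
    # Integration test: the combined units working together.
    def test_total_combines_parsing_and_summing(self):
        self.assertEqual(total(["$1.00", "$2.50"]), 3.50)

if __name__ == "__main__":
    unittest.main()
```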
Levels of Functional Testing
System Testing – System testing is performed to ensure that
the system behaves as per the requirements. It is generally
performed when the complete system is ready, by testers, once
the build or code is released to the QA team.
Acceptance Testing – Acceptance testing is performed to verify
whether the system has met the business requirements and is
ready for use or deployment; it is generally performed by the
end user.
Non-Functional Testing
Non-functional testing is a technique which does not focus on
functional aspects; it concentrates on the non-functional
attributes of the system, such as memory leaks, performance,
or robustness.
Non-functional testing is performed at all test levels.
Non-Functional Testing Techniques
Volume testing
To determine whether the program can handle the volumes of
data and requests specified in its objectives.
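A minimal volume-test sketch; the one-million-record
requirement and the load_records stand-in are assumptions for
illustration.

```python
import csv
import io
import time

REQUIRED_RECORDS = 1_000_000  # assumed figure from the system's objectives

def load_records(stream):
    # Stand-in for the component under test: consume every record.
    return sum(1 for _ in csv.reader(stream))

# Generate the required volume of input and feed it through.
rows = io.StringIO("".join(f"id{i},value{i}\n" for i in range(REQUIRED_RECORDS)))
start = time.perf_counter()
count = load_records(rows)
assert count == REQUIRED_RECORDS, "failed to handle the required volume"
print(f"processed {count} rows in {time.perf_counter() - start:.1f}s")
```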
Usability (human factors) testing
To find human-factor, or usability, problems.
Usability testing verifies the usability of the system with its
end users, i.e. how comfortable the users are with the system.
Non-Functional Testing Techniques
Performance testing
Many programs have specific performance or efficiency
objectives such as response times and throughput
rates.
Performance testing checks whether the response time of the
system is acceptable, as per the requirements, under the
desired network load.
Since the purpose of a system test is to demonstrate
that the program does not meet its objectives, test
cases must be designed to show that the program does
not satisfy its performance objectives.
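A minimal sketch of such a response-time check; the 200 ms p95
objective, the sample size, and the handle_request stub are
assumptions for illustration.

```python
import statistics
import time

# Stand-in for the real request handler under test.
def handle_request():
    time.sleep(0.01)

# Measure response times over a batch of requests (in milliseconds).
samples = []
for _ in range(100):
    start = time.perf_counter()
    handle_request()
    samples.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
# Designed to fail loudly whenever the measured p95 misses the objective.
assert p95 <= 200, f"p95 response time {p95:.1f} ms exceeds 200 ms objective"
```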
Non-Functional Testing Techniques
Load/stress testing
To identify the peak load conditions under which the program
will fail.
This should not be confused with volume testing:
a heavy stress is a peak volume of data or activity
encountered over a short span of time.
For example, when evaluating a typist:
A volume test would determine whether the typist could
cope with a draft of a large report;
A stress test would determine whether the typist could
type at a rate of 50 words per minute.
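A minimal stress-test sketch: a burst of concurrent requests
(peak activity over a short span) is fired at a stand-in
handler; the burst size, worker count, and handle_request stub
are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> bool:
    return True  # stand-in for a call to the system under test

# Fire 200 requests concurrently to simulate a short peak of activity.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(200)))

failures = results.count(False)
assert failures == 0, f"{failures} requests failed under peak load"
```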
Non-Functional Testing Techniques
Security testing
Security testing is performed to verify the robustness of the
application, i.e. to ensure that only authorized users/roles
can access the system.
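A minimal sketch of such an authorization check; the roles and
HTTP-style status codes are assumptions, and get_admin_page
stands in for a real call to the system.

```python
# Stand-in for an HTTP call to a protected page of the system under test.
def get_admin_page(role: str) -> int:
    return 200 if role == "admin" else 403

assert get_admin_page("admin") == 200  # authorized role is admitted
assert get_admin_page("guest") == 403  # unauthorized role is rejected
assert get_admin_page("") == 403       # anonymous access is rejected
```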
Non-Functional Testing Techniques
Reliability Testing
The goal of all types of testing is the improvement of the
program reliability, but if the program’s objectives contain
specific statements about reliability, specific reliability
tests might be devised.
Testing reliability objectives can be difficult.
For example, a modern online system such as a corporate
wide area network (WAN) or an Internet service provider
(ISP) generally has a targeted uptime of 99.97 percent
over the life of the system.
There is no known way that we could test this objective
with a test period of months or even years.
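The arithmetic behind that difficulty, using the 99.97 percent
figure from the example above:

```python
# A 99.97% uptime objective leaves only about 2.6 hours of allowed
# downtime per year, far too rare an event rate to verify in a short test.
target_uptime = 0.9997
hours_per_year = 365.25 * 24
allowed_downtime = (1 - target_uptime) * hours_per_year
print(f"allowed downtime: {allowed_downtime:.2f} hours per year")  # ~2.63 h
```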
Non-Functional Testing Techniques
Recovery Testing
Recovery testing is a method to verify on how well a
system is able to recover from crashes and hardware
failures.
Programs such as operating systems and DBMSs often have
recovery objectives that state how the system is to recover
from programming errors, hardware failures, and data errors.
One objective of the system test is to show that these
recovery functions do not work correctly.
Programming errors can be purposely injected into a
system to determine whether it can recover from them.
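A minimal fault-injection sketch: an error is injected on
purpose and the test verifies that the (hypothetical) component
rolls back to a consistent state.

```python
class Store:
    def __init__(self):
        self.committed = {}

    def update(self, key, value, fail=False):
        snapshot = dict(self.committed)   # recovery point
        try:
            self.committed[key] = value
            if fail:
                raise IOError("injected hardware failure")
        except IOError:
            self.committed = snapshot     # recovery: roll back
            raise

store = Store()
store.update("a", 1)
try:
    store.update("a", 2, fail=True)       # purposely injected error
except IOError:
    pass
assert store.committed == {"a": 1}, "recovery failed to restore state"
```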
Types of Testing
Tests performed on a software product before it is released
to a large user community:
Alpha testing
Conducted at the developer's site by a user.
Tests are conducted in a controlled environment.
Beta testing
Conducted at one or more user sites by the end user.
It is live use of the product in an environment over which the
developer has no control.
Regression testing
Re-running previous tests to ensure that software already
tested has not regressed (gone back) to an earlier error level
after changes are made to the software (a sketch follows below).
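A minimal regression sketch, with a hypothetical slugify
function standing in for the changed software; the suite of
earlier, previously passing cases is simply re-run after every
change.

```python
# Hypothetical function that has just been modified.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Cases that passed before the change; re-run them after the change.
REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("Testing 101", "testing-101"),
]

for given, expected in REGRESSION_SUITE:
    got = slugify(given)
    assert got == expected, f"regression: slugify({given!r}) -> {got!r}"
```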
Types of Testing
Installation Testing
It is an unusual type of testing because its purpose is not
to find software errors but to find errors that occur during
the installation process.
To identify the ways in which the installation procedures
lead to incorrect results.
Installation tests should be developed by the organization
that produced the system, delivered as part of the system,
and run after the system is installed.
Many events occur when installing software
systems. A short list of examples includes the
following:
User must select a variety of options.
Files and libraries must be allocated and loaded.
Valid hardware configurations must be present.
Programs may need network connectivity to connect to
other programs.
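A minimal post-installation check along those lines; the paths,
the myapp_lib module, and the myapp command are hypothetical
examples, not real artifacts.

```python
import importlib.util
import shutil
from pathlib import Path

# Run after installation: the target is the install process, not the code.
errors = []
for path in [Path("/opt/myapp/bin/myapp"), Path("/etc/myapp/config.ini")]:
    if not path.exists():
        errors.append(f"missing file: {path}")           # files not allocated
if importlib.util.find_spec("myapp_lib") is None:
    errors.append("library myapp_lib was not installed")  # library not loaded
if shutil.which("myapp") is None:
    errors.append("myapp is not on PATH")                 # bad option/config

assert not errors, "installation problems:\n" + "\n".join(errors)
```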
When to Stop Testing?
Unlike deciding when to start testing, it is difficult to
determine when to stop, as testing is a never-ending process
and no one can say that any software is 100% tested, i.e.
error-free.
When to Stop Testing?
The following aspects should be considered when deciding to
stop testing:
Testing deadlines (a poor criterion).
Completion of functional and code coverage to a certain point.
The bug rate falls below a certain level and no high-priority
bugs are being identified.
Management decision, when testers are no longer finding
important bugs.
Test Completion Criteria
The two most common criteria are these:
1. Stop when the scheduled time for testing expires.
2. Stop when all the test cases execute without detecting errors.
Neither criterion is good.
Better Test Completion Criteria
Base completion on the use of specific test-case design methods.
Example: test cases derived from
1) satisfying multiple-condition coverage,
2) boundary-value analysis, and
3) cause-effect graphing,
until all resultant test cases are eventually unsuccessful
(i.e. run without detecting new errors).
Better Test Completion Criteria
State the test completion criterion in terms of the number of
errors to be found.
This includes:
An estimate of the total number of errors in the program.
An estimate of the percentage of those errors that can be
found through testing.
Estimates of what fraction of the errors originate in particular
design processes, and during what phases of testing they are
detected.
Better Test Completion Criteria
Plot the number of errors found per unit time during the test
phase; stop when the rate of error detection falls below a
specified threshold.
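A small sketch of this criterion, with assumed weekly error
counts and an assumed threshold of 2 errors per week.

```python
# Errors found in weeks 1..6 of the test phase (assumed figures).
errors_per_week = [24, 18, 11, 6, 3, 1]
THRESHOLD = 2  # stop once fewer errors than this are found per week

for week, found in enumerate(errors_per_week, start=1):
    if found < THRESHOLD:
        print(f"stop: detection rate fell below {THRESHOLD}/week in week {week}")
        break
else:
    print("keep testing: detection rate still above threshold")
```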