Dynamic Testing
Security testing
Security testing is performed to verify the
robustness of the application, i.e., to ensure that
only authorized users/roles can access the
system.
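As a minimal sketch of what such a check can look like in an automated test, the example below probes a role-based access rule; the access_dashboard() function and the role names are hypothetical placeholders, not part of any real application or framework.

# Minimal sketch of a security (authorization) test.
# access_dashboard() stands in for the system under test; in a real
# suite it would call the application's actual access-control path.

def access_dashboard(user_role):
    """Stand-in for the system under test: only admins may enter."""
    return user_role == "admin"

def test_admin_can_access_dashboard():
    assert access_dashboard("admin") is True

def test_guest_cannot_access_dashboard():
    # A security test deliberately probes the negative case:
    # unauthorized roles must be rejected.
    assert access_dashboard("guest") is False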
Non-Functional Testing Techniques
Reliability Testing
The goal of all types of testing is to improve program
reliability, but if the program's objectives contain
specific statements about reliability, specific reliability
tests might be devised.
Testing reliability objectives can be difficult.
For example, a modern online system such as a corporate
wide area network (WAN) or an Internet service provider
(ISP) generally has a targeted uptime of 99.97 percent
over the life of the system.
There is no known way that we could test this objective
with a test period of months or even years.
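A back-of-the-envelope calculation (not part of the original text) shows why: a 99.97 percent uptime target leaves only a few hours of allowed downtime per year, far too little to demonstrate statistically within a short test period.

# How much downtime does a 99.97 percent uptime target permit per year?
uptime_target = 0.9997
hours_per_year = 365.25 * 24          # ~8766 hours

allowed_downtime_hours = (1 - uptime_target) * hours_per_year
print(f"Allowed downtime: {allowed_downtime_hours:.2f} hours/year "
      f"(~{allowed_downtime_hours * 60:.0f} minutes)")
# -> roughly 2.6 hours (about 158 minutes) per year, which is why a
#    test period of months cannot demonstrate the objective with confidence.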
Non-Functional Testing Techniques
Recovery Testing
Recovery testing verifies how well a system is able to
recover from crashes and hardware failures.
Programs such as operating systems and DBMSs often have
recovery objectives that state how the system is to recover
from programming errors, hardware failures, and data errors.
One objective of the system test is to try to show that these
recovery functions do not work correctly, in keeping with the
view that testing should attempt to break the system.
Programming errors can be purposely injected into a
system to determine whether it can recover from them.
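The sketch below illustrates this idea with a deliberately injected crash; the Journal class and its methods are hypothetical stand-ins for a component with crash-recovery behaviour, not a real API.

# Sketch of a recovery test using deliberate fault injection.

class Journal:
    def __init__(self):
        self.committed = []
        self.pending = None

    def begin(self, record):
        self.pending = record

    def crash(self):
        # Simulated crash: pending work is lost, committed data must survive.
        self.pending = None

    def commit(self):
        self.committed.append(self.pending)
        self.pending = None

    def recover(self):
        # Recovery discards incomplete work and keeps committed records.
        self.pending = None
        return list(self.committed)

def test_recovery_after_injected_crash():
    j = Journal()
    j.begin("record-1")
    j.commit()
    j.begin("record-2")
    j.crash()                            # injected failure mid-transaction
    assert j.recover() == ["record-1"]   # committed data survives the crash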
Types of Testing
Tests performed on a software product before it is released
to a large user community
Alpha testing
Conducted at a developer’s site by a user
Tests conducted in a controlled environment
Beta testing
Conducted at one or more user sites by the end user
It is a live use of the product in an environment over which
the developer has no control
Regression testing
Re-run of previous tests to ensure that software already tested
has not regressed (gone back) to an earlier error level after
changes are made to the software
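A minimal sketch of the regression-testing idea follows; the add() function and its tests are made-up examples, assuming a pytest-style workflow.

# Sketch of a regression suite: previously passing tests are kept and
# re-run after every change so that old behaviour cannot silently break.

def add(a, b):
    return a + b

# Tests written when the feature was first delivered.
def test_add_small_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-2, -3) == -5

# After any change to add(), the whole file is re-run, e.g. with:
#   pytest test_add.py
# A failure here signals a regression to an earlier error level.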
Types of Testing
Installation Testing
It is an unusual type of testing because its purpose is not
to find software errors but to find errors that occur during
the installation process.
To identify the ways in which the installation procedures
lead to incorrect results.
Installation tests should be developed by the organization
that produced the system, delivered as part of the system,
and run after the system is installed.
Many events occur when installing software
systems. A short list of examples includes the
following:
User must select a variety of options.
Files and libraries must be allocated and loaded.
Valid hardware configurations must be present.
Programs may need network connectivity to connect to
other programs.
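As an illustrative sketch of checking a few of these post-install conditions automatically, the script below verifies installed files, free disk space, and network connectivity; the path, host, and size threshold are hypothetical placeholders, and a real installation test would use the values delivered with the product.

# Sketch of post-installation checks.
import os
import shutil
import socket

def check_files_installed(paths):
    """Return any required files or libraries that are missing."""
    return [p for p in paths if not os.path.exists(p)]

def check_disk_space(path, required_bytes):
    """Verify that the target volume has enough free space."""
    return shutil.disk_usage(path).free >= required_bytes

def check_network(host, port, timeout=3):
    """Verify that the program can reach other programs it depends on."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    missing = check_files_installed(["/opt/example/bin/app"])   # hypothetical path
    print("Missing files:", missing or "none")
    print("Enough disk space:", check_disk_space("/", 100 * 1024 * 1024))
    print("Network reachable:", check_network("example.com", 443))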
When to Stop Testing?
2. Stop when all the test cases, derived from methods such as
boundary-value analysis (BVA) and cause-effect graphing, execute
without detecting errors and all resultant test cases are
eventually unsuccessful
Better Test Completion Criteria
State the test completion criteria in terms of the number of errors
to be found
This includes
An estimate of the total number of errors in the program
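One common way to obtain such an estimate is error seeding, sketched below; the technique is standard, but the counts used are made-up illustrative numbers, not from the text.

# Estimating the total number of errors via error seeding.
# If S errors are deliberately seeded and testing finds s of them while
# also finding n real (indigenous) errors, the estimated number of
# indigenous errors is roughly n * S / s.

def estimate_total_errors(seeded, seeded_found, real_found):
    if seeded_found == 0:
        raise ValueError("No seeded errors found; cannot estimate yet.")
    return real_found * seeded / seeded_found

# Example: 20 seeded errors, 15 of them found, 30 real errors found.
estimate = estimate_total_errors(seeded=20, seeded_found=15, real_found=30)
print(f"Estimated total errors in program: {estimate:.0f}")   # -> about 40
# With ~40 errors estimated and 30 already found, a completion criterion
# stated as a number of errors to be found can be checked against this figure.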