Module 4
Fig 4.2 shows that software inspections and testing support V & V at
different stages in the software process
Development Testing
Development testing includes all testing activities that are carried out
by the team developing the system.
The tester of the software is usually the programmer who developed
that software, although this is not always the case.
During development, testing may be carried out at three levels of granularity:
1. Unit testing, where individual program units or object classes are
tested. Unit testing should focus on testing the functionality of
objects or methods.
2. Component testing, where several individual units are integrated to
create composite components. Component testing should focus on
testing component interfaces.
3. System testing, where some or all of the components in a system
are integrated and the system is tested as a whole. System testing
should focus on testing component interactions.
1. Unit Testing
Unit testing is the process of testing program components, such as methods
or object classes.
Individual functions or methods are the simplest type of component.
Your tests should be calls to these routines with different input parameters.
For object classes, tests are designed around the object's interface.
The interface of the example object is shown in fig 4.4; it has a single
attribute, which is its identifier.
An automated test has three parts:
* A setup part, where you initialize the system with the test case,
namely the inputs and expected outputs.
* A call part, where you call the object or method to be tested.
* An assertion part where you compare the result of the call with the
expected result. If the assertion evaluates to true, the test has been
successful; if false, then it has failed.
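The three parts above can be sketched in plain Java. The Calculator class below is a hypothetical unit under test, invented only for illustration; real automated tests would normally use a framework such as JUnit.

```java
// A minimal sketch of the setup/call/assertion structure of an
// automated test. Calculator is a hypothetical example class.
public class CalculatorTest {
    // The hypothetical unit under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) {
        // Setup: initialize the system with the test case,
        // i.e. the inputs and the expected output.
        Calculator calc = new Calculator();
        int input1 = 2, input2 = 3;
        int expected = 5;

        // Call: invoke the method being tested.
        int actual = calc.add(input1, input2);

        // Assertion: compare the result of the call with the expected
        // result; true means the test passed, false means it failed.
        if (actual == expected) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: expected " + expected + ", got " + actual);
        }
    }
}
```

A framework such as JUnit packages this same three-part pattern so that many such tests can be registered and re-run automatically.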
2. Component Testing
Software components are often composite components that are made up of
several interacting objects. The functionality of these objects is
accessed through the defined component interface, so component testing
should focus on showing that the interface behaves according to its
specification.
Interface errors in the composite component may not be detectable by
testing the individual objects because these errors result from
interactions between the objects in the component.
Different types of interface error can occur:
* Parameter interfaces: These are interfaces in which data or
sometimes function references are passed from one component to
another. Methods in an object have a parameter interface.
* Shared memory interfaces: These are interfaces in which a block of
memory is shared between components. Data is placed in the
memory by one subsystem and retrieved from there by other sub-
systems.
* Procedural interfaces: These are interfaces in which one component
encapsulates a set of procedures that can be called by other
components. Objects and reusable components have this form of
interface.
* Message passing interfaces: These are interfaces in which one
component requests a service from another component by passing
a message to it. A return message includes the results of executing
the service.
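A parameter interface error can be sketched as follows. The discountedPrice method is hypothetical, invented for illustration: because both parameters have the same type, a caller can swap them and the compiler will not object, so the fault only shows up when the components are tested together.

```java
// A sketch of a parameter interface error between two components:
// the method compiles and runs with swapped arguments, but the result
// is silently wrong.
public class InterfaceErrorDemo {
    // Hypothetical component operation: returns the price after
    // applying a fractional discount rate.
    static double discountedPrice(double price, double discountRate) {
        return price * (1.0 - discountRate);
    }

    public static void main(String[] args) {
        // Correct call: 20% off a price of 100.
        double correct = discountedPrice(100.0, 0.2);
        // Swapped parameters: compiles and runs, but produces a
        // meaningless negative price.
        double swapped = discountedPrice(0.2, 100.0);
        System.out.println("correct = " + correct + ", swapped = " + swapped);
    }
}
```

Unit tests of the called object alone would not reveal this; tests that exercise the interface between caller and callee would.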
Release Testing
Release testing is the process of testing a particular release of a system
that is intended for use outside of the development team.
There are two important distinctions between release testing and
system testing during the development process:
1. A separate team that has not been involved in the system
development should be responsible for release testing.
2. System testing by the development team should focus on
discovering bugs in the system (defect testing). The objective of
release testing is to check that the system meets its requirements
and is good enough for external use (validation testing).
Release testing is usually a black-box testing process where tests are
derived from the system specification.
The system is treated as a black box whose behavior can only be
determined by studying its inputs and the related outputs.
Scenario testing
Scenario testing is an approach to release testing where typical scenarios
are devised and are used to develop test cases for the system.
A scenario is a story that describes one way in which the system might be used.
Scenarios should be realistic and real system users should be able to relate to them.
When a scenario-based approach is used, several requirements within the
same scenario are normally tested together. Therefore, as well as
checking individual requirements, scenario testing checks that
combinations of requirements do not cause problems.
Performance testing
Once a system has been completely integrated, it is possible to test for
emergent properties, such as performance and reliability.
Performance tests have to be designed to ensure that the system can
process its intended load.
This usually involves running a series of tests where the load is increased,
until the system performance becomes unacceptable.
To test whether performance requirements are being achieved, an
operational profile may be constructed.
An operational profile is a set of tests that reflect the actual mix of work
that will be handled by the system.
This type of testing, known as stress testing, has two functions:
a. It tests the failure behavior of the system. Circumstances may arise
through an unexpected combination of events where the load placed
on the system exceeds the maximum anticipated load. In these
circumstances, it is important that system failure should not cause
data corruption or unexpected loss of user services.
b. It stresses the system and may cause defects to come to light that
would not normally be discovered. Although it can be argued that
these defects are unlikely to cause system failures in normal usage,
unusual combinations of circumstances do occur in practice, and
stress testing replicates them.
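An operational profile can be sketched as a weighted mix of transaction types. The transaction names and weights below are hypothetical, chosen only to illustrate the idea; a real profile would be derived from measurements of actual system usage.

```java
import java.util.Random;

// A sketch of generating a test workload from an operational profile:
// a weighted mix of (hypothetical) transaction types.
public class OperationalProfile {
    static final String[] TRANSACTIONS = {"query", "update", "report"};
    static final double[] WEIGHTS = {0.6, 0.3, 0.1}; // must sum to 1.0

    // Pick the next transaction type according to the profile's weights.
    static String nextTransaction(Random rng) {
        double r = rng.nextDouble();
        double cumulative = 0.0;
        for (int i = 0; i < TRANSACTIONS.length; i++) {
            cumulative += WEIGHTS[i];
            if (r < cumulative) return TRANSACTIONS[i];
        }
        return TRANSACTIONS[TRANSACTIONS.length - 1];
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int[] counts = new int[TRANSACTIONS.length];
        // Generate a 10,000-transaction test workload; the generated
        // mix should roughly match the profile's weights.
        for (int i = 0; i < 10_000; i++) {
            String t = nextTransaction(rng);
            for (int j = 0; j < TRANSACTIONS.length; j++) {
                if (TRANSACTIONS[j].equals(t)) counts[j]++;
            }
        }
        for (int j = 0; j < TRANSACTIONS.length; j++) {
            System.out.println(TRANSACTIONS[j] + ": " + counts[j]);
        }
    }
}
```

To turn this into a stress test, the total load would be increased run by run while keeping the mix constant, until performance becomes unacceptable.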
User Testing
User or customer testing is a stage in the testing process in which users or
customers provide input and advice on system testing.
This may involve formally testing a system that has been commissioned from
an external supplier, or could be an informal process where users
experiment with a new software product to see if they like it and that it does
what they need.
In practice, there are three different types of user testing:
1. Alpha testing, where users of the software work with the
development team to test the software at the developer’s site.
2. Beta testing, where a release of the software is made available to
users to allow them to experiment and to raise problems that they
discover with the system developers.
3. Acceptance testing, where customers test a system to decide
whether or not it is ready to be accepted from the system
developers and deployed in the customer environment.
There are six stages in the acceptance testing process, as shown in fig 4.10. They are:
1. Define acceptance criteria: This stage should, ideally, take place
early in the process before the contract for the system is signed. The
acceptance criteria should be part of the system contract and be
agreed between the customer and the developer. In practice, defining
acceptance criteria this early can be difficult, because detailed
requirements may not be available and there may be significant
requirements change during the development process.
2. Plan acceptance testing: This involves deciding on the resources,
time, and budget for acceptance testing and establishing a testing
schedule. The acceptance test plan should also discuss the required
coverage of the requirements and the order in which system
features are tested. It should define risks to the testing process,
such as system crashes and inadequate performance, and discuss
how these risks can be mitigated.
3. Derive acceptance tests: Once acceptance criteria have been
established, tests have to be designed to check whether or not a
system is acceptable. Acceptance tests should aim to test both the
functional and non-functional characteristics (e.g., performance) of
the system.
4. Run acceptance tests: The agreed acceptance tests are executed on
the system. Ideally, this should take place in the actual environment
where the system will be used, but this may be disruptive and
impractical. Therefore, a user testing environment may have to be
set up to run these tests. It is difficult to automate this process as
part of the acceptance tests may involve testing the interactions
between end-users and the system.
5. Negotiate test results: It is unlikely that all of the defined
acceptance tests will pass with no problems found. If they do all
pass, then acceptance testing is complete and the system can be
handed over. More commonly, some problems will be discovered. In
such cases, the developer and the customer have to negotiate to
decide if the system is good enough to be put into use. They must
also agree on the developer’s response to the identified problems.
6. Reject/accept system: This stage involves a meeting between the
developers and the customer to decide on whether or not the
system should be accepted. If the system is not good enough for
use, then further development is required to fix the identified
problems. Once complete, the acceptance testing phase is repeated.
Test Automation
Test automation is essential for test-first development.
Test-first development is an approach to development where tests are
written before the code to be tested.
Tests are written as executable components before the task is implemented.
These testing components should be standalone, should simulate the
submission of input to be tested, and should check that the result meets the
output specification.
An automated test framework is a system that makes it easy to write
executable tests and submit a set of tests for execution.
Test automation tools such as JUnit, which can re-run component tests
when new versions of a component are created, are commonly used.
JUnit is a set of Java classes that the user extends to create an
automated testing environment.
As testing is automated, there is always a set of tests that can be quickly and
easily executed.
Whenever any functionality is added to the system, the tests can be run
and problems that the new code has introduced can be caught immediately.
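What a test automation framework provides can be sketched with a hand-rolled stand-in (an illustration only, not JUnit's actual API): a registry of named, executable tests that can all be re-run after every change.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch of an automated test framework: tests are registered
// by name, and runAll() re-executes every test and reports failures.
public class TinyTestRunner {
    static final Map<String, Runnable> tests = new LinkedHashMap<>();

    static void register(String name, Runnable test) { tests.put(name, test); }

    // Run every registered test; report each result and return the
    // number of failures.
    static int runAll() {
        int failures = 0;
        for (Map.Entry<String, Runnable> e : tests.entrySet()) {
            try {
                e.getValue().run();
                System.out.println("PASS " + e.getKey());
            } catch (AssertionError err) {
                failures++;
                System.out.println("FAIL " + e.getKey() + ": " + err.getMessage());
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // Hypothetical example tests.
        register("addition works", () -> {
            if (2 + 2 != 4) throw new AssertionError("2 + 2 != 4");
        });
        register("string length", () -> {
            if ("abc".length() != 3) throw new AssertionError("wrong length");
        });
        System.out.println(runAll() + " failure(s)");
    }
}
```

Frameworks like JUnit add to this pattern conveniences such as automatic test discovery, setup/teardown hooks, and richer assertions.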
Test-first development and automated testing usually result in a large
number of tests being written and executed.
This approach does not necessarily lead to thorough program testing. There
are three reasons for this:
1. Programmers prefer programming to testing and sometimes they
take shortcuts when writing tests.
2. Some tests can be very difficult to write incrementally.
3. It becomes difficult to judge the completeness of a set of tests.
SOFTWARE EVOLUTION
Software evolution may be triggered by changing business requirements, by
reports of software defects, or by changes to other systems in a software
system’s environment. It is therefore best to think of software
engineering as a spiral process, with requirements, design,
implementation, and testing going on throughout the lifetime of the
system (Fig 4.12).
This process can be started by creating release 1 of the system.
Once delivered, changes are proposed and the development of release 2
starts almost immediately.
This model of software evolution implies that a single organization is
responsible for both the initial software development and the evolution
of the software.
Evolution processes
Software evolution processes vary depending on the type of software
being maintained, the development processes used in an organization
and the skills of the people involved.
In some organizations, evolution may be an informal process where
change requests mostly come from conversations between the system
users and developers.
In other companies, it is a formalized process with structured
documentation produced at each stage in the process.
The processes of change identification and system evolution are cyclic
and continue throughout the lifetime of a system (Fig 4.14).
Software Maintenance
Software maintenance is the general process of changing a system after it
has been delivered.
There are three different types of software maintenance:
1. Fault repairs, in which coding, design, or requirements errors are
corrected.
2. Environmental adaptation, in which the software is adapted to changes
in its environment, such as new hardware or platform software.
3. Functionality addition, in which the system is modified to satisfy
new requirements.
Maintenance prediction
It is essential to estimate the overall maintenance costs for a system in a
given time period.
Fig 4.20 shows these predictions and associated questions.
Predicting the number of change requests for a system requires an
understanding of the relationship between the system and its external
environment.
Process metrics can be used to assess maintainability:
1. Average time required for impact analysis: This reflects the number
of program components that are affected by the change request. If
this time increases, it implies that more and more components are
affected and maintainability is decreasing.
2. Average time taken to implement a change request: This is not the
same as the time for impact analysis although it may correlate with
it. This is the amount of time needed to modify the system and its
documentation, after the assessment of which components are
affected
3. Number of outstanding change requests: An increase in this
number over time may imply a decline in maintainability.
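The three metrics above can be computed from change-request records. The ChangeRequest class and the sample figures below are hypothetical, invented only for illustration.

```java
// A sketch of computing maintainability metrics from change-request
// records: average impact-analysis time, average implementation time,
// and the number of outstanding (still open) requests.
public class MaintainabilityMetrics {
    // Hypothetical change-request record.
    static class ChangeRequest {
        final double impactHours, implementHours;
        final boolean open;
        ChangeRequest(double impactHours, double implementHours, boolean open) {
            this.impactHours = impactHours;
            this.implementHours = implementHours;
            this.open = open;
        }
    }

    static double avgImpactAnalysisTime(ChangeRequest[] crs) {
        double total = 0;
        for (ChangeRequest cr : crs) total += cr.impactHours;
        return total / crs.length;
    }

    static double avgImplementationTime(ChangeRequest[] crs) {
        double total = 0;
        for (ChangeRequest cr : crs) total += cr.implementHours;
        return total / crs.length;
    }

    static int outstandingRequests(ChangeRequest[] crs) {
        int open = 0;
        for (ChangeRequest cr : crs) if (cr.open) open++;
        return open;
    }

    public static void main(String[] args) {
        ChangeRequest[] crs = {
            new ChangeRequest(2.0, 8.0, false),
            new ChangeRequest(4.0, 12.0, true),
            new ChangeRequest(6.0, 10.0, true),
        };
        // Rising averages or a growing backlog over successive periods
        // suggest that maintainability is declining.
        System.out.println("avg impact analysis time: " + avgImpactAnalysisTime(crs) + "h");
        System.out.println("avg implementation time:  " + avgImplementationTime(crs) + "h");
        System.out.println("outstanding requests:     " + outstandingRequests(crs));
    }
}
```

Tracking these values over successive time periods, rather than as one-off figures, is what makes the trends visible.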
Software Reengineering
To make legacy software systems easier to maintain, these systems can
be reengineered to improve their structure and understandability.
Reengineering may involve re-documenting the system, refactoring the
system architecture, translating programs to a modern programming
language, and modifying and updating the structure and values of the
system’s data.
There are two important benefits from reengineering rather than replacement:
1. Reduced risk: There is a high risk in redeveloping business-critical
software. Errors may be made in the system specification or there
may be development problems. Delays in introducing the new
software may mean that business is lost and extra costs are incurred.
2. Reduced cost: The cost of reengineering may be significantly less
than the cost of developing new software.
Fig 4.21 is a general model of the reengineering process.