SE Module 4
Introduction:-
• Testing is intended to show that a program does what it is intended to do and to
discover program defects before it is put into use.
• The testing process has two distinct goals:
1. To demonstrate to the developer and the customer that the software meets its
requirements.
For custom software, this means that there should be at least one test for every
requirement in the requirements document.
For generic software products, it means that there should be tests for all of the
system features, plus combinations of these features, that will be incorporated in the
product release.
2. To discover situations in which the behavior of the software is incorrect, undesirable,
or does not conform to its specification. Defect testing is concerned with rooting out
undesirable system behavior such as system crashes, unwanted interactions with
other systems, incorrect computations, and data corruption.
The diagram shown in Figure 3.1 explains the differences between validation testing and
defect testing. Think of the system being tested as a black box. The system accepts inputs
from some input set I and generates outputs in an output set O. Some of the outputs will be
erroneous. These are the outputs in set Oe that are generated by the system in response to
inputs in the set Ie. The priority in defect testing is to find those inputs in the set Ie because
these reveal problems with the system. Validation testing involves testing with correct inputs
that are outside Ie. These stimulate the system to generate the expected correct outputs.
• To test the states of the weather station, we use a state model. Using this model, you
can identify sequences of state transitions that have to be tested and define event
sequences to force these transitions.
• In principle, you should test every possible state transition sequence, although in
practice this may be too expensive.
• Examples of state sequences that should be tested in the weather station include:
o Shutdown → Running → Shutdown
o Configuring → Running → Testing → Transmitting → Running
o Running → Collecting → Running → Summarizing → Transmitting → Running
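The state sequences above can be checked mechanically. The following Python sketch models the weather station as a simple transition table; the table itself is an assumption reconstructed from the example sequences, not the book's actual state model.

```python
# A sketch of state-transition testing for the weather station. The
# transition table is an assumption reconstructed from the example
# sequences in the notes.

class WeatherStation:
    TRANSITIONS = {
        "Shutdown": {"Running"},
        "Configuring": {"Running"},
        "Running": {"Shutdown", "Testing", "Collecting", "Summarizing"},
        "Testing": {"Transmitting"},
        "Collecting": {"Running"},
        "Summarizing": {"Transmitting"},
        "Transmitting": {"Running"},
    }

    def __init__(self, state="Shutdown"):
        self.state = state

    def transition(self, new_state):
        # Reject any transition that the model does not allow
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

def run_sequence(start, states):
    """Drive the station through a sequence of states; return the final state."""
    station = WeatherStation(start)
    for s in states:
        station.transition(s)
    return station.state
```

Each test sequence from the notes then becomes one call to `run_sequence`, and an illegal event sequence is detected by the raised exception.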
Automated Testing
• Whenever possible, unit testing should be automated so that tests are run and checked
without manual intervention.
• In automated unit testing, you make use of a test automation framework (such as
JUnit) to write and run your program tests.
• Unit testing frameworks provide generic test classes that you extend to create specific
test cases. They can then run all of the tests that you have implemented and report,
often through some GUI, on the success or otherwise of the tests.
• An automated test has three parts:
1. A setup part, where you initialize the system with the test case, namely the
inputs and expected outputs.
2. A call part, where you call the object or method to be tested.
3. An assertion part, where you compare the result of the call with the expected
result. If the assertion evaluates to true, the test has been successful; if false, then it
has failed.
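The three-part structure can be sketched with Python's unittest framework, standing in here for JUnit; the Calculator class is an invented object under test, not from the notes.

```python
import unittest

# A minimal sketch of the setup/call/assertion structure, using Python's
# unittest in place of JUnit. The Calculator class is invented for the
# example.

class Calculator:
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    def test_add(self):
        # 1. Setup: initialize the test case with inputs and expected output
        calc = Calculator()
        a, b, expected = 2, 3, 5
        # 2. Call: invoke the method under test
        result = calc.add(a, b)
        # 3. Assertion: compare the actual result with the expected result
        self.assertEqual(result, expected)
```

Running the module with `python -m unittest` would execute `test_add` and report success or failure, mirroring the framework behaviour described above.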
Equivalence-Class Partitioning
• The input data and output results of a program often fall into a number of different
classes with common characteristics.
• Examples of these classes are positive numbers, negative numbers, and menu
selections.
• Programs normally behave in a comparable way for all members of a class. That is, if
you test a program that does a computation and requires two positive numbers, then
you would expect the program to behave in the same way for all positive numbers.
• Because of this equivalent behavior, these classes are sometimes called equivalence
partitions or domains (Beizer, 1990).
• In Figure 3.2, the large shaded ellipse on the left represents the set of all possible
inputs to the program that is being tested.
• The smaller unshaded ellipses represent equivalence partitions.
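Equivalence partitioning can be illustrated with a small hypothetical validation function; the function and its partitions below are invented for the example, with one representative value per partition plus the boundary lengths.

```python
# Hypothetical example: a function that accepts exactly five decimal
# digits. The partitions are: too short, valid, too long, and non-numeric.
# One representative test case is chosen from each partition, and the
# length boundaries (4, 5, 6) are covered.

def is_valid_id(s):
    """Accept strings of exactly five decimal digits."""
    return len(s) == 5 and s.isdigit()

# One test case per equivalence partition:
partition_cases = [
    ("1234",   False),  # too-short partition (boundary: length 4)
    ("12345",  True),   # valid partition (boundary: length 5)
    ("123456", False),  # too-long partition (boundary: length 6)
    ("12a45",  False),  # non-numeric partition
]
```

Because the program is expected to behave the same way for every member of a partition, one well-chosen case per partition gives broad coverage cheaply.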
Interface Types:
There are different types of interface between program components and, consequently,
different types of interface error that can occur:
1. Parameter interfaces: These are interfaces in which data or sometimes function references
are passed from one component to another. Methods in an object have a parameter interface.
2. Shared memory interfaces: These are interfaces in which a block of memory is shared
between components. Data is placed in the memory by one subsystem and retrieved from
there by other sub-systems. This type of interface is often used in embedded systems, where
sensors create data that is retrieved and processed by other system components.
3. Procedural interfaces: These are interfaces in which one component encapsulates a set of
procedures that can be called by other components. Objects and reusable components have
this form of interface.
4. Message passing interfaces These are interfaces in which one component requests a
service from another component by passing a message to it. A return message includes the
results of executing the service. Some object-oriented systems have this form of interface, as
do client–server systems.
Interface errors:
These errors fall into three classes:
1. Interface misuse:-A calling component calls some other component and makes an
error in the use of its interface. This type of error is common with parameter
interfaces where parameters may be of the wrong type or be passed in the wrong
order, or the wrong number of parameters may be passed.
2. Interface misunderstanding:- A calling component misunderstands the specification
of the interface of the called component and makes assumptions about its behavior.
The called component does not behave as expected, which then causes unexpected
behavior in the calling component.
3. Timing errors:- These occur in real-time systems that use a shared memory or a
message-passing interface. The producer of data and the consumer of data may
operate at different speeds, so the consumer may access out-of-date information.
• You can use this diagram to identify operations that will be tested and to help design the
test cases to execute the tests. Therefore, issuing a request for a report will result in the
execution of the following thread of methods:
SatComms:request → WeatherStation:reportWeather → Commslink:Get(summary)
→ WeatherData:summarize
• The sequence diagram helps you design the specific test cases that you need as it shows
what inputs are required and what outputs are created:
1. An input of a request for a report should have an associated acknowledgment. A report
should ultimately be returned from the request. During testing, you should create summarized
data that can be used to check that the report is correctly organized.
2. An input request for a report to WeatherStation results in a summarized report being
generated. You can test this in isolation by creating raw data corresponding to the summary
that you have prepared, and checking that the summarization produces the expected result.
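Testing the summarization step in isolation, as point 2 suggests, might look like the following sketch. Only the class and method names come from the notes; the bodies and the stubbed raw data are invented.

```python
# Sketch of testing WeatherStation.reportWeather in isolation, using a
# stubbed WeatherData object holding known raw data. The method bodies
# are invented for illustration.

class StubWeatherData:
    def __init__(self, raw):
        self.raw = raw

    def summarize(self):
        # Produce a summary from the known raw data
        return {"max": max(self.raw), "min": min(self.raw)}

class WeatherStation:
    def __init__(self, data):
        self.data = data

    def reportWeather(self):
        # In the real system this thread would also involve SatComms and
        # Commslink; here the summarization step is exercised alone.
        return self.data.summarize()

station = WeatherStation(StubWeatherData([10, 4, 7]))
report = station.reportWeather()
```

Because the raw data is fixed, the expected summary is known in advance, so the test can check that the report is correctly organized.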
• An automated testing environment, such as the JUnit environment that supports Java
program testing (Massol and Husted, 2003), is essential for TDD.
• As the code is developed in very small increments, you have to be able to run every
test each time that you add functionality or refactor the program. Therefore, the tests
are embedded in a separate program that runs the tests and invokes the system that is
being tested.
• A strong argument for test-driven development is that it helps programmers clarify
their ideas of what a code segment is actually supposed to do.
• For example, if your computation involves division, you should check that you are not
dividing the numbers by zero. If you forget to write a test for this, then the code to
check will never be included in the program.
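A test-first sketch of the division example: the zero-divisor test is written alongside the hypothetical average_rate function whose guard clause it forces into existence. Both names are invented for illustration.

```python
# TDD-style sketch: writing the zero-divisor test first forces the guard
# into the code. The average_rate function is invented for the example.

def average_rate(total, count):
    """Return total / count, guarding against a zero count."""
    if count == 0:
        # This check exists because a test demands it; without the test,
        # it might never have been written.
        raise ValueError("count must be non-zero")
    return total / count

def test_zero_count_rejected():
    """The test written first: dividing by a zero count must be rejected."""
    try:
        average_rate(100, 0)
    except ValueError:
        return True
    return False
```

Writing the test before the code makes the programmer state explicitly what the computation must do at the awkward boundary, which is the clarification benefit claimed above.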
1. A separate team that has not been involved in the system development should be
responsible for release testing.
2. System testing by the development team should focus on discovering bugs in the
system (defect testing). The objective of release testing is to check that the system meets its
requirements and is good enough for external use (validation testing).
• Performance tests have to be designed to ensure that the system can process its
intended load.
• This usually involves running a series of tests where you increase the load until the
system performance becomes unacceptable.
• Performance testing is concerned both with demonstrating that the system meets its
requirements and discovering problems and defects in the system. To test whether
performance requirements are being achieved, you may have to construct an
operational profile.
• An operational profile is a set of tests that reflect the actual mix of work that will be
handled by the system.
• Therefore, if 90% of the transactions in a system are of type A; 5% of type B; and the
remainder of types C, D, and E, then you have to design the operational profile so that
the vast majority of tests are of type A. Otherwise, you will not get an accurate test of
the operational performance of the system.
• This approach, of course, is not necessarily the best approach for defect testing.
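An operational profile like the one described (90% of transactions of type A, 5% of type B, the remainder of types C, D, and E) can be turned into a weighted test mix. The exact split of the remaining 5% below is an assumption.

```python
import random

# Sketch of generating a test mix that matches the operational profile in
# the notes: 90% type A, 5% type B, remainder among C, D, E. The split of
# the last 5% (2/2/1) is assumed; the notes do not specify it.

PROFILE = {"A": 0.90, "B": 0.05, "C": 0.02, "D": 0.02, "E": 0.01}

def generate_test_mix(n, profile=PROFILE, seed=42):
    """Return n transaction types drawn with the profile's weights."""
    rng = random.Random(seed)
    types = list(profile)
    weights = [profile[t] for t in types]
    return rng.choices(types, weights=weights, k=n)
```

A performance test driven by this mix exercises the system roughly as its real workload would, which is what makes the measured performance meaningful.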
• Stress testing is particularly relevant to distributed systems based on a network of
processors. These systems often exhibit severe degradation when they are heavily
loaded. The network becomes swamped with coordination data that the different
processes must exchange. The processes become slower and slower as they wait for
the required data from other processes.
• Stress testing helps you discover when the degradation begins so that you can add
checks to the system to reject transactions beyond this point.
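The idea of increasing the load until degradation begins can be sketched as a simple loop. Here simulated_request is a toy stand-in for a real system call, with an invented response-time model.

```python
# Minimal stress-test loop sketch: raise the load level until the measured
# response time exceeds an acceptability threshold. The response-time
# model in simulated_request is invented; a real test would call the
# system under load instead.

def simulated_request(load):
    """Toy model: response time grows sharply once load passes 100."""
    return 0.01 * load if load <= 100 else 0.01 * load * (load - 99)

def find_degradation_point(threshold=2.0, step=10, max_load=1000):
    """Return the lowest tested load whose response time exceeds threshold."""
    for load in range(step, max_load + 1, step):
        if simulated_request(load) > threshold:
            return load
    return None
```

The load level returned is the point beyond which the system could start rejecting transactions, as the notes suggest.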
1. Alpha testing:-
• users and developers work together to test a system as it is being developed.
This means that the users can identify problems and issues that are not readily
apparent to the development testing team.
• Developers can only really work from the requirements but these often do not
reflect other factors that affect the practical use of the software
• Alpha testing is often used when developing software products that are sold as
shrink-wrapped systems.
• It also reduces the risk that unanticipated changes to the software will have
disruptive effects on their business.
• Alpha testing may also be used when custom software is being developed.
2. Beta testing:-
• takes place when an early, sometimes unfinished, release of a software system
is made available to customers and users for evaluation.
• Beta testers may be a selected group of customers who are early adopters of
the system. Alternatively, the software may be made publicly available for
use by anyone who is interested in it.
• Beta testing is mostly used for software products that are used in many
different environments.
1. Define acceptance criteria This stage should, ideally, take place early in the
process before the contract for the system is signed. The acceptance criteria
should be part of the system contract and be agreed between the customer
and the developer. In practice, however, it can be difficult to define criteria
so early in the process. Detailed requirements may not be available and
there may be significant requirements change during the development
process.
2. Plan acceptance testing This involves deciding on the resources, time, and
budget for acceptance testing and establishing a testing schedule. The
acceptance test plan should also discuss the required coverage of the
requirements and the order in which system features are tested. It should
define risks to the testing process, such as system crashes and inadequate
performance, and discuss how these risks can be mitigated.
3. Derive acceptance tests Once acceptance criteria have been established, tests
have to be designed to check whether or not a system is acceptable.
Acceptance tests should aim to test both the functional and non-functional
characteristics of the system.
Module 3
Chapter 2
Software Evolution
Introduction
Software development does not stop when a system is delivered but continues throughout the
lifetime of the system. After a system has been deployed, it inevitably has to change if it is to
remain useful.
Causes of system change
Business changes and changes to user expectations generate new requirements for the
existing software.
Parts of the software may have to be modified to correct errors that are found in
operation
To adapt it for changes to its hardware and software platform
To improve its performance or other non-functional characteristics.
An alternative view of the software evolution life cycle is shown in the following figure
Evolution processes
Change identification and evolution process
System change proposals drive system evolution in all organizations.
During release planning, all proposed changes - fault repair, adaptation, and
new functionality - are considered.
A decision is then made on which changes to implement in the next version of the
system.
The changes are implemented and validated, and a new version of the system is
released.
The process then iterates with a new set of changes proposed for the next release.
Change requests sometimes relate to system problems that have to be tackled urgently.
These urgent changes can arise for three reasons:
1. If a serious system fault occurs that has to be repaired to allow normal operation to
continue.
2. If changes to the system's operating environment have unexpected effects that disrupt
normal operation.
3. If there are unanticipated changes to the business running the system, such as the
emergence of new competitors or the introduction of new legislation that affects the
system.
The emergency repair process is required to quickly fix the above problems.
The source code is analyzed and modified directly, rather than modifying the
requirements and design.
The disadvantages of emergency repair process are as follows
o the requirements, the software design, and the code become inconsistent
o the process of software aging is accelerated, because a quick workable solution is
chosen for the fix rather than the best possible solution
o future changes become more difficult and maintenance costs increase
Continuing change
The first law states that system maintenance is an inevitable process.
As the system’s environment changes, new requirements emerge and the system must be
modified.
Increasing complexity
The second law states that, as a system is changed, its structure is degraded.
To avoid this, invest in preventative maintenance.
Time is spent improving the software structure without adding to its functionality.
This means additional costs, more than those of implementing required system changes.
Large program evolution
It suggests that large systems have a dynamic of their own
This law is a consequence of structural factors that influence and constrain system
change, and organizational factors that affect the evolution process.
Structural factors:
These factors come from complexity of large systems.
As you change and extend a program, its structure tends to degrade.
Making large changes to a program may introduce new faults and then inhibit further
program changes.
Organisational factors:
These are produced by large organizations.
Companies have to make decisions on the risks and value of the changes and the costs
involved. Such decisions take time to make.
The speed of the organization’s decision-making processes therefore governs the rate of
change of the system.
Organizational stability
In most large programming projects a change to resources or staffing has imperceptible
(slight) effects on the long-term evolution of the system.
Conservation of familiarity
Adding new functionality to a system inevitably introduces new system faults.
The more functionality added in each release, the more faults there will be.
Therefore, relatively little new functionality should be included in each release.
This law suggests that you should not budget for large functionality increments in each
release without taking into account the need for fault repair.
Continuing growth
The functionality offered by systems has to continually increase to maintain user satisfaction.
The users of software will become increasingly unhappy with it unless it is maintained
and new functionality is added to it.
Declining quality
The quality of systems will decline unless they are modified to reflect changes in their
operational environment.
Feedback system
Evolution processes incorporate multi-agent, multi-loop feedback systems.
You have to treat these processes as feedback systems to achieve significant product
improvement.
Software maintenance
It is the general process of changing a system after it has been delivered.
There are three different types of software maintenance:
o Fault repairs
o Environmental adaptations
o Functionality addition
Fault repairs
Coding errors are usually relatively cheap to correct
Design errors are more expensive as they may involve rewriting several program
components.
Requirements errors are the most expensive to repair because of the extensive system
redesign which may be necessary.
Environmental adaptation
This type of maintenance is required when some aspect of the system’s environment such
as the hardware, the platform operating system, or other support software changes.
The application system must be modified to adapt it to cope with these environmental
changes.
Functionality addition
This type of maintenance is necessary when the system requirements change in response
to organizational or business change.
The above figure shows that overall lifetime costs may decrease as more effort is
expended during system development to produce a maintainable system.
In system 1, higher development cost has resulted in lower overall lifetime costs when
compared to system 2.
It is usually more expensive to add functionality after a system is in operation than it is to
implement the same functionality during development. The reasons for this are:
1. Team stability
The new team or the individuals responsible for system maintenance are usually not the
same as the people involved in development
They do not understand the system or the background to system design decisions.
They need to spend time understanding the existing system before implementing changes
to it.
2. Poor development practice
The contract to maintain a system is usually separate from the system development
contract.
There is no incentive for a development team to write maintainable software.
The development team may not write maintainable software to save effort.
This means that the software is more difficult to change in the future.
3. Staff skills
Maintenance is seen as a less-skilled process than system development.
It is often allocated to the most junior staff.
Also, old systems may be written in obsolete programming languages.
The maintenance staff may not have much experience of development in these languages
and must learn these languages to maintain the system.
4. Program age and structure
As changes are made to programs, their structure tends to degrade.
As programs age, they become harder to understand and change.
System documentation may be lost or inconsistent.
Old systems may not have been subject to stringent configuration management so time is
often wasted finding the right versions of system components to change.
Maintenance prediction
It is important to try to predict what system changes might be proposed and which parts of the
system are likely to be the most difficult to maintain.
Estimating the overall maintenance costs for a system in a given time period is also
important.
Predicting the number of change requests for a system requires an understanding of the
relationship between the system and its external environment.
Therefore, to evaluate the relationship between a system and its environment, the
following assessments should be made:
1. The number and complexity of system interfaces The larger the number of interfaces
and the more complex these interfaces, the more likely it is that interface changes will be
required as new requirements are proposed.
2. The number of inherently volatile system requirements The requirements that reflect
organizational policies and procedures are likely to be more volatile than requirements
that are based on stable domain characteristics.
3. The business processes in which the system is used As business processes evolve, they
generate system change requests. The more business processes that use a system, the
more the demands for system change.
After a system has been put into service, the process data may be used to help predict
maintainability.
The process metrics that can be used for assessing maintainability are as follows:
1. Number of requests for corrective maintenance An increase in the number of bug and
failure reports may indicate that more errors are being introduced into the program than
are being repaired during the maintenance process. This may indicate a decline in
maintainability.
2. Average time required for impact analysis This reflects the number of program
components that are affected by the change request. If this time increases, it implies that
more and more components are affected and maintainability is decreasing.
3. Average time taken to implement a change request This is the amount of time needed
to modify the system and its documentation. An increase in the time needed to implement
a change may indicate a decline in maintainability.
4. Number of outstanding change requests An increase in this number over time may
imply a decline in maintainability.
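Metric 3 above (average time to implement a change request) can be tracked per release with a few lines of code; all of the sample figures used here are invented.

```python
# Sketch of tracking one process metric from the list above: the average
# time (e.g. in person-days) to implement a change request, computed per
# release. A steadily rising average suggests declining maintainability.
# All figures used with these functions are invented.

def average_change_time(change_times):
    """Average implementation time for one release's change requests."""
    return sum(change_times) / len(change_times)

def maintainability_declining(per_release_times):
    """True if the average implementation time rises from release to release."""
    averages = [average_change_time(times) for times in per_release_times]
    return all(later > earlier for earlier, later in zip(averages, averages[1:]))
```

For example, per-release times of [2, 3], [4, 5], [6, 8] give strictly rising averages, which this check flags as a decline in maintainability.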
Software reengineering
Benefits of reengineering
Reduced risk Reengineering reduces the high risk in redeveloping business-critical software.
Errors may be made in the system specification or there may be development problems.
Delays in introducing the new software may mean that business is lost and extra costs are
incurred.
Reduced cost The cost of reengineering may be significantly less than the cost of developing
new software.
2. Reverse engineering
The program is analyzed and information extracted from it.
This helps to document its organization and functionality.
This process is usually completely automated.
4. Program modularization
Related parts of the program are grouped together.
Where appropriate, redundancy is removed.
This is a manual process.
5. Data reengineering
The data processed by the program is changed to reflect program changes.
This may mean redefining database schemas, converting existing databases to the new
structure, cleaning up the data, finding and correcting mistakes, removing duplicate
records, etc.
Tools are available to support data reengineering.
Reengineering approaches
The costs of reengineering depend on the extent of the work that is carried out.
The following figure shows a spectrum of possible approaches to reengineering
Disadvantages of reengineering
There are limits to how much you can improve a system by reengineering.
It isn’t possible to convert a system written using a functional approach to an object-
oriented system.
Major architectural changes of the system data management cannot be carried out
automatically.
The reengineered system will probably not be as maintainable as a new system developed
using modern software engineering methods.
There are situations (bad smells) in which the code of a program can be improved or
refactored. They are as follows
1. Duplicate code The same or very similar code may be included at different places in a
program. This can be removed and implemented as a single method or function that is called
as required.
Organizations have a limited budget for maintaining and upgrading legacy systems.
They have to decide how to get the best return on their investment.
This involves making a realistic assessment of their legacy systems and then deciding on
the most appropriate strategy for evolving these systems.
Business perspective is to decide whether or not the business really needs the system.
Technical perspective is to assess the quality of the application software and the
system’s support software and hardware.
1. Low quality, low business value Keeping these systems in operation will be expensive
and the rate of the return to the business will be fairly small. These systems should be
scrapped.
2. Low quality, high business value These systems are making an important business
contribution so they cannot be scrapped. However, their low quality means that it is
expensive to maintain them. These systems should be reengineered to improve their quality.
They may be replaced, if a suitable off-the-shelf system is available.
3. High quality, low business value These are systems that don’t contribute much to the
business but which may not be very expensive to maintain. It is not worth replacing these
systems so normal system maintenance may be continued if expensive changes are not
required and the system hardware remains in use. If expensive changes become necessary,
the software should be scrapped.
4. High quality, high business value These systems have to be kept in operation. However,
their high quality means that you don’t have to invest in transformation or system
replacement. Normal system maintenance should be continued.
Business perspective
The four basic issues that have to be discussed with system stakeholders to assess business
value of the system are as follows
1. The use of the system If systems are only used occasionally or by a small number of
people, they may have a low business value. However, there may be occasional but important
use of systems. For example, in a university, a student registration system may only be used
at the beginning of each academic year. However, it is an essential system with a high
business value.
2. The business processes that are supported When a system is introduced, business
processes are designed to exploit the system’s capabilities. However, as the environment
changes, the original business processes may become obsolete. Therefore, a system may have
a low business value because it forces the use of inefficient business processes.
3. The system dependability If a system is not dependable and the problems directly affect
the business customers or mean that people in the business are diverted from other tasks to
solve these problems, the system has a low business value.
4. The system outputs If the business depends on the system outputs, then the system has a
high business value. Conversely, if these outputs can be easily generated in some other way
or if the system produces outputs that are rarely used, then its business value may be low.
Technical perspective
To assess a software system from a technical perspective, you need to consider both the
application system itself and the environment in which the system operates.
Data can be collected to assess the quality of the system. The data that can be collected are
1. The number of system change requests System changes usually corrupt the system
structure and make further changes more difficult. The higher this value, the lower the quality
of the system.
2. The number of user interfaces The more interfaces, the more likely that there will be
inconsistencies and redundancies in these interfaces, hence reducing system quality.
3. The volume of data used by the system The higher the volume of data (number of files,
size of database, etc.), the more likely it is that there will be data inconsistencies that
reduce the system quality.