Testing Overview
Purpose of Testing
Several factors contribute to the importance of making testing a high priority of any
software development effort. These include:
• Reducing the cost of developing the program. Whatever small savings you gain
early in the development cycle by delaying testing are almost certain to be
outweighed by increased development costs later. Common estimates indicate
that a problem that goes undetected and unfixed until a program is actually in
operation can be 40 to 100 times more expensive to resolve than resolving the
same problem early in the development cycle.
• Ensuring that your application behaves exactly as you describe it to the user. For
the vast majority of programs, unpredictable behavior is the least desirable
outcome of using an application.
• Reducing the total cost of ownership. By providing software that looks and
behaves as shown in your documentation, your customers require fewer hours of
training and less support from product experts.
The earlier in the development cycle that testing becomes part of the effort, the
better. Planning is crucial to a successful testing effort, in part because it has a great
deal to do with setting expectations. Considering budget, schedule, and performance
in test plans increases the likelihood that testing does take place and is effective and
efficient. Planning also ensures tests are not forgotten or repeated unless necessary
for regression testing.
Requirements-Based Testing
The requirements section of the software specification does more than set
benchmarks and list features. It also provides the basis for all testing on the product.
After all, testing generally identifies defects that create, cause, or allow behavior not
expected in the software based on descriptions in the specification; thus, the test
team should be involved in the specification-writing process. Specification writers
should maintain the following standard when presenting requirements:
• All requirements must be testable; that is, each must be written so that a test can
verify the program complies.
You should begin designing test cases as the specification is being written. Analyze
each specification from the viewpoint of how well it supports the development of test
cases. The actual exercise of developing a test case forces you to think more
critically about your specifications.
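For example, here is a minimal sketch (in Python's unittest, with an entirely
hypothetical requirement and function names) of turning a requirement into test
cases. Writing the tests immediately forces the specification to answer questions
such as whether the stated limit itself is allowed and what should happen beyond it:

    import unittest

    MAX_FILE_BYTES = 10 * 1024 * 1024  # assumed meaning of "up to 10 MB"

    class FileTooLargeError(Exception):
        pass

    def save_file(data):
        # Placeholder for the hypothetical unit the requirement describes:
        # "the application shall save files of up to 10 MB".
        if len(data) > MAX_FILE_BYTES:
            raise FileTooLargeError("file exceeds the 10 MB limit")
        return True

    class TestSaveFileRequirement(unittest.TestCase):
        def test_save_at_the_limit_succeeds(self):
            self.assertTrue(save_file(b"x" * MAX_FILE_BYTES))

        def test_save_over_the_limit_reports_an_error(self):
            with self.assertRaises(FileTooLargeError):
                save_file(b"x" * (MAX_FILE_BYTES + 1))

    if __name__ == "__main__":
        unittest.main()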
The test plan outlines the entire testing process and includes the individual test
cases. To develop a solid test plan, you must systematically explore the program to
ensure coverage is thorough, but not unnecessarily repetitive. A formal test plan
establishes a testing process that does not depend upon accidental, random testing.
Testing, like development, can easily become a task that perpetuates itself. As such,
the application specifications, and subsequently the test plan, should define the
minimum acceptable quality to ship the application.
Two common approaches to testing are the waterfall approach and the evolutionary
approach. In the waterfall approach, each phase of development is completed before
the next begins, so testing occurs only after the entire application has been built.
An alternative is the evolutionary approach, in which you develop a modular piece (or
unit) of an application, test it, fix it, feel somewhat satisfied with it, and then add
another small piece that adds functionality. You then test the two units as an
integrated component, increasing the complexity as you proceed. Some of the
advantages to this approach are as follows:
• You are constantly delivering a working, useful product. If you are adding
functionality in priority order, you could stop development at any time and know
that the most important work is completed.
• Rather than trying to develop one huge test plan, you can start with small,
modular pieces of what will become part of the large, final test plan. In the
interim, you can use the smaller pieces to find bugs.
• You can add new sections to the test plan or go into depth in new areas, and put
each part to use as it is completed.
The range of the arguments associated with different approaches to testing is very
large and well beyond the scope of this documentation. If the suggestions here do
not seem to fit your project, you may want to do further research.
Unit Testing
The primary goal of unit testing is to take the smallest piece of testable software in
the application, isolate it from the remainder of the code, and determine whether it
behaves exactly as you expect. Each unit is tested separately before the units are
integrated into modules, where the interfaces between them are tested. Unit testing
has proven its value in that a large percentage of defects are identified during its use.
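As a minimal sketch of the idea, the following Python unittest exercises a single
hypothetical unit (apply_discount) in complete isolation from the rest of an
application:

    import unittest

    def apply_discount(price, percent):
        # Hypothetical smallest testable unit: computes a discounted
        # price and rejects percentages outside the range 0-100.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_returns_the_original_price(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()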
The most common approach to unit testing requires drivers and stubs to be written.
The driver simulates a calling unit and the stub simulates a called unit. The
investment of developer time in this activity sometimes results in demoting unit
testing to a lower level of priority, and that is almost always a mistake. Even though
the drivers and stubs cost time and money, unit testing provides some undeniable
advantages. It allows for automation of the testing process, reduces the difficulty of
discovering errors in more complex pieces of the application, and enhances test
coverage because attention is given to each unit.
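The following hedged sketch shows the driver-and-stub pattern in Python, using
unittest.mock to stand in for a unit that has not been written or integrated yet
(build_report and fetch_totals are hypothetical names):

    import unittest
    from unittest.mock import Mock

    def build_report(data_source):
        # Unit under test: formats the totals supplied by a called unit.
        totals = data_source.fetch_totals()
        return ", ".join(f"{name}: {value}" for name, value in totals)

    class TestBuildReport(unittest.TestCase):
        # The test class plays the role of the driver; the Mock plays
        # the role of a stub for the not-yet-integrated called unit.
        def test_formats_totals_from_the_stubbed_source(self):
            stub_source = Mock()
            stub_source.fetch_totals.return_value = [("east", 10), ("west", 20)]
            self.assertEqual(build_report(stub_source), "east: 10, west: 20")
            stub_source.fetch_totals.assert_called_once()

    if __name__ == "__main__":
        unittest.main()

Because the stub is independent of any real data source, the same test can be rerun
unchanged as the surrounding code evolves, which is what makes the drivers and
stubs cheaper on a per-use basis.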
For example, if you have two units and decide it would be more cost effective to glue
them together and initially test them as an integrated unit, an error could occur in a
variety of places: in either unit, in the interface between them, or in the way they
interact.
Finding the error (or errors) in the integrated module is much more complicated than
first isolating the units, testing each, then integrating them and testing the whole.
Drivers and stubs can be reused so the constant changes that occur during the
development cycle can be retested frequently without writing large amounts of
additional test code. In effect, this reduces the cost of writing the drivers and stubs
on a per-use basis and the cost of retesting is better controlled.
Integration Testing
Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the interface
between them is tested. A component, in this sense, refers to an integrated
aggregate of more than one unit. In a realistic scenario, many units are combined
into components, which are in turn aggregated into even larger parts of the program.
The idea is to test combinations of pieces and eventually expand the process to test
your modules with those of other groups. Eventually all the modules making up a
process are tested together. Beyond that, if the program is composed of more than
one process, the processes should be tested in pairs rather than all at once.
Integration testing identifies problems that occur when units are combined. By using
a test plan that requires you to test each unit and ensure the viability of each before
combining units, you know that any errors discovered when combining units are
likely related to the interface between units. This method reduces the number of
possibilities to a far simpler level of analysis.
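A minimal sketch, again with hypothetical unit names: two units that have each
passed their own tests are combined, so a failure in the test below points at the
interface between them rather than at either unit:

    import unittest

    def parse_record(line):
        # Unit 1 (already unit tested): parses "name,amount" lines.
        name, amount = line.split(",")
        return name.strip(), int(amount)

    def total_by_name(records):
        # Unit 2 (already unit tested): sums the amounts per name.
        totals = {}
        for name, amount in records:
            totals[name] = totals.get(name, 0) + amount
        return totals

    class TestParseAndTotalIntegration(unittest.TestCase):
        def test_parsed_records_feed_the_totaller(self):
            lines = ["east, 10", "west, 20", "east, 5"]
            records = [parse_record(line) for line in lines]
            self.assertEqual(total_by_name(records), {"east": 15, "west": 25})

    if __name__ == "__main__":
        unittest.main()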
You can do integration testing in a variety of ways, but the following are two
common strategies:
• The top-down approach requires the highest-level modules be tested and
integrated first. High-level logic and data flow are tested early in the process, and
the need for drivers is minimized. The downside is that the need for stubs
complicates test management, and low-level utility modules are tested relatively
late in the development cycle.
• The bottom-up approach requires the lowest-level units be tested and integrated
first. These units are frequently referred to as utility modules. By using this
approach, utility modules are tested early in the development process and the
need for stubs is minimized. The downside, however, is that the need for drivers
complicates test management and high-level logic and data flow are tested late.
Like the top-down approach, the bottom-up approach also provides poor support
for early release of limited functionality. (A brief driver sketch follows this list.)
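As a brief, hypothetical sketch of the bottom-up idea: the lowest-level utility calls
nothing beneath it, so it needs no stubs, only a driver (here, the test class itself):

    import unittest

    def normalize_path(path):
        # Hypothetical lowest-level utility module: it depends on
        # nothing below it, so no stubs are required to test it.
        parts = path.replace("\\", "/").split("/")
        return "/".join(part for part in parts if part)

    class NormalizePathDriver(unittest.TestCase):
        # The driver exercising the utility; higher-level callers
        # replace it as integration moves up a layer.
        def test_collapses_mixed_separators(self):
            self.assertEqual(normalize_path("a//b\\c/"), "a/b/c")

    if __name__ == "__main__":
        unittest.main()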
Regression Testing
Any time you modify an implementation within a program, you should also do
regression testing. You can do so by rerunning existing tests against the modified
code to determine whether the changes break anything that worked prior to the
change and by writing new tests where necessary. Adequate coverage without
wasting time should be a primary consideration when conducting regression tests.
Try to spend as little time as possible doing regression testing without reducing the
probability that you will detect new failures in old, already tested code.
Some strategies and factors to consider during this process include the following:
• Test fixed bugs promptly. The programmer might have handled the symptoms
but not addressed the underlying cause. (A regression-test sketch follows this list.)
• Watch for side effects of fixes. The bug itself might be fixed but the fix might
create other bugs.
• If two or more tests are similar, determine which is less effective and get rid of it.
• Identify tests that the program consistently passes and archive them.
• Make changes (small and large) to data and find any resulting corruption.
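For instance, a regression test written promptly for a fixed bug pins the fix in place
and guards against side effects; the bug and the function below are hypothetical:

    import unittest

    def parse_quantity(text):
        # Hypothetical fixed unit: it originally failed on input with
        # surrounding whitespace; the fix strips before converting.
        return int(text.strip())

    class TestQuantityRegression(unittest.TestCase):
        def test_whitespace_input_no_longer_fails(self):
            # Pins the underlying cause of the fixed bug, not just
            # the symptom that was first reported.
            self.assertEqual(parse_quantity("  42 "), 42)

        def test_the_fix_did_not_break_plain_input(self):
            # Watches for side effects of the fix itself.
            self.assertEqual(parse_quantity("7"), 7)

    if __name__ == "__main__":
        unittest.main()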
Building a Library