Software Testing

The document discusses strategies for software testing, including unit testing, integration testing using top-down and bottom-up approaches, and regression testing. It provides details on each type of testing and the steps involved.

A Strategic Approach to Software Testing

• Testing is a set of activities that can be planned in advance and conducted systematically.
• For this reason, a template for software testing—a set of steps into which you can place specific test-case design techniques and testing methods—should be defined for the software process.

Generic characteristics of a strategic approach to software testing


 To perform effective testing, you should conduct effective technical reviews. By doing
this, many errors will be eliminated before testing commences.
 Testing begins at the component level and works “outward” toward the integration
of the entire computer-based system.
 Different testing techniques are appropriate for different software engineering
approaches and at different points in time.
 Testing is conducted by the developer of the software and (for large projects) an
independent test group.
 Testing and debugging are different activities, but debugging must be accommodated
in any testing strategy.
Verification and Validation (V & V)

Software testing is one element of a broader topic that is often referred to as verification and validation (V&V).
Verification refers to the set of tasks that ensure that software correctly implements a specific function.
Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.

Boehm states this another way:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Who Tests the Software?
Strategic Issues
• The best strategy will fail if a series of overriding issues is not addressed.
• Specify product requirements in a quantifiable manner long before testing commences.
 Although the objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability.
•State testing objectives explicitly.
 The specific objectives of testing should be stated in measurable terms.
•Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes “rapid cycle testing.”
•Build “robust” software that is designed to test itself.
•Use effective technical reviews as a filter prior to testing.
• Conduct technical reviews to assess the test strategy and the test cases themselves.
•Develop a continuous improvement approach for the testing process.
Short note on Software Testing Strategy

• The software process may be viewed as the spiral illustrated in the figure.
• System engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established.
• Moving inward along the spiral, you come to design and finally to coding.
Software Testing Strategy

Unit testing begins at the vortex of the spiral and concentrates on each unit
(e.g., component, class, or WebApp content object) of the software as
implemented in source code.

Integration testing, where the focus is on design and the construction of the
software architecture.

Validation testing, where requirements established as part of requirements modeling are validated against the software that has been constructed.

Finally, system testing, where the software and other system elements are tested
as a whole.

Process from a procedural point of view:

Testing within the context of software engineering is actually a series of four steps that are implemented sequentially.
Introduction - strategy for software testing

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.

A strategy for software testing provides a road map that describes the
steps to be conducted as part of testing.

Any testing strategy must incorporate:

 Test planning,
 Test-case design,
 Test execution,
 Resultant data collection and evaluation.
What Testing Shows
Test strategies for Conventional Software

There are many strategies that can be used to test software.
• At one extreme, you can wait until the system is fully constructed and
then conduct tests on the overall system in hopes of finding errors.
 This approach simply does not work. It will result in buggy software.
• At the other extreme, you could conduct tests on a daily basis,
whenever any part of the system is constructed.
 This approach, although less appealing to many, can be very effective.
• A testing strategy that is chosen by most software teams falls between
the two extremes.
• It takes an incremental view of testing,
 Beginning with the testing of individual program units,
 Moving to tests designed to facilitate the integration of the units,
 Culminating with tests that exercise the constructed system.
Unit Test
• Unit testing focuses verification effort on the smallest unit of
software design—the software component or module.
• The unit test focuses on the internal processing logic and data
structures within the boundaries of a component.
• This type of testing can be conducted in parallel for multiple
components.
• The module interface is tested to ensure that information properly flows
into and out of the program unit under test.

• Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution.
• All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
• Finally, all error-handling paths are tested.
• Good design anticipates error conditions and establishes error-handling paths to reroute or cleanly terminate processing when an error does occur.
• Unit testing is simplified when a component with high cohesion is designed.
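As a sketch of these ideas, the following hypothetical unit test (the component, its name, and its bounds are invented for illustration, not taken from the text) exercises internal processing logic, boundary conditions, and error-handling paths:

```python
import unittest

# Hypothetical component under test (invented for illustration): averages
# the values that fall within fixed processing bounds.
def bounded_average(values, low=0, high=100):
    """Average the values in [low, high]; reject empty or all-out-of-range input."""
    if not values:
        raise ValueError("no values supplied")
    in_range = [v for v in values if low <= v <= high]
    if not in_range:
        raise ValueError("no values within bounds")
    return sum(in_range) / len(in_range)

class BoundedAverageTest(unittest.TestCase):
    def test_internal_logic(self):
        # Typical values exercise the component's processing logic.
        self.assertEqual(bounded_average([10, 20, 30]), 20)

    def test_boundary_conditions(self):
        # Values exactly at the limits must still be processed.
        self.assertEqual(bounded_average([0, 100]), 50)

    def test_error_handling_paths(self):
        # Error-handling paths are exercised explicitly, not left to chance.
        with self.assertRaises(ValueError):
            bounded_average([])
        with self.assertRaises(ValueError):
            bounded_average([200, 300])

# Run the test case inline rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BoundedAverageTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the component has high cohesion (one function, one responsibility), the whole unit can be verified through a handful of focused cases.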
Integration Testing

• Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing.

Different integration testing strategies:
 Top-down testing
 Bottom-up testing
 Regression Testing
 Smoke Testing
Top-down testing

• Top-down integration testing is an incremental approach to construction of the software architecture.
• Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
• Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Integration Testing:

The integration process is performed in a series of five steps

1. The main control module is used as a test driver and stubs are substituted for
all components directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the real
component.

5. Regression testing may be conducted to ensure that new errors have not been
introduced.

The process continues from step 2 until the entire program structure is built.
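The five steps above can be sketched as follows; all names here are invented for illustration, and the subordinate component is injected as a parameter purely so that a stub and the real component can be swapped:

```python
# Hypothetical top-down sketch: the main control module acts as the test
# driver, and its subordinate starts out as a stub.

def format_report_stub(data):
    # Step 1: a stub substitutes for the component directly subordinate
    # to the main control module; it returns a canned answer.
    return "REPORT(stub)"

def format_report_real(data):
    # The actual component that replaces the stub in step 2.
    return "REPORT: " + ", ".join(str(d) for d in data)

def main_control(data, format_report):
    # The main control module; its subordinate is passed in so a stub or
    # the real component can be swapped without changing this code.
    return format_report(data)

# Step 3: tests are conducted as each component is integrated.
assert main_control([1, 2], format_report_stub) == "REPORT(stub)"
# Step 4: the stub is replaced with the real component and re-tested.
assert main_control([1, 2], format_report_real) == "REPORT: 1, 2"
```

In a real system the swap happens in the build, not via a parameter; the parameter here just keeps the sketch self-contained.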
Problem encountered during top-down integration testing.

The top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise.
The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels.
Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

As a tester, you are left with three choices:
(1) delay many tests until stubs are replaced with actual modules,
(2) develop stubs that perform limited functions that simulate the
actual module, or
(3) integrate the software from the bottom of the hierarchy upward.
Bottom-Up Integration
Bottom-up Integration Testing :

• Bottom-up integration testing begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
• Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is always
available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case
input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the
program structure.
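The four steps above can be sketched as follows, with invented component names; the driver is a throwaway control program, not part of the product:

```python
# Hypothetical bottom-up sketch: atomic components are combined into a
# cluster, and a driver coordinates test-case input and output.

def parse_record(line):
    # Atomic component: split a comma-separated record into integers.
    return [int(field) for field in line.split(",")]

def sum_fields(fields):
    # Atomic component: total the parsed fields.
    return sum(fields)

def cluster_total(line):
    # Cluster (build): the two low-level components combined to perform
    # one specific subfunction (step 1).
    return sum_fields(parse_record(line))

def driver():
    # Driver (step 2): a control program that feeds test cases to the
    # cluster and checks its output (step 3); it is removed in step 4,
    # once the cluster is combined upward in the program structure.
    cases = {"1,2,3": 6, "10,20": 30}
    for line, expected in cases.items():
        assert cluster_total(line) == expected
    return "cluster OK"

assert driver() == "cluster OK"
```

Because the atomic components already exist and work, no stubs are needed: the cluster can pull real data up from the bottom of the hierarchy.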
Regression Testing:

Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

• Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed.
• Regression testing helps to ensure that changes (due to testing or for other
reasons) do not introduce unintended behavior or additional errors.
• Regression testing may be conducted manually by re-executing a subset of all test cases, or by using automated capture/playback tools.
• It is impractical and inefficient to reexecute every test for every program function
once a change has occurred.
• Regression testing is a type of software testing that seeks to uncover new software
bugs.
• Regression testing is the process of testing changes to computer programs, such as enhancements, patches, or configuration changes, to make sure that the older code still works with the new changes.
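The "subset" idea can be sketched as follows; the function, the archive of recorded cases, and the selection rule (match on the changed function's name) are all invented for illustration:

```python
# Hypothetical regression sketch: after a change, re-execute only the
# recorded test cases that cover the changed function, since re-executing
# every test for every function after each change is impractical.

def discount(price, pct):
    # The function that was just corrected.
    return round(price * (1 - pct / 100), 2)

# Archive of previously conducted tests: each entry names the function it
# covers, its inputs, and the previously observed (correct) output.
test_archive = [
    ("discount", (100, 10), 90.0),
    ("discount", (80, 25), 60.0),
    ("tax", (100, 5), 105.0),   # unrelated to this change; skipped
]

def regression_subset(changed, functions):
    # Re-run only the subset covering the changed function; report any
    # case whose output no longer matches the recorded result.
    failures = []
    for name, args, expected in test_archive:
        if name != changed:
            continue  # not affected by the change; do not re-execute
        if functions[name](*args) != expected:
            failures.append((name, args))
    return failures

assert regression_subset("discount", {"discount": discount}) == []
```

Capture/playback tools automate the same loop: record the inputs and expected outputs once, then replay the affected subset after each change.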
Smoke Testing:

• Smoke testing is an integration testing approach that is commonly used when product software is developed.
• Smoke testing is performed by developers before releasing the build to the testing team.
• After the build is released, it is performed by testers to decide whether to accept the build for further testing.
• It is designed as a pacing (speedy) mechanism for time-critical projects, allowing the software team to assess the project on a frequent basis.
Smoke Testing Example

For example, suppose a project has five modules: login, view user, user detail page, new user creation, and task creation.

 First, the developers perform smoke testing by executing the major functionality of each module: whether a user can log in with valid credentials, whether a new user can be created after login, and whether the created user can be viewed.

 This smoke testing is always done by the development team before submitting (releasing) the build to the testing team.

• Once the build is released to the testing team, the testers check whether to accept or reject the build by testing its major functionality. This is the smoke test done by testers.
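A minimal sketch of that five-module example follows; the `Build` class and its methods are invented stand-ins for the real modules, and only the major "happy path" of each is exercised:

```python
# Hypothetical smoke-test sketch: touch the major functionality of each
# module just enough to decide whether to accept the build for further
# testing. Any failure rejects the build outright.

class Build:
    def __init__(self):
        self.users = {"admin": "secret"}

    def login(self, user, password):
        return self.users.get(user) == password

    def create_user(self, user, password):
        self.users[user] = password

    def view_user(self, user):
        return user in self.users

def smoke_test(build):
    checks = [
        build.login("admin", "secret"),            # login module
        (build.create_user("bob", "pw") or True),  # new-user creation
        build.view_user("bob"),                    # view-user module
        build.login("bob", "pw"),                  # created user can log in
    ]
    return "accept build" if all(checks) else "reject build"

print(smoke_test(Build()))  # prints "accept build"
```

The point is breadth, not depth: each module is touched once on its main path, which is why the whole run is fast enough to repeat daily.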
• Smoke testing provides a number of benefits when it is applied on
complex, time critical software projects.
• Integration risk is minimized. Because smoke tests are conducted
daily, incompatibilities and other show-stopper errors are uncovered early.
• The quality of the end product is improved.
• Because the approach is construction (integration) oriented, smoke
testing is likely to uncover functional errors as well as architectural and
component-level design errors.
• If these errors are corrected early, better product quality will result.
• Error diagnosis and correction are simplified.
• Progress is easier to assess.
What Are Reviews?

• Software reviews are a “filter” for the software process.
• Reviews are applied at various points during software engineering and serve to uncover errors and defects that can then be removed.
• Software reviews “purify” software engineering work products, including requirements and design models, code, and testing data.
• Technical reviews – TR (Peer Reviews) are the most effective
mechanism for finding mistakes early in the software process.
• Six steps are employed: planning, preparation, structuring the meeting, noting errors, making corrections, and verifying corrections.
What Do We Look For?

Errors and defects:

Error — a quality problem found before the software is released to end users.
Defect — a quality problem found only after the software has been released to end users.

• The primary objective of technical reviews is to find errors during the process so that they do not become defects after release of the software.
• The obvious benefit of technical reviews is the early discovery of errors so that they do not propagate to the next step in the software process.
Short note on Reviews Guidelines

• Review the product, not the producer.
• Set an agenda and maintain it.
• Limit debate and rebuttal.
 Rather than spending time debating a question, record the issue for further discussion offline.
• Enunciate (identify) problem areas, but don't attempt to solve every problem noted.
 A review is not a problem-solving session. The solution of a problem can often be accomplished by the producer alone or with the help of only one other individual.
 Problem solving should be postponed until after the review meeting.
• Take written notes.
• Limit the number of participants and insist upon advance preparation.
• Develop a checklist for each product that is likely to be reviewed.
• Allocate resources and schedule time for FTRs.
• Conduct meaningful training for all reviewers.
• Review your early reviews.
Short note on Formal Technical Reviews. (FTR)

Formal technical review (FTR) is a software quality control activity performed by software engineers (and others).

The objectives of an FTR are:
(1) To uncover errors in function, logic, or implementation for any
representation of the software;
(2) To verify that the software under review meets its requirements;
(3) To ensure that the software has been represented according to
predefined standards
(4) To achieve software that is developed in a uniform manner;
(5) To make projects more manageable. In addition, the FTR serves as a
training ground, enabling junior engineers to observe different
approaches to software analysis, design, and implementation.

The FTR is actually a class of reviews that includes walkthroughs and inspections.
The Review Meeting:

Every review meeting should abide by the following constraints:
• Between three and five people (typically) should be involved in the
review.
• Advance preparation should occur but should require no more than
two hours of work for each person.
• The duration of the review meeting should be less than two hours.
Given these constraints, it should be obvious that an FTR focuses on a
specific (and small) part of the overall software.
• For example, rather than attempting to review an entire design,
walkthroughs are conducted for each component or small group of
components.
Review Summary Report
• What was reviewed?
• Who reviewed it?
• What were the findings and conclusions?
The Players of the Review Meeting

Producer
The individual who has developed the work product; he or she informs the project leader that the work product is complete and that a review is required.

Review leader
Evaluates the product for readiness, generates copies of product
materials, and distributes them to two or three reviewers for advance
preparation.

Reviewer(s)
Expected to spend between one and two hours reviewing the product,
making notes, and otherwise becoming familiar with the work.

Recorder
A reviewer who records (in writing) all important issues raised during
the review.
Short note on Informal Reviews

Informal reviews include:
1. A simple desk check of a software engineering work product with a
colleague,
2. A casual meeting (involving more than two people) for the purpose of
reviewing a work product,
3. The review-oriented aspects of pair programming.

• A simple desk check or a casual meeting conducted with a colleague is a review.
• However, because there is no advance planning or preparation, no
agenda or meeting structure, and no follow-up on the errors that are
uncovered, the effectiveness of such reviews is considerably lower than more
formal approaches.
• But a simple desk check can and does uncover errors that might
otherwise propagate further into the software process.
Defect Amplification (Extension / Increase)

A defect amplification model can be used to illustrate the generation and detection of errors during the design and code generation actions of a software process.
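The model can be sketched numerically; the percentages below are made up purely for illustration, not taken from the text:

```python
# Hypothetical defect-amplification sketch: errors flow into a development
# step, some are amplified by the work of the step, new errors are
# generated, and the step's review detects a fraction of the total.

def step(errors_in, pct_amplified, generated, detection_efficiency):
    amplified = errors_in * pct_amplified
    total = errors_in + amplified + generated
    # Errors the review misses are passed on to the next step.
    return total * (1 - detection_efficiency)

# Design step (made-up numbers): 10 errors arrive, 20% are amplified,
# 6 new errors are generated, and a review catches half of the total:
# (10 + 2 + 6) * 0.5 = 9 errors pass on to coding.
passed_to_code = step(errors_in=10, pct_amplified=0.2,
                      generated=6, detection_efficiency=0.5)
assert passed_to_code == 9.0
```

Chaining such steps shows why early reviews matter: every error that slips through one filter is amplified and joined by new errors at the next.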
