
SE_Module_5:

Software Testing
Compiled By:
Ms. Priti Rumao
Content

• Unit testing, Integration testing, Validation testing, System testing


• Testing Techniques, white-box testing: Basis path, Control structure testing, black-box testing: Graph based,
Equivalence, Boundary Value

• Types of Software Maintenance, Re-Engineering, Reverse Engineering


Introduction
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
• Testing Principles:
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• The Pareto principle applies to software testing: Stated simply, the Pareto principle implies that 80 percent
of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The
problem, of course, is to isolate these suspect components and to thoroughly test them.
• Testing should begin “in the small” and progress toward testing “in the large.”
• Exhaustive testing is not possible.
Introduction
• To be most effective, testing should be conducted by an independent third party.

Test Case Template:


Introduction
Test Case Example:
White Box Testing

• White-box testing, sometimes called glass-box testing, is a test case design method
that uses the control structure of the procedural design to derive test cases.
• Using white-box testing methods, the software engineer can derive test cases that
• (1) guarantee that all independent paths within a module have been exercised at
least once
• (2) exercise all logical decisions on their true and false sides
• (3) execute all loops at their boundaries and within their operational bounds
• (4) exercise internal data structures to ensure their validity.
White Box Testing (Cont…)
• Basis Path Method:
• The basis path method enables the test case designer to derive a logical complexity measure of a
procedural design and use this measure as a guide for defining a basis set of execution paths.
• 1. Flow Graph Notation: Before the basis path method can be introduced, a simple notation for the
representation of control flow, called a flow graph (or program graph) must be introduced.
• The flow graph depicts logical control flow using the notation illustrated in Figure below. Each structured
construct has a corresponding flow graph symbol.
White Box Testing (Cont…)
• Basis Path Method:
• 1. Flow Graph Notation:
• Each circle, called a flow graph node, represents one or more procedural statements. A sequence of
process boxes and a decision diamond can map into a single node.
• The arrows on the flow graph, called edges or links, represent flow of control and are analogous to
flowchart arrows.
• An edge must terminate at a node, even if the node does not represent any procedural statements.

• Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside
the graph as a region.
• Each node that contains a condition is called a predicate node and is characterized by two or more edges
emanating from it.
White Box Testing (Cont…)
• Basis Path Method:
• 2. Cyclomatic Complexity:
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity
of a program.
• When used in the context of the basis path testing method, the value computed for cyclomatic complexity
defines the number of independent paths in the basis set of a program and provides us with an upper
bound for the number of tests that must be conducted to ensure that all statements have been executed at
least once.
• An independent path is any path through the program that introduces at least one new set of processing
statements or a new condition.
• Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software
metric.
White Box Testing (Cont…)
• Basis Path Method:
• 2. Cyclomatic Complexity:
• Complexity is computed in one of three ways:

• 1. The number of regions of the flow graph corresponds to the cyclomatic complexity.

• 2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2

where E is the number of flow graph edges, N is the number of flow graph nodes.

• 3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1

where P is the number of predicate nodes contained in the flow graph


White Box Testing (Cont…)
• Basis Path Method:
• 2. Cyclomatic Complexity:
• Q. Compute the cyclomatic complexity for the graph alongside.
• Solution:

• 1. The flow graph has four regions.

• 2. V(G) = 11 edges - 9 nodes + 2 = 4.

• 3. V(G) = 3 predicate nodes + 1 = 4.

• So, cyclomatic complexity for graph is 4.
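To make the three formulas concrete, here is a minimal Python sketch that recomputes V(G) for a hypothetical 9-node, 11-edge flow graph with the same counts as the example above (the edge list itself is an assumption, not the exact graph from the slide):

```python
# Minimal sketch: computing cyclomatic complexity for a flow graph given
# as an edge list. The 9-node, 11-edge graph below is a hypothetical one
# chosen to match the counts in the worked example, not the slide's graph.

edges = [
    (1, 2),
    (2, 3), (2, 7),
    (3, 4), (3, 5),
    (4, 6), (5, 6),
    (6, 8),
    (7, 8), (7, 9),
    (8, 9),
]

nodes = {n for edge in edges for n in edge}

E = len(edges)      # number of flow graph edges
N = len(nodes)      # number of flow graph nodes
v_g = E - N + 2     # V(G) = E - N + 2

# Predicate nodes have two or more edges emanating from them.
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
P = sum(1 for d in out_degree.values() if d >= 2)

print(f"V(G) = E - N + 2 = {E} - {N} + 2 = {v_g}")   # 11 - 9 + 2 = 4
print(f"V(G) = P + 1 = {P} + 1 = {P + 1}")           # 3 + 1 = 4
```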


White Box Testing (Cont…)
• Basis Path Method:
• 3. Deriving Test Cases:
• Using the design or code as a foundation, draw a corresponding flow graph.
• Determine the cyclomatic complexity of the resultant flow graph.
• Determine a basis set of linearly independent paths.
• Prepare test cases that will force execution of each path in the basis set.
• 4. Graph Matrices:
• A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of
nodes on the flow graph.
• Each row and column corresponds to an identified node, and matrix entries correspond to connections (an
edge) between nodes.
White Box Testing (Cont…)
• Basis Path Method:
• 4. Graph Matrices:
• Each node on the flow graph is identified by numbers, while each edge is identified by letters. A letter entry
is made in the matrix to correspond to a connection between two nodes.
• The graph matrix is nothing more than a tabular representation of a flow graph.
• By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating
program control structure during testing. The link weight provides additional information about control
flow.
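As a rough illustration (not taken from the slides), the sketch below builds a graph matrix for a small hypothetical flow graph, using a link weight of 1 to mean "an edge exists". With 1/0 entries the matrix becomes a connection matrix, and rows containing two or more 1s correspond to predicate nodes, which ties the matrix back to cyclomatic complexity:

```python
# Minimal sketch of a graph matrix for a small hypothetical flow graph.
# A link weight of 1 means "an edge exists"; with 1/0 entries this is the
# connection matrix, where rows containing two or more 1s correspond to
# predicate nodes.

nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (2, 4), (3, 2), (4, 5)]

size = len(nodes)
matrix = [[0] * size for _ in range(size)]   # square matrix, one row/column per node
for src, dst in edges:
    matrix[src - 1][dst - 1] = 1             # link weight 1 = connection

for node, row in zip(nodes, matrix):
    print(node, row)

# Summing (connections - 1) over every row and adding 1 reproduces the
# cyclomatic complexity for this single-entry, single-exit graph.
v_g = sum(max(sum(row) - 1, 0) for row in matrix) + 1
print("V(G) from the connection matrix:", v_g)   # 2
```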
White Box Testing (Cont…)
• Control Structure Testing:
• Condition Testing:

• Condition testing is a test case design method that exercises the logical conditions contained in a program
module.
• A simple condition is a Boolean variable or a relational expression, possibly preceded with one NOT (¬)
operator.
• A relational expression takes the form

E1 <relational-operator> E2

• where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠ (nonequality), >, or ≥. A
compound condition is composed of two or more simple conditions, Boolean operators, and parentheses.
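A minimal sketch of condition testing, assuming a hypothetical function with a compound condition; the test cases are chosen so that each simple condition is exercised on both its true and false sides:

```python
# Minimal sketch of condition testing: exercise each simple condition in a
# compound condition on both its true and false sides. The function and
# its thresholds are hypothetical examples.

def needs_review(amount, is_new_customer):
    # Compound condition: (amount > 1000) OR (is_new_customer AND amount > 100)
    return amount > 1000 or (is_new_customer and amount > 100)

# Test cases chosen so that each simple condition evaluates to both
# True and False somewhere in the suite.
cases = [
    (1500, False, True),   # amount > 1000 is True
    (500,  False, False),  # every simple condition False
    (500,  True,  True),   # is_new_customer True and amount > 100 True
    (50,   True,  False),  # is_new_customer True but amount > 100 False
]

for amount, new, expected in cases:
    assert needs_review(amount, new) == expected
print("all condition-testing cases passed")
```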
White Box Testing (Cont…)
• Control Structure Testing:
• Data Flow Testing:
• The data flow testing method selects test paths of a program according to the locations of definitions and
uses of variables in the program.
• To illustrate the data flow testing approach, assume that each statement in a program is assigned a unique
statement number and that each function does not modify its parameters or global variables. For a
statement with S as its statement number,
• DEF(S) = {X | statement S contains a definition of X}

• USE(S) = {X | statement S contains a use of X}

• If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of
statement S. The definition of variable X at statement S is said to be live at statement S' if there exists a path
from statement S to statement S' that contains no other definition of X.
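A short illustrative sketch (the fragment and its statement numbering are hypothetical) showing DEF and USE sets and where a definition is live:

```python
# Illustrative sketch (hypothetical fragment): DEF and USE sets per statement.

def fragment(a, b):
    x = a + b       # S1: DEF(S1) = {x}, USE(S1) = {a, b}
    if x > 0:       # S2: DEF(S2) = {},  USE(S2) = {x}
        y = x * 2   # S3: DEF(S3) = {y}, USE(S3) = {x}
    else:
        y = -x      # S4: DEF(S4) = {y}, USE(S4) = {x}
    x = y - 1       # S5: DEF(S5) = {x}, USE(S5) = {y}
    return x        # S6: USE(S6) = {x}

# The definition of x at S1 is live at S2, S3 and S4 (no other definition of
# x on those paths); it is killed by the new definition of x at S5. A
# definition-use chain to be covered is, for example, the def of y at S3
# (or S4) and its use at S5.
```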
White Box Testing (Cont…)
• Control Structure Testing:
• Loop Testing:
• Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Four
different classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured
loops.
• Simple loops: The following set of tests can be applied to simple loops, where n is the maximum number of
allowable passes through the loop.
• 1. Skip the loop entirely.
• 2. Only one pass through the loop.
• 3. Two passes through the loop.
• 4. m passes through the loop where m < n.
• 5. n-1, n, n + 1 passes through the loop.
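The simple-loop tests listed above can be sketched as follows, assuming a hypothetical function that makes at most n passes through its loop:

```python
# Minimal sketch of the simple-loop test classes, assuming a hypothetical
# function that loops over at most the first n readings.

def average_of_first(readings, n):
    """Average of at most the first n readings (hypothetical example)."""
    total = count = 0
    for value in readings[:n]:   # simple loop, at most n passes
        total += value
        count += 1
    return total / count if count else 0.0

n = 5
cases = {
    "skip the loop entirely": [],
    "one pass": [10],
    "two passes": [10, 20],
    "m passes (m < n)": [10, 20, 30],
    "n - 1 passes": [10, 20, 30, 40],
    "n passes": [10, 20, 30, 40, 50],
    "n + 1 passes (extra input ignored)": [10, 20, 30, 40, 50, 60],
}

for label, data in cases.items():
    print(f"{label:35s} -> {average_of_first(data, n)}")
```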
White Box Testing (Cont…)
• Control Structure Testing:
• Loop Testing:
• Nested loops: If we were to extend the test approach for simple loops to nested loops, the number of
possible tests would grow geometrically as the level of nesting increases. This would result in an impractical
number of tests.
• To reduce the number of tests:
• 1. Start at the innermost loop. Set all other loops to minimum values.
• 2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum
iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or excluded values.
• 3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values
and other nested loops to "typical" values.
• 4. Continue until all loops have been tested.
White Box Testing (Cont…)

• Control Structure Testing:


• Loop Testing:

• Concatenated loops: Concatenated loops can be tested using the approach defined for simple loops, if each
of the loops is independent of the other. However, if two loops are concatenated and the loop counter for
loop 1 is used as the initial value for loop 2, then the loops are not independent. When the loops are not
independent, the approach applied to nested loops is recommended.
• Unstructured loops: Whenever possible, this class of loops should be redesigned to reflect the use of the
structured programming constructs.
Black Box Testing
• Black-box testing, also called behavioral testing, focuses on the functional requirements of the
software.
• Black-box testing enables the software engineer to derive sets of input conditions that will fully
exercise all functional requirements for a program. Black-box testing is not an alternative to
white-box techniques.
• Black-box testing attempts to find errors in the following categories:
• (1) incorrect or missing functions
• (2) interface errors
• (3) errors in data structures or external database access
• (4) behavior or performance errors
• (5) initialization and termination errors
Black Box Testing (Cont…)
• Graph-Based Testing:
• Graph-based black-box testing begins by creating a graph of important objects and
their relationships and then devising a series of tests that cover the graph so that each object
and relationship is exercised and errors are uncovered.
• To accomplish these steps, the software engineer begins by creating a graph—a collection of
nodes that represent objects; links that represent the relationships between objects; node
weights that describe the properties of a node (e.g., a specific data value or state behavior); and
link weights that describe some characteristic of a link.
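A minimal sketch of the idea, using a small hypothetical object graph and checking that the test set covers every node and every link:

```python
# Minimal sketch of graph-based testing with a tiny hypothetical object graph:
# nodes are objects, links are relationships, and the test set is checked
# for covering every node and every link.

nodes = {"menu_select_new", "document_window", "document_text"}
links = {
    ("menu_select_new", "document_window"),   # "generates"
    ("document_window", "document_text"),     # "contains"
}

# Each test records which nodes and links it exercises.
tests = [
    {"nodes": {"menu_select_new", "document_window"},
     "links": {("menu_select_new", "document_window")}},
    {"nodes": {"document_window", "document_text"},
     "links": {("document_window", "document_text")}},
]

covered_nodes = set().union(*(t["nodes"] for t in tests))
covered_links = set().union(*(t["links"] for t in tests))

print("uncovered nodes:", nodes - covered_nodes)   # empty set if fully covered
print("uncovered links:", links - covered_links)   # empty set if fully covered
```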
Black Box Testing (Cont…)
• Equivalence Partitioning:
• Equivalence partitioning is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived.
• An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all
character data) that might otherwise require many cases to be executed before the general error
is observed.
• Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby
reducing the total number of test cases that must be developed.
• Test case design for equivalence partitioning is based on an evaluation of equivalence classes for
an input condition.
• Equivalence classes may be defined according to the following guidelines:
• 1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
Black Box Testing (Cont…)
• Equivalence Partitioning:
• 2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
• 3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.
• 4. If an input condition is Boolean, one valid and one invalid class are defined.
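A minimal sketch of guideline 1, assuming a hypothetical input condition (a withdrawal amount restricted to the range 1..500); one representative value is drawn from the valid class and from each invalid class:

```python
# Minimal sketch of equivalence partitioning for a hypothetical input
# condition: a withdrawal amount that must lie in the range 1..500.
# Guideline 1 gives one valid class (1..500) and two invalid classes
# (< 1 and > 500); one representative value is taken from each class.

def accept_withdrawal(amount):
    """Hypothetical validator for the input condition."""
    return 1 <= amount <= 500

partitions = [
    ("valid: 1 <= amount <= 500", 250, True),
    ("invalid: amount < 1",       -10, False),
    ("invalid: amount > 500",     900, False),
]

for name, representative, expected in partitions:
    assert accept_withdrawal(representative) == expected
    print(f"{name:30s} representative = {representative}")
```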
• Boundary Value Analysis:
• Boundary value analysis is a test case design technique that complements equivalence
partitioning.
• Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases
at the "edges" of the class.
Black Box Testing (Cont…)
• Boundary Value Analysis:
• Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
• 1. If an input condition specifies a range bounded by values a and b, test cases should be designed
with values a and b and just above and just below a and b.
• 2. If an input condition specifies a number of values, test cases should be developed that exercise
the minimum and maximum numbers. Values just above and below minimum and maximum are
also tested.
• 3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs.
pressure table is required as output from an engineering analysis program. Test cases should be
designed to create an output report that produces the maximum (and minimum) allowable
number of table entries.
• 4. If internal program data structures have prescribed boundaries (e.g., an array has a defined
limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
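Continuing the same hypothetical 1..500 range, a minimal boundary value analysis sketch that tests the boundaries a and b plus the values just below and just above them (guideline 1):

```python
# Minimal sketch of boundary value analysis for the same hypothetical
# range 1..500: test a, b, and the values just below and just above
# each boundary.

def accept_withdrawal(amount):
    return 1 <= amount <= 500

a, b = 1, 500
boundary_cases = [
    (a - 1, False),  # just below a
    (a,     True),   # a itself
    (a + 1, True),   # just above a
    (b - 1, True),   # just below b
    (b,     True),   # b itself
    (b + 1, False),  # just above b
]

for value, expected in boundary_cases:
    assert accept_withdrawal(value) == expected
print("all boundary cases behave as expected")
```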
Testing Strategies:
• The software engineering process may be viewed as the spiral illustrated in Figure below. Initially, system
engineering defines the role of software and leads to software requirements analysis, where the information
domain, function, behavior, performance, constraints, and validation criteria for software are established.

• Moving inward along the spiral, we come to design and finally to coding.
• Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software
as implemented in source code.
● Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture.
Testing Strategies:
• Taking another turn outward on the spiral, we encounter validation testing, where requirements established
as part of software requirements analysis are validated against the software that has been constructed.

• Finally, we arrive at system testing, where the software and other system elements are tested as a whole.
Unit Testing
• Unit testing focuses verification effort on the smallest unit of software design—the software component or
module.

• Unit testing is normally considered as an adjunct to the coding step.


• After source level code has been developed, reviewed, and verified for correspondence to component-level
design, unit test case design begins.

• A review of design information provides guidance for establishing test cases that are likely to uncover errors.
• Each test case should be coupled with a set of expected results.
• Because a component is not a stand-alone program, driver and/or stub software must be developed for each
unit test.
Unit Testing (Cont…)
• The unit test environment is illustrated in the figure alongside.
• In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results.

• Stubs serve to replace modules that are subordinate to (called by) the
component to be tested. A stub or "dummy subprogram" uses the
subordinate module's interface, may do minimal data manipulation,
prints verification of entry, and returns control to the module
undergoing testing.

• Drivers and stubs represent overhead. That is, both are software that
must be written but that is not delivered with the final software product.
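A minimal sketch of a driver and a stub (all module names and values are hypothetical): the stub stands in for a subordinate tax-lookup module so the component can be unit tested in isolation, while the driver feeds test cases to the component and prints results:

```python
# Minimal sketch of a unit-test driver and a stub. The component under
# test, compute_invoice_total, normally calls a subordinate tax-lookup
# module; the stub replaces that module so the component can be tested
# in isolation. All names and values are hypothetical.

def lookup_tax_rate_stub(region):
    """Stub: mimics the subordinate module's interface, does minimal work,
    and prints verification that it was entered."""
    print(f"  [stub] lookup_tax_rate called for region={region!r}")
    return 0.10  # fixed, predictable value for testing

def compute_invoice_total(net_amount, region, lookup_tax_rate):
    """Component under test: the subordinate module is passed in so a stub
    can stand in for it during unit testing."""
    return round(net_amount * (1 + lookup_tax_rate(region)), 2)

def driver():
    """Driver: a throwaway 'main program' that feeds test cases to the
    component and prints the results."""
    test_cases = [
        (100.0, "EU", 110.0),
        (0.0,   "US", 0.0),
        (19.99, "IN", 21.99),
    ]
    for net, region, expected in test_cases:
        actual = compute_invoice_total(net, region, lookup_tax_rate_stub)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: net={net}, region={region}, "
              f"expected={expected}, actual={actual}")

if __name__ == "__main__":
    driver()
```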
Unit Testing (Cont…)
• If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many components cannot be
adequately unit tested with "simple" overhead software.

• In such cases, complete testing can be postponed until the integration test step.
• Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a
component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
Advantages:

● It helps to write better code.
● It helps to catch bugs earlier.
● It helps to detect regression bugs.
● It makes code easy to refactor.
● It makes developers more efficient at writing code.

Disadvantages:

● It takes time to write test cases.
● It's difficult to write tests for legacy code.
● Tests require a lot of time for maintenance.
● It can be challenging to test GUI code.
● Unit testing can't catch all errors.
Integration Testing
• Integration testing is a systematic technique for constructing the program structure while at the same time
conducting tests to uncover errors associated with interfacing.

• The objective is to take unit tested components and build a program structure that has been dictated by
design.

• Top-down Integration:
• Top-down integration testing is an incremental approach to construction of program structure. Modules are
integrated by moving downward through the control hierarchy, beginning with the main control module
(main program). Modules subordinate (and ultimately subordinate) to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.
Integration Testing (Cont…)
• Top-down Integration:
• The integration process is performed in a series of five steps:
• 1. The main control module is used as a test driver and stubs are substituted for all components directly
subordinate to the main control module.

• 2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are
replaced one at a time with actual components.

• 3. Tests are conducted as each component is integrated.


• 4. On completion of each set of tests, another stub is replaced with the real component.
• 5. Regression testing may be conducted to ensure that new errors have not been introduced.
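A minimal sketch of these steps (module names are hypothetical): the main control module is first exercised with stubs for its direct subordinates, then one stub is replaced by the real component and the tests are re-run:

```python
# Minimal sketch of top-down integration: the main control module starts
# with stubs for its direct subordinates, and the stubs are replaced one
# at a time by real components while tests are re-run. All module names
# and values are hypothetical.

def read_order_stub():
    return {"item": "book", "qty": 1}       # canned data

def price_order_stub(order):
    return 10.0                             # canned result

def price_order_real(order):
    return 9.99 * order["qty"]              # real subordinate component

def main_control(read_order, price_order):
    """Main control module: behaviour depends on the subordinates wired in."""
    order = read_order()
    return price_order(order)

# Step 1: the main module is tested with stubs for all direct subordinates.
assert main_control(read_order_stub, price_order_stub) == 10.0

# Steps 2-5: one stub is replaced by the real component and the tests are
# re-run (regression) to confirm no new errors were introduced.
assert main_control(read_order_stub, price_order_real) == 9.99
print("top-down integration steps exercised")
```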
Integration Testing (Cont…)
• Bottom-up Integration:
• Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e.,
components at the lowest levels in the program structure). Because components are integrated from the bottom up,
processing required for components subordinate to a given level is always available and the need for stubs is
eliminated.

• A bottom-up integration strategy may be implemented with the following steps:


• 1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software
subfunction.

• 2. A driver (a control program for testing) is written to coordinate test case input and output.
• 3. The cluster is tested.
• 4. Drivers are removed and clusters are combined moving upward in the program structure
Integration Testing (Cont…)
• Regression Testing:
• Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new
I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked
flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have
already been conducted to ensure that changes have not propagated unintended side effects.

• Regression testing may be conducted manually, by re-executing a subset of all test cases or using automated capture/playback
tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and
comparison.

• The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the change.
• Tests that focus on the software components that have been changed.
Integration Testing (Cont…)
• Smoke Testing:
• Smoke testing is an integration testing approach that is commonly used when “shrink-wrapped” software products are being
developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a
frequent basis.

• In essence, the smoke testing approach encompasses the following activities:


• 1. Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries,
reusable modules, and engineered components that are required to implement one or more product functions.

• 2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should
be to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.

• 3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration
approach may be top down or bottom up.
Validation Testing
• Validation testing ensures that the product actually meets the client's needs. It can also be defined as
demonstrating that the product fulfills its intended use when deployed in the appropriate environment.

• Software validation is achieved through a series of black-box tests that demonstrate conformity with
requirements. A test plan outlines the classes of tests to be conducted and a test procedure defines specific
test cases that will be used to demonstrate conformity with requirements. Both the plan and procedure are
designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved,
all performance requirements are attained, documentation is correct, and human-engineering and other
requirements are met.

• After each validation test case has been conducted, one of two possible conditions exist: (1) The function or
performance characteristics conform to specification and are accepted or (2) a deviation from specification is
uncovered and a deficiency list is created.
Validation Testing (Cont…)
• Configuration Review:
• An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements
of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support
phase of the software life cycle. The configuration review is sometimes called an audit.

• Alpha and Beta Testing:


• The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer
"looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled
environment.

• The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is
generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled
by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports
these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the entire customer base.
System Testing

• System testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that system elements
have been properly integrated and perform allocated functions.

• Recovery Testing:
• Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery
is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing
mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human
intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable
limits.
System Testing (Cont…)
• Security Testing:
• Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from
improper penetration. During security testing, the tester plays the role(s) of the individual who desires to
penetrate the system.

• Stress Testing:
• Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or
volume. For example, (1) special tests may be designed that generate ten interrupts per second, when one or
two is the average rate, (2) input data rates may be increased by an order of magnitude to determine how
input functions will respond, (3) test cases that require maximum memory or other resources are executed,
(4) test cases that may cause thrashing in a virtual operating system are designed, (5) test cases that may
cause excessive hunting for disk-resident data are created. Essentially, the tester attempts to break the
program.
System Testing (Cont…)

• Performance Testing:
• Performance testing is designed to test the run-time performance of software within the context of an
integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit
level, the performance of an individual module may be assessed as white-box tests are
conducted. Performance tests are often coupled with stress testing and usually require both hardware and
software instrumentation.
Types of Software Maintenance

• Software Maintenance is the process of modifying a software product after it has been delivered to the customer.
The main purpose of software maintenance is to modify and update software applications after delivery to correct
faults and to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
● Correct faults.
● Improve the design.
● Implement enhancements.
● Interface with other systems.
● Accommodate programs so that different hardware, software, system features, and telecommunications facilities can be used.
● Migrate legacy software.
● Retire software.
Types of Software Maintenance (Cont…)
● Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs observed while the system
is in use, or to enhance the performance of the system.
● Adaptive maintenance:
This includes modifications and updates when the customers need the product to run on new platforms, on new
operating systems, or when they need the product to interface with new hardware or software.
● Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to change different types of
functionalities of the system according to the customer demands.
● Preventive maintenance:
● This type of maintenance includes modifications and updates to prevent future problems of the software. It aims to
address problems which are not significant at this moment but may cause serious issues in the future.
Re-Engineering
● Software Re-engineering is a process of software development which is done to improve the
maintainability of a software system. Re-engineering is the examination and alteration of a system to
reconstitute it in a new form. This process encompasses a combination of sub-processes such as reverse
engineering, forward engineering, reconstruction, etc.
● Objectives of Re-engineering:
● To describe a cost-effective option for system evolution.
● To describe the activities involved in the software maintenance process.
● To distinguish between software and data re-engineering and to explain the problems of data
re-engineering.
Re-Engineering (Cont…)

Steps involved in Re-engineering:

1. Inventory Analysis
2. Document Reconstruction
3. Reverse Engineering
4. Code Reconstruction
5. Data Reconstruction
6. Forward Engineering
Re-Engineering (Cont…)
Advantages of Re-engineering:

● Reduced Risk: As the software already exists, the risk is lower than in new software development. Development problems, staffing problems and specification problems commonly arise in new software development.
● Reduced Cost: The cost of re-engineering is less than the cost of developing new software.
● Revelation of Business Rules: As a system is re-engineered, business rules that are embedded in the system are rediscovered.
● Better use of Existing Staff: Existing staff expertise can be maintained and extended to accommodate new skills during re-engineering.

Disadvantages of Re-engineering:

● Practical limits to the extent of re-engineering.
● Major architectural changes or radical reorganizing of the system's data management have to be done manually.
● A re-engineered system is not likely to be as maintainable as a new system developed using modern software engineering methods.
Reverse Engineering
● Reverse engineering is the process of extracting knowledge or design information from anything man-made
and reproducing it based on the extracted information. It is also called back engineering.
● Software reverse engineering is the process of recovering the design and the requirements specification of
a product from an analysis of its code. Reverse engineering is becoming important, since several existing
software products lack proper documentation, are highly unstructured, or have a structure that has degraded
through a series of maintenance efforts.
● Software reverse engineering is used in software design: it enables the developer or programmer to add new
features to existing software with or without knowing the source code.
● Reverse engineering is also useful in software testing: it helps testers study virus and other malware code.
