
Software Engineering (PCCO6010T)

Credits : 03
Examination Scheme
Term Test : 15 Marks
Teacher Assessment : 20 Marks
End Sem Exam : 65 Marks
Total Marks : 100 Marks

Prerequisite:
1. Concepts of Object Oriented Programming & Methodology.
2. Knowledge of developing applications with front end & back end connectivity.
• Course Objectives: To provide knowledge of the standard software engineering discipline.
Unit-VI
05 Hrs.
• Software Testing Fundamentals: Strategic Approach to Software Testing, Unit Testing, Integration Testing, Verification, Validation Testing, System Testing, Test Strategies for WebApps.
• Software Testing Techniques: White Box Testing, Basis Path Testing, Control Structure Testing and Black Box Testing, TDD.


Software Testing Fundamentals
A STRATEGIC APPROACH TO SOFTWARE TESTING
• Testing is a set of activities that can be planned in advance and conducted systematically.
• For this reason, a template for software testing—a set of steps into which you can place specific test case design
techniques and testing methods—should be defined for the software process.

Characteristics of software testing strategies:
• To perform effective testing, you should conduct effective technical reviews. By doing this, many errors will be
eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of the entire computer-based system.
• Different testing techniques are appropriate for different software engineering approaches and at different
points in time.
• Testing is conducted by the developer of the software and (for large projects) an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

A software testing strategy should include both low-level testing (to check if small code segments work correctly)
and high-level testing (to ensure major system functions meet customer requirements).
Verification and Validation
• Verification refers to the set of tasks that ensure that software correctly implements
a specific function.
• Validation refers to a different set of tasks that ensure that the software that has
been built is traceable to customer requirements.
-Verification: “Are we building the product right?”
-Validation: “Are we building the right product?”
- SQA activities: technical reviews, quality and configuration audits, performance
monitoring, simulation, feasibility study, documentation review, database review,
algorithm analysis, development testing, usability testing, qualification testing,
acceptance testing, and installation testing.
Organizing for Software Testing
• People often misunderstand how software testing works. Some common misconceptions are:
1. That the developer of the software should do no testing at all.
2. That the software should simply be handed over to strangers who will test it, without any help or context.
3. That testers get involved with the project only when testing is about to begin.
All of these ideas are incorrect.

• The software developer is always responsible for testing the individual units (components) of the program,
ensuring that each performs the function or exhibits the behavior for which it was designed. In many cases, the
developer also conducts integration testing.
• The job of an Independent Test Group (ITG) is to remove the inherent problems associated with letting the builder test what has been built. ITG testing is impartial because the group has no vested interest in showing that the software works; its members are paid to find errors, and that is their main goal.
Software Testing Strategy—The Big Picture
- A strategy for software testing may also be viewed in the context of the spiral (Figure 17.1).
- Unit testing begins at the vortex of the spiral and concentrates on each unit (e.g., component,
class, or WebApp content object) of the software as implemented in source code.
- Testing progresses by moving outward along the spiral to Integration testing, where the focus is on
design and the construction of the software architecture.
- Validation testing, where requirements established as part of requirements modeling are validated
against the software that has been constructed.
- System testing, where the software and other system elements are tested as a whole.
- Testing is done step by step, starting small and gradually covering more areas, like moving in a
spiral that gets wider with each round.
• Considering the process from a procedural point of view, testing within the
context of software engineering is actually a series of four steps that are
implemented sequentially.
• Initially, tests focus on each component individually, ensuring that it functions
properly as a unit. Hence, the name unit testing.
• Unit testing exercises specific paths in a component’s control structure to ensure
complete coverage and maximum error detection.
• Integration testing addresses the issues associated with the dual problems of
verification and program construction.
• After the software has been integrated (constructed), a set of high-order tests is
conducted. Validation criteria must be evaluated. Validation testing provides
final assurance that software meets all informational, functional, behavioral, and
performance requirements.
• System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
Criteria for Completion of Testing
- “When are we done testing—how do we know that we’ve tested enough?” Sadly, there is no definitive answer to this question.
- “You’re never done testing; the burden simply shifts from you (the software engineer)
to the end user.” Every time the user executes a computer program, the program is
being tested.
- “You’re done testing when you run out of time or you run out of money.”
- The cleanroom software engineering approach suggests statistical use techniques that execute a series of tests derived from a statistical sample of all possible program executions by all users from a targeted population.
STRATEGIC ISSUES
A software testing strategy will succeed when software testers:
• Specify product requirements in a proven manner long before testing commences.
• State testing objectives explicitly.
• Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes “rapid cycle testing.”
• Conduct technical reviews to assess the test strategy and the test cases themselves.
• Develop a continuous improvement approach for the testing process.
TEST STRATEGIES FOR CONVENTIONAL SOFTWARE
Unit Testing
- Unit testing is the process of testing individual components or functions of a software program in isolation
to ensure they work correctly.
- Unit testing focuses verification effort on the smallest unit of software design—the software component or
module. Using the component-level design description as a guide, important control paths are tested to
uncover errors within the boundary of the module.
- Unit testing focuses on the internal processing logic and data structures within the boundaries of a component.
• Unit-test considerations
- The module interface is tested to ensure that information properly flows into and out
of the program unit under test.
- Local data structures are examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm’s execution.
- All independent paths through the control structure are exercised to ensure that all
statements in a module have been executed at least once.
- Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
- All error-handling paths are tested.
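As a concrete illustration, the sketch below (using Python's built-in unittest framework) exercises a boundary condition and an error-handling path; the compute_discount function and its limits are assumptions invented for this example, not part of the course material.

    import unittest

    def compute_discount(amount):
        # Hypothetical component under test: 10% discount on orders of 100 or more.
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return amount * 0.9 if amount >= 100 else amount

    class ComputeDiscountTest(unittest.TestCase):
        def test_just_below_boundary(self):
            self.assertEqual(compute_discount(99.99), 99.99)     # no discount below 100

        def test_at_boundary(self):
            self.assertAlmostEqual(compute_discount(100), 90.0)  # discount starts exactly at 100

        def test_error_handling_path(self):
            with self.assertRaises(ValueError):                  # error-handling path exercised
                compute_discount(-1)

    if __name__ == "__main__":
        unittest.main()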
Unit-test procedures
- A driver is a “main program” that accepts test-case data, passes such data to the component to be tested, and prints relevant results.
- Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested.
- A stub, or “dummy subprogram,” uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
- Unit testing is simplified when a component with high cohesion is designed.
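A minimal sketch of a driver and a stub in Python, assuming a hypothetical process_order component whose subordinate check_inventory module is not yet available:

    # Stub: a "dummy subprogram" standing in for the subordinate check_inventory module.
    def check_inventory_stub(item_id):
        print(f"stub: check_inventory entered with {item_id}")  # prints verification of entry
        return True                                             # minimal canned behavior

    # Component under test; the subordinate is passed in so the stub can replace it.
    def process_order(item_id, check_inventory=check_inventory_stub):
        return "accepted" if check_inventory(item_id) else "rejected"

    # Driver: a throwaway "main program" that feeds test-case data and prints results.
    if __name__ == "__main__":
        for item in ["A-100", "B-200"]:
            print(item, "->", process_order(item))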
Integration Testing
• Integration Testing in Software Engineering is a type of testing where individual modules (small parts of the
software) are combined and tested together as a group.
• The goal is to check whether the different modules or components work together correctly.
• Instead of testing a single unit (like a function or class) alone, integration testing focuses on how they interact
with each other.

Key Points:
• It comes after unit testing and before system testing.
• It finds problems like data flow errors, interface mismatches, or wrong communication between modules.
• It checks interfaces, communication, and data sharing between parts of the system.
1. Top-down
• Start with: The top-level module (main module).
• Then: Gradually add and test lower-level (child) modules one by one.
• If a lower module isn't ready yet, a stub (a fake module) is used temporarily.
• Stub: A dummy piece of code that simulates the behavior of missing lower modules.

2. Bottom-Up
• Start with: the bottom-level modules (leaf modules).
• Then: integrate and test them upward toward the main module.
• If an upper module isn't ready yet, a driver (temporary control program) is used to call lower modules.
• Driver: A dummy program that simulates the behavior of missing upper modules.
1. Top-down
• Modules are integrated by moving downward through the control hierarchy, beginning with the main
control module (main program).
• Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the
structure in either a depth-first or breadth-first manner.
• Depth-first integration integrates all components on a major control path of the program structure.
• Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.
• The top-down integration strategy verifies major control or decision points early in the test process.
2. Bottom-up integration
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific
software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Regression testing
• Regression Testing means retesting the software after changes (like bug fixes, new features, or updates) to
make sure that old features still work properly and nothing is broken.
• Whenever the software changes, regression testing checks that the existing functionalities are not damaged
by the new changes.

Why is Regression Testing Important?
• Software evolves all the time (updates, new features, bug fixes).
• Even a small change in one part can accidentally break another part.
• Regression testing protects the software from unexpected bugs after updates.

When is Regression Testing Done?
• After bug fixes.
• After adding new features.
• After performance improvements or configuration changes.
• After integration of new modules.
How is Regression Testing Performed?
• Run the old test cases (test scripts) again.
• Create new test cases if new functionalities are added.
• It can be done manually or using automation tools like Selenium, JUnit, etc.

Types of Regression Testing:
• Corrective regression: the specification has not changed; existing test cases are simply rerun.
• Progressive regression: new test cases are added when new features change the specification.
• Selective regression: only selected parts/modules are retested.
• Complete regression: the whole software is retested (when major changes happen).
• Each time a new module is added as part of integration testing, the software changes. New data flow paths
are established, new I/O may occur, and new control logic is invoked.
• These changes may cause problems with functions that previously worked flawlessly.
• In the context of an integration test strategy, regression testing is the reexecution of some subset of tests that
have already been conducted to ensure that changes have not propagated unintended side effects.
• Successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever
software is corrected, some aspect of the software configuration (the program, its documentation, or the
data that support it) is changed.
• Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce
unintended behavior or additional errors. Regression testing may be conducted manually, by reexecuting a
subset of all test cases or using automated capture/playback tools.

• Capture/playback tools enable the software engineer to capture test cases and results for
subsequent playback and comparison.
• The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
- A representative sample of tests that will exercise all software functions.
- Additional tests that focus on software functions that are likely to be affected by the change.
- Tests that focus on the software components that have been changed.
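A sketch of how these three classes might be organized with pytest markers; the component, tests, and marker names are assumptions for illustration (the markers would be registered in pytest.ini to suppress warnings):

    import pytest

    # Hypothetical component that was just changed: tax computed at 8% (was 5%).
    def tax(amount):
        return amount * 8 // 100      # integer arithmetic keeps assertions exact

    def price(amount):
        return amount + tax(amount)

    # Class 1: representative sample that exercises overall software function.
    @pytest.mark.representative
    def test_price_includes_tax():
        assert price(100) == 108

    # Class 2: functions likely to be affected by the change.
    @pytest.mark.affected
    def test_tax_is_proportional():
        assert tax(200) == 2 * tax(100)

    # Class 3: the component that was actually changed.
    @pytest.mark.changed
    def test_new_tax_rate():
        assert tax(100) == 8

    # Rerun only one class of the suite, e.g.:  pytest -m affected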
Smoke testing
• Smoke Testing is a basic, quick test done after a new build or update to check whether the main
functionalities of the software are working or not. If the basic functions fail, there’s no point in
doing deeper testing.

Why is it Called "Smoke" Testing?
• The name comes from hardware testing:
When you power on a new device, if it doesn’t catch fire or smoke, it’s good enough for deeper
inspection.
Similarly, in software, if the basic features don't crash, detailed testing can begin.

When is Smoke Testing Done?
• After a new software build is created.
• After major updates or patches.
• Before starting deeper testing like regression testing or system testing.
What Does Smoke Testing Check?
• Can the application launch properly?
• Can the user log in?
• Can you navigate to major screens/pages?
• Can you perform basic actions without errors?

• Example:
• Suppose you have a shopping app:
• Smoke testing would check if:
• App opens ✅
• Login works ✅
• Products load ✅
• Add to Cart works ✅
• If any of these basic features fail, testers reject the build, and developers fix it before any more
testing happens.
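A minimal sketch of such a smoke script in Python; the base URL, endpoints, and test account below are assumptions for a hypothetical shopping app:

    import requests  # third-party HTTP client (pip install requests)

    BASE = "http://localhost:8000"  # hypothetical app instance under test

    def smoke_test():
        # App opens: the home page must respond.
        assert requests.get(f"{BASE}/").status_code == 200
        # Login works: a known test account must authenticate.
        r = requests.post(f"{BASE}/login", data={"user": "test", "pw": "test"})
        assert r.status_code == 200
        # Products load: the catalog must return at least one item.
        assert requests.get(f"{BASE}/products").json() != []
        # Add to Cart works.
        assert requests.post(f"{BASE}/cart", json={"item": 1}).status_code in (200, 201)
        print("smoke test passed - build accepted for deeper testing")

    if __name__ == "__main__":
        smoke_test()  # any assertion failure here means the build is rejected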
Smoke testing
• Smoke testing is an integration testing approach that is commonly used when product software is developed.
• It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess the project on
a frequent basis.
• It encompasses the following activities:
1. Software components that have been translated into code are integrated into a build. A build includes all data files,
libraries, reusable modules, and engineered components that are required to implement one or more product
functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The
intent should be to uncover “showstopper” errors that have the highest likelihood of throwing the software project
behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The
integration approach may be top down or bottom up.
Benefits:
• Integration risk is minimized.
• The quality of the end product is improved.
• Error diagnosis and correction are simplified.
• Progress is easier to assess.
Strategic options
- The advantages of one strategy tend to result in disadvantages for the other strategy.
- The major disadvantage of the top-down approach is the need for stubs and the attendant testing difficulties that can be associated with them.
- The major disadvantage of bottom-up integration is that “the program as an entity does not exist until the last module is added.”
- Selection of an integration strategy depends upon software characteristics and, sometimes, the project schedule.
TEST STRATEGIES FOR OBJECT-ORIENTED SOFTWARE
Unit Testing in the OO Context
- When object-oriented software is considered, the concept of the unit changes. Encapsulation drives the definition of
classes and objects.
- class testing for OO software is driven by the operations encapsulated by the class and the state behavior of the class.
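A small sketch of class testing in this spirit, assuming a hypothetical Account class; the test exercises an encapsulated operation and then drives a state transition to check the class's state behavior.

    class Account:
        # Hypothetical class under test with simple open/closed state behavior.
        def __init__(self):
            self.state = "open"
            self.balance = 0

        def deposit(self, amount):
            if self.state != "open":
                raise RuntimeError("account not open")
            self.balance += amount

        def close(self):
            self.state = "closed"

    def test_operations_and_state():
        acct = Account()
        acct.deposit(50)              # exercise an encapsulated operation
        assert acct.balance == 50
        acct.close()                  # drive the open -> closed state transition
        try:
            acct.deposit(10)          # operation must be rejected in the closed state
            assert False, "deposit allowed on a closed account"
        except RuntimeError:
            pass

    test_operations_and_state()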
Integration Testing in the OO Context
• Thread-based testing integrates the set of classes required to respond to one input or event for the system. Each thread is integrated and tested individually.
• Use-based testing begins construction of the system by testing those classes (called independent classes) that use very few (if any) server classes. After the independent classes are tested, the dependent classes that use them are tested.
• Cluster testing: a cluster of collaborating classes (determined by examining the CRC and object-relationship models) is exercised by designing test cases that attempt to uncover errors in the collaborations.
TEST STRATEGIES FOR WEBAPPS
1. The content model for the WebApp is reviewed to uncover errors.
2. The interface model is reviewed to ensure that all use cases can be accommodated.
3. The design model for the WebApp is reviewed to uncover navigation errors.
4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.
5. Each functional component is unit tested.
6. Navigation throughout the architecture is tested.
7. The WebApp is implemented in a variety of different environmental configurations and is tested
for compatibility with each configuration.
8. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its
environment.
9. Performance tests are conducted.
10. The WebApp is tested by a controlled and monitored population of end users.
VALIDATION TESTING
- Validation testing begins at the culmination of integration testing, when individual components have been exercised, the software is completely assembled as a package, and interfacing errors have been uncovered and corrected.
Validation-Test Criteria
- Software validation is achieved through a series of tests that demonstrate conformity with requirements.
- A test procedure defines specific test cases that are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all content is accurate and properly presented, all performance requirements are attained, documentation is correct, and usability and other requirements are met.
- After each validation test case, one of two possible conditions exists:
(1) the function or performance characteristic conforms to specification and is accepted, or
(2) a deviation from specification is uncovered and a deficiency list is created.
Configuration Review
- The configuration review, sometimes called an audit, ensures that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to support activities.
Alpha and Beta Testing
- When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. Acceptance tests are conducted by the end user rather than by software engineers.
- Their purpose is to uncover errors that only the end user seems able to find.
- The alpha test is conducted at the developer’s site by a representative group of end users. The software is
used in a natural setting with the developer “looking over the shoulder” of the users and recording errors
and usage problems. Alpha tests are conducted in a controlled environment.
- The beta test is conducted at one or more end-user sites.
- The developer generally is not present. The beta test is a “live” application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals.
• A variation on beta testing, called customer acceptance testing, is
sometimes performed when custom software is delivered to a customer
under contract.
• The customer performs a series of specific tests in an attempt to uncover
errors before accepting the software from the developer.
• In some cases (e.g., a major corporate or governmental system) acceptance
testing can be very formal and encompass many days or even weeks of
testing.
SYSTEM TESTING
• System testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system.
• All of these tests work to verify that system elements have been properly integrated and perform allocated functions.
• Recovery Testing
- A system must be fault tolerant; that is, processing faults must not cause overall system function to cease.
- a system failure must be corrected within a specified period of time or severe economic damage will occur.
- Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery
is properly performed.
- If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data
recovery, and restart are evaluated for correctness.
- If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine
whether it is within acceptable limits.
• Security Testing
-Security testing attempts to verify that protection mechanisms built into a system will,
in fact, protect it from improper penetration.
- “The system’s security must, of course, be tested for invulnerability from frontal
attack—but must also be tested for invulnerability from flank or rear attack.”
- During security testing, the tester plays the role of the individual who desires to
penetrate the system.
• Stress Testing
- Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
- A variation of stress testing is a technique called sensitivity testing, which attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
• Performance Testing
- For real-time and embedded systems, software that provides required function but does not
conform to performance requirements is unacceptable.
- Performance testing is designed to test the run-time performance of software within the
context of an integrated system
• Deployment Testing
- Software must execute on a variety of platforms and under more than one operating system environment.
- Deployment testing, sometimes called configuration testing, exercises the software in each
environment in which it is to operate.
- Deployment testing also examines all installation procedures and specialized installation software (e.g., “installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users.
TESTING TACTICS
SOFTWARE TESTING FUNDAMENTALS
Testability- “Software testability is simply how easily [a computer program]
can be tested.” The following characteristics lead to testable software.
Operability. “The better it works, the more efficiently it can be tested.”
Controllability. “The better we can control the software, the more the testing
can be automated and optimized.”
Decomposability. “By controlling the scope of testing, we can more quickly
isolate problems and perform smarter retesting.”
Simplicity. “The less there is to test, the more quickly we can test it.”
(functional simplicity, structural simplicity, code simplicity)
Stability. “The fewer the changes, the fewer the disruptions to testing.”
Understandability. “The more information we have, the smarter we will test.”
Test Characteristics
- A good test has a high probability of finding an error.
- A good test is not redundant.
- A good test should be “best of breed.”
- A good test should be neither too simple nor too complex.
WHITE-BOX TESTING
• White-box testing, sometimes called glass-box testing, is a test-case design philosophy that uses the control structure described as part of component-level design to derive test cases.
• Using white-box testing methods, you can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at
least once,
(2) exercise all logical decisions on their true and false sides,
(3) execute all loops at their boundaries and within their operational bounds, and
(4) exercise internal data structures to ensure their validity.
BASIS PATH TESTING
- Basis path testing is a white-box testing technique.
- It enables the test-case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.
- Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
• Flow Graph Notation
- A flow graph depicts logical control flow using a simple notation in which each structured construct has a corresponding flow graph symbol.
- A flow graph node represents one or more procedural statements; a sequence of process boxes and a decision diamond can map into a single node.
- The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows.
- An edge must terminate at a node, even if the node does not represent any procedural statements.
- Areas bounded by edges and nodes are called regions.
• Independent Program Paths
- An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
- For the example flow graph (Figure 18.2b), a basis set of four independent paths is:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
- Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
- For the flow graph in Figure 18.2b, the cyclomatic complexity can be computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in Figure 18.2b is 4.
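A small sketch, assuming the flow graph is given as edge pairs, that computes V(G) both ways; the example graph (a while loop whose body contains an if-else) is an assumption for illustration, not Figure 18.2b itself:

    # Assumed flow graph: a while loop (test at node 2) whose body holds an
    # if-else (test at node 3); node 7 is the exit node.
    NODES = {1, 2, 3, 4, 5, 6, 7}
    EDGES = {(1, 2), (2, 3), (2, 7), (3, 4), (3, 5), (4, 6), (5, 6), (6, 2)}

    def v_by_edges(edges, nodes):
        return len(edges) - len(nodes) + 2          # V(G) = E - N + 2

    def v_by_predicates(edges, nodes):
        out_degree = {n: 0 for n in nodes}
        for src, _dst in edges:
            out_degree[src] += 1
        predicates = sum(1 for d in out_degree.values() if d > 1)
        return predicates + 1                       # V(G) = P + 1

    print(v_by_edges(EDGES, NODES))                 # 8 - 7 + 2 = 3
    print(v_by_predicates(EDGES, NODES))            # 2 predicate nodes + 1 = 3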
• Deriving Test Cases
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
For the example flow graph:
• V(G) = 6 regions
• V(G) = E - N + 2 = 17 edges - 13 nodes + 2 = 6
• V(G) = P + 1 = 5 predicate nodes + 1 = 6
Graph Matrices
- A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the
number of nodes on the flow graph.
- Each row and column corresponds to an identified node, and matrix entries correspond to
connections (an edge) between nodes.
- A simple example of a flow graph and its corresponding graph matrix is shown in the figure.
• Referring to the figure, each node on the flow graph is identified by numbers, while each edge is
identified by letters.
• A letter entry is made in the matrix to correspond to a connection between two nodes.
• For example, node 3 is connected to node 4 by edge b.
• The graph matrix is nothing more than a tabular representation of a flow graph.
• By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing.
• In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist). But link weights can be assigned other, more interesting properties:
1. The probability that a link (edge) will be executed.
2. The processing time expended during traversal of a link.
3. The memory required during traversal of a link.
4. The resources required during traversal of a link.
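A sketch of a Boolean connection matrix for the same assumed loop-with-if-else graph used above; summing (connections - 1) over the rows and adding 1 again yields the cyclomatic complexity:

    # Connection matrix: matrix[i][j] = 1 if an edge runs from node i+1 to node j+1.
    N = 7
    EDGES = [(1, 2), (2, 3), (2, 7), (3, 4), (3, 5), (4, 6), (5, 6), (6, 2)]
    matrix = [[0] * N for _ in range(N)]
    for src, dst in EDGES:
        matrix[src - 1][dst - 1] = 1

    # A row with two or more entries marks a predicate node;
    # V(G) = sum over rows of (connections - 1), plus 1.
    v_of_g = sum(max(sum(row) - 1, 0) for row in matrix) + 1
    print(v_of_g)  # 3, matching E - N + 2 = 8 - 7 + 2 from the previous sketch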
CONTROL STRUCTURE TESTING
i. Condition testing exercises the logical conditions contained in a program module. A simple condition is a Boolean variable or a relational expression.
ii. Data flow testing selects test paths of a program according to the locations of definitions and uses of variables in the program.
iii. Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined: simple loops, nested loops, concatenated loops, and unstructured loops.
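One common guideline for a simple loop with at most n passes is to test: skipping the loop entirely; one pass; two passes; m passes where m < n; and n-1, n, and n+1 passes. A sketch, assuming a hypothetical sum_first function as the unit containing the loop:

    def sum_first(values, limit):
        # Hypothetical unit: sums at most `limit` leading values (the loop under test).
        total = 0
        for i, v in enumerate(values):
            if i >= limit:
                break
            total += v
        return total

    n, data = 5, [1, 2, 3, 4, 5]          # the loop may execute at most n = 5 passes
    assert sum_first(data, 0) == 0        # skip the loop entirely
    assert sum_first(data, 1) == 1        # one pass
    assert sum_first(data, 2) == 3        # two passes
    assert sum_first(data, 3) == 6        # m passes, m < n
    assert sum_first(data, n - 1) == 10   # n - 1 passes
    assert sum_first(data, n) == 15       # n passes
    assert sum_first(data, n + 1) == 15   # n + 1 requested passes (loop is bounded at n)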
BLACK-BOX TESTING
• Black-box testing, also called behavioral testing, focuses on the functional requirements of the
software.
• Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external database access,
(4) behavior or performance errors, and
(5) initialization and termination errors.

• Graph-Based Testing Methods
- Software testing begins by creating a graph of important objects and their relationships and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
• a graph—a collection of nodes that represent objects, links that represent the relationships
between objects, node weights that describe the properties of a node (e.g., a specific data
value or state behavior), and link weights that describe some characteristic of a link. The
symbolic representation of a graph is shown in Figure 18.8a.
• Nodes are represented as circles connected by links that take a number of different forms.
• A directed link (represented by an arrow) indicates that a relationship moves in only one
direction.
• A bidirectional link, also called a symmetric link, implies that the relationship applies in
both directions.
• Parallel links are used when a number of different relationships are established between
graph nodes.
Equivalence Partitioning
- It is a black-box testing method that divides the input domain of a program into classes of data
from which test cases can be derived.
- equivalence class represents a set of valid or invalid states for input conditions. Typically, an input
condition is either a specific numeric value, a range of values, a set of related values, or a Boolean
condition.
- Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
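A sketch of guideline 1 for a hypothetical input condition, an integer age that must lie in the range 18 to 65: one valid class and two invalid classes, each represented by a single test value.

    # Input condition: a range (18 <= age <= 65)
    # -> guideline 1: one valid and two invalid equivalence classes.
    def is_valid_age(age):                # hypothetical validator for the example
        return 18 <= age <= 65

    valid_rep    = 30                     # representative of the valid class
    invalid_low  = 10                     # representative of the below-range class
    invalid_high = 80                     # representative of the above-range class

    assert is_valid_age(valid_rep)
    assert not is_valid_age(invalid_low)
    assert not is_valid_age(invalid_high)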
Boundary Value Analysis
• Boundary value analysis leads to a selection of test cases that exercise bounding
values.
• Rather than selecting any element of an equivalence class, BVA leads to the
selection of test cases at the “edges” of the class.
• Rather than focusing solely on input conditions, BVA derives test cases from the
output domain as well.
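Continuing the assumed 18-to-65 age range from the sketch above, BVA picks values exactly at and just beyond the edges of the class rather than arbitrary members:

    def is_valid_age(age):                # same hypothetical validator as above
        return 18 <= age <= 65

    for age in (18, 65):                  # exactly at the boundaries -> valid
        assert is_valid_age(age)
    for age in (17, 66):                  # just beyond the boundaries -> invalid
        assert not is_valid_age(age)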
Orthogonal Array Testing
• Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing.
• The orthogonal array testing method is particularly useful in finding region faults, an error category associated with faulty logic within a software component.
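As an illustration: with four parameters P1-P4 that each take three values, exhaustive testing needs 3^4 = 81 cases, while the standard L9 orthogonal array covers every pairwise combination of values in just 9 test cases (the parameters here are assumptions):

    # Standard L9 orthogonal array: 9 cases over four three-valued parameters P1..P4.
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]

    def run_case(p1, p2, p3, p4):
        print(f"exercise component with P1={p1} P2={p2} P3={p3} P4={p4}")

    for case in L9:
        run_case(*case)  # 9 runs instead of 3**4 = 81 exhaustive combinations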
TDD(Test-Driven Development)
• Requirements drive design, and design establishes a foundation for construction. This simple software engineering
reality works reasonably well and is essential as a software architecture is created. However, a subtle change can provide
significant benefit when component-level design and construction are considered.
• In test-driven development (TDD), requirements for a software component serve as the basis for the creation of a series
of test cases that exercise the interface and attempt to find errors in the data structures and functionality delivered by
the component. TDD is not really a new technology but rather a trend that emphasizes the design of test cases before
the creation of source code.
• The TDD process follows the simple procedural flow illustrated in Figure 31.3.
• Before the first small segment of code is created, a software engineer creates a test to exercise the code (to try to make
the code fail). The code is then written to satisfy the test.
• If it passes, a new test is created for the next segment of code to be developed. The process continues until the
component is fully coded and all tests execute without error.
• However, if any test succeeds in finding an error, the existing code is refactored (corrected) and all tests created to that
point are reexecuted.
• This iterative flow continues until there are no tests left to be created, implying that the component meets all
requirements defined for it.
• During TDD, code is developed in very small increments (one subfunction at a time), and no code is written until a test
exists to exercise it. You should note that each iteration results in one or more new tests that are added to a regression
test suite that is run with every change. This is done to ensure that the new code has not generated side effects that
cause errors in the older code.
• In TDD, tests drive the detailed component design and the resultant
source code.
• The results of these tests cause immediate modifications to the
component design (via the code), and more important, the resultant
component (when completed) has been verified in a stand-alone fashion.
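A sketch of one TDD iteration in Python's unittest (the slugify component and its test are assumptions for illustration): the test is written first and fails, just enough code is then written to make it pass, and the test joins the regression suite that is rerun with every change.

    import unittest

    # Step 2: just enough code to satisfy the test below.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    # Step 1: this test was written first and failed before slugify existed.
    class SlugifyTest(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("  Hello World "), "hello-world")

    if __name__ == "__main__":
        unittest.main()  # the growing suite is rerun after every small increment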
