Testing
• For a relational expression of the form E1 <relational-operator> E2,
three tests are required: one that makes the value of E1 greater than
that of E2, one that makes it equal, and one that makes it less.
Contd..
• If the relational operator is incorrect and E1 and E2 are correct, then
these three tests guarantee the detection of the relational operator
error.
• BRO (branch and relational operator) testing is a technique that
guarantees the detection of branch and relational operator errors in a
condition, provided that all Boolean variables and relational operators
in the condition occur only once and have no common variables.
• The BRO strategy uses condition constraints for a condition C.
Contd..
• For a Boolean variable, B, we specify a constraint on the outcome of B that states
that B must be either true (t) or false (f). Similarly, for a relational expression, the
symbols >, =, < are used to specify constraints on the outcome of the expression.
• As an example, consider the condition C1: B1 & B2 where B1 and B2 are Boolean
variables.
• The condition constraint for C1 is of the form (D1, D2), where each of D1 and D2
is t or f.
• The value (t, f) is a condition constraint for C1 and is covered by a test that
makes the value of B1 true and the value of B2 false.
• The BRO testing strategy requires that the constraint set {(t, t), (f, t), (t, f)} be
covered by the executions of C1. If C1 is incorrect due to one or more Boolean
operator errors, at least one member of the constraint set will force C1 to fail.
Contd..
• As a second example, consider a condition of the form C2: B1 & (E3 = E4),
where B1 is a Boolean expression and E3 and E4 are arithmetic
expressions.
• A condition constraint for C2 is of the form (D1, D2), where D1 is t or f
and D2 is >, =, or <. By replacing (t, t) and (f, t) with (t, =) and (f, =),
respectively, and by replacing (t, f) with (t, <) and (t, >), the resulting
constraint set for C2 is {(t, =), (f, =), (t, <), (t, >)}.
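• As an illustration, a small Python sketch (not from the source; the operand values are arbitrary choices satisfying each constraint) that encodes C1 and C2 and runs one test per constraint:

```python
# Hypothetical sketch: covering the BRO constraint sets for
# C1: B1 & B2 and C2: B1 & (E3 = E4). Values are illustrative.

def c1(b1, b2):
    return b1 and b2

def c2(b1, e3, e4):
    return b1 and (e3 == e4)

# Constraint set {(t, t), (f, t), (t, f)} for C1: each tuple fixes
# the outcomes of B1 and B2 for one test case.
for b1, b2 in [(True, True), (False, True), (True, False)]:
    print(f"C1({b1}, {b2}) = {c1(b1, b2)}")

# Constraint set {(t, =), (f, =), (t, <), (t, >)} for C2: the second
# element constrains the relation between E3 and E4.
c2_tests = [
    (True, 5, 5),   # (t, =)
    (False, 5, 5),  # (f, =)
    (True, 3, 5),   # (t, <)
    (True, 7, 5),   # (t, >)
]
for b1, e3, e4 in c2_tests:
    print(f"C2({b1}, {e3}, {e4}) = {c2(b1, e3, e4)}")
```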
Contd..
• Data Flow Testing
• The data flow based testing method selects test paths of a program
according to the definitions and uses of the different variables in the
program.
• Consider a program P. For a statement numbered S of P, let
• DEF(S) = {X | statement S contains a definition of X}
• USES(S) = {X | statement S contains a use of X}
Contd..
• For the statement S: a = b + c;, DEF(S) = {a} and USES(S) = {b, c}. The definition of
variable X at statement S is said to be live at statement S1 if there exists a path from
statement S to statement S1 which does not contain any definition of X.
• The all-definitions criterion is a test coverage criterion requiring that an adequate test
set cover all definition occurrences: for each definition occurrence, the test paths
should include a path through which the definition reaches a use.
• The all-uses criterion requires that all uses of each definition be covered.
Contd..
• The definition-use chain (or DU chain) of a variable X is of the form [X,
S, S1], where S and S1 are statement numbers, such that X ∈ DEF(S),
X ∈ USES(S1), and the definition of X in statement S is live at
statement S1.
• One simple data flow testing strategy is to require that every DU
chain be covered at least once.
• Data flow testing strategies are especially useful for testing programs
containing nested if and loop statements.
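• A minimal sketch of these definitions, using hand-built DEF/USES tables for a
three-statement straight-line program (the program and tables are illustrative;
a real tool would derive them from the control flow graph):

```python
# Hypothetical sketch: DEF/USES sets and DU chains for a toy program.
#   S1: a = b + c
#   S2: d = a * 2
#   S3: print(a, d)

DEF  = {1: {"a"}, 2: {"d"}, 3: set()}
USES = {1: {"b", "c"}, 2: {"a"}, 3: {"a", "d"}}

# A DU chain [X, S, S1] pairs a definition of X at S with a use at S1
# reached without an intervening redefinition of X.
du_chains = []
statements = sorted(DEF)
for s in statements:
    for x in DEF[s]:
        for s1 in statements:
            if s1 <= s:
                continue
            if x in USES[s1]:
                du_chains.append((x, s, s1))
            if x in DEF[s1]:     # x redefined: the chain ends here
                break

# A simple data flow strategy requires a test path covering each chain:
print(du_chains)                 # [('a', 1, 2), ('a', 1, 3), ('d', 2, 3)]
```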
Contd..
• Loop Testing
• Loop testing is a white-box testing technique that focuses
exclusively on the validity of loop constructs.
• Four different classes of loops can be defined: simple loops,
concatenated loops, nested loops, and unstructured loops.
• Simple loops: The following set of tests can be applied
to simple loops, where n is the maximum number of
allowable passes through the loop.
• 1. Skip the loop entirely.
• 2. Only one pass through the loop.
• 3. Two passes through the loop.
• 4. m passes through the loop, where m < n.
• 5. n - 1, n, and n + 1 passes through the loop.
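• As an illustration, a minimal Python sketch of this schedule, assuming an
illustrative loop under test and an assumed bound of n = 10:

```python
# Hypothetical sketch: pass counts for simple-loop testing.
# `process` is an illustrative loop under test; n is the assumed
# maximum number of allowable passes.

def process(items):
    total = 0
    for item in items:          # the simple loop under test
        total += item
    return total

n = 10                          # assumed maximum number of passes
m = 5                           # some m < n
pass_counts = [0, 1, 2, m, n - 1, n, n + 1]

for count in pass_counts:
    items = list(range(count))  # force exactly `count` passes
    print(f"{count} passes -> total = {process(items)}")
```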
Contd..
• Nested loops: The steps are as follows:
1. Start at the innermost loop. Set all other loops to minimum
values.
2. Conduct simple loop tests for the innermost loop while holding the
outer loops at their minimum iteration parameter values.
3. Add other tests for out-of-range or excluded values.
4. Work outward, conducting tests for the next loop, but keeping all
other outer loops at minimum values and other nested loops to
"typical" values.
5. Continue until all loops have been tested.
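• A minimal sketch of steps 1, 2, and 4 for a two-level nested loop (the
function and its bounds are illustrative):

```python
# Hypothetical sketch: nested-loop testing, working outward from the
# innermost loop. The function under test is illustrative.

def grid_total(rows, cols):
    total = 0
    for r in range(rows):        # outer loop
        for c in range(cols):    # inner loop
            total += r + c
    return total

inner_counts = [0, 1, 2, 5, 9, 10, 11]   # simple-loop tests, n = 10

# Steps 1-2: hold the outer loop at its minimum (one pass) while the
# simple-loop tests run on the inner loop.
for cols in inner_counts:
    print(f"rows=1, cols={cols} -> {grid_total(1, cols)}")

# Step 4: work outward, keeping the inner loop at a "typical" value.
for rows in inner_counts:
    print(f"rows={rows}, cols=3 -> {grid_total(rows, 3)}")
```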
Contd..
• Concatenated loops : Concatenated loops can be tested using the
approach defined for simple loops, if each of the loops is independent
of the other. When the loops are not independent, the approach
applied to nested loops is recommended.
• Unstructured loops: Whenever possible, this class of loops should be
redesigned to reflect the use of the structured programming
constructs.
Smoke Testing
• Smoke testing is an integration testing approach carried out before
initiating system testing, to check whether system testing would be
meaningful or whether many parts of the software would fail.
• The idea behind smoke testing is that if the integrated program
cannot pass even the basic tests, it is not ready for more rigorous testing.
Contd..
• The smoke testing approach encompasses the following activities:
• Software components that have been translated into code are integrated
into a “build.” A build includes all data files, libraries, reusable modules,
and engineered components that are required to implement one or
more product functions.
• A series of tests is designed to expose errors that will keep the build
from properly performing its function. The intent should be to uncover
“show stopper” errors that have the highest likelihood of throwing the
software project behind schedule.
• The build is integrated with other builds and the entire product (in its
current form) is smoke tested daily. The integration approach may be
top down or bottom up.
Contd..
• For smoke testing, a few test cases are designed to check whether the basic
and major functionalities are working.
• For example, for a library automation system, the smoke tests may check
whether books can be created and deleted, whether member records can be
created and deleted, and whether books can be loaned and returned.
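• A minimal sketch of such smoke tests, using a hypothetical Library class as a
stand-in for the real build (all names are illustrative):

```python
# Hypothetical smoke tests for the library automation example.
# The Library class is a stand-in for the integrated build.

class Library:
    def __init__(self):
        self.books, self.members, self.loans = set(), set(), set()

    def add_book(self, b):
        self.books.add(b)

    def remove_book(self, b):
        self.books.discard(b)

    def add_member(self, m):
        self.members.add(m)

    def loan(self, b, m):
        self.loans.add((b, m))

    def return_book(self, b, m):
        self.loans.discard((b, m))

def smoke_test():
    lib = Library()
    lib.add_book("B1")
    assert "B1" in lib.books                 # books can be created
    lib.add_member("M1")
    assert "M1" in lib.members               # member records can be created
    lib.loan("B1", "M1")
    assert ("B1", "M1") in lib.loans         # books can be loaned
    lib.return_book("B1", "M1")
    assert not lib.loans                     # books can be returned
    lib.remove_book("B1")
    assert not lib.books                     # books can be deleted
    print("smoke test passed: the build is ready for further testing")

smoke_test()
```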
Regression testing
• Regression testing spans unit, integration, and system testing.
• Regression testing is the practice of running an old test suite after each change to the
system or after each bug fix to ensure that no new bug has been introduced due to the
change or the bug fix.
• However, if only a few statements are changed, then the entire test suite need not be
run; only those test cases that exercise the functions likely to be affected by the
change need to be run.
• Whenever software is changed, whether to fix a bug or to enhance or remove a feature,
regression testing is carried out.
Contd..
• The regression test suite (the subset of tests to be executed) contains
three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected
by the change.
• Tests that focus on the software components that have been changed.
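• A minimal sketch of selecting the last two classes of test cases, assuming an
illustrative mapping from test cases to the functions they exercise (real tools
derive such maps from coverage data):

```python
# Hypothetical sketch: selecting a regression test subset. The mapping
# below is illustrative; coverage tools can produce it automatically.

tests_to_functions = {
    "test_login":    {"authenticate", "load_profile"},
    "test_checkout": {"compute_total", "charge_card"},
    "test_search":   {"query_index"},
}

changed_functions = {"compute_total"}    # functions touched by the fix

# Keep every test that exercises at least one changed function.
regression_suite = [
    name for name, funcs in tests_to_functions.items()
    if funcs & changed_functions
]
print(regression_suite)                  # ['test_checkout']
```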
VALIDATION TESTING
• Validation succeeds when software functions in a manner that can be
reasonably expected by the customer.
• Reasonable expectations are defined in the Software Requirements
Specification— a document that describes all user-visible attributes of
the software.
Contd..
• Validation Test Criteria
• Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted and a test
procedure defines specific test cases that will be used to demonstrate
conformity with requirements.
• Both the plan and procedure are designed to ensure that all functional
requirements are satisfied, all behavioral characteristics are achieved, all
performance requirements are attained, documentation is correct, and
human-engineered and other requirements are met (e.g., transportability,
compatibility, error recovery, maintainability).
Contd..
• After each validation test case has been conducted, one of two
possible conditions exists:
(1) the function or performance characteristics conform to specification and are accepted,
or
(2) a deviation from specification is uncovered and a deficiency list is
created.
• A deviation or error discovered at this stage in a project can rarely be
corrected prior to scheduled delivery.
• It is often necessary to negotiate with the customer to establish a
method for resolving deficiencies.
Contd..
• Configuration Review
• An important element of the validation process is a configuration
review.
• The intent of the review is to ensure that all elements of the software
configuration have been properly developed, are cataloged, and have
the necessary detail to bolster the support phase of the software life
cycle.
Contd..
• Acceptance Tests
• When custom software is built for one customer, a series of
acceptance tests is conducted to enable the customer to validate all
requirements.
• Conducted by the end-user rather than software engineers, an
acceptance test can range from an informal "test drive" to a planned
and systematically executed series of tests.
• In fact, acceptance testing can be conducted over a period of weeks
or months, thereby uncovering cumulative errors that might degrade
the system over time.
Contd..
• Alpha and Beta Testing
• If software is developed as a product to be used by many customers, it is
impractical to perform formal acceptance tests with each one.
• Most software product builders use a process called alpha and beta testing
to uncover errors that only the end-user seems able to find.
• The alpha test is conducted at the developer's site by a customer. The
software is used in a natural setting with the developer "looking over the
shoulder" of the user and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.
Contd..
• The beta test is conducted at one or more customer sites by the end-user of the
software.
• Unlike alpha testing, the developer is generally not present.
• Therefore, the beta test is a "live" application of the software in an environment that
cannot be controlled by the developer.
• The customer records all problems (real or imagined) that are encountered during beta
testing and reports these to the developer at regular intervals.
• As a result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the entire
customer base.
SYSTEM TESTING
• System testing is actually a series of different tests whose primary purpose is to fully
exercise the computer-based system.
• Although each test has a different purpose, all work to verify that system elements have
been properly integrated and perform allocated functions.
• Following are the different types of system tests.
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
Contd..
• Recovery Testing
• Many computer-based systems must recover from faults and resume processing within a
pre-specified time.
• In some cases, a system must be fault tolerant; that is, processing faults must not cause
overall system function to cease.
• In other cases, a system failure must be corrected within a specified period of time or
severe economic damage will occur.
• Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
• If recovery is automatic (performed by the system itself), re-initialization, check-pointing
mechanisms, data recovery, and restart are evaluated for correctness.
• If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
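• A minimal sketch of a recovery test with checkpointing: the fault is injected
deliberately, and the test verifies that a restart resumes from the last
checkpoint (the file name and workload are illustrative):

```python
# Hypothetical sketch: force a failure mid-run and verify that
# processing resumes correctly from the last checkpoint.

import json, os, tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "recovery_demo.json")

def process(items, fail_at=None):
    """Sum items, checkpointing progress; optionally fail at an index."""
    start, total = 0, 0
    if os.path.exists(CHECKPOINT):                # restart from checkpoint
        with open(CHECKPOINT) as f:
            state = json.load(f)
        start, total = state["i"], state["total"]
    for i in range(start, len(items)):
        if i == fail_at:
            raise RuntimeError("injected fault")  # forced failure
        total += items[i]
        with open(CHECKPOINT, "w") as f:          # checkpoint each step
            json.dump({"i": i + 1, "total": total}, f)
    return total

items = [1, 2, 3, 4, 5]
if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)                         # clean slate
try:
    process(items, fail_at=3)                     # force the failure
except RuntimeError:
    pass
assert process(items) == sum(items)               # recovery is correct
os.remove(CHECKPOINT)
print("recovery test passed")
```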
Contd..
• Security Testing
• Any computer-based system that manages sensitive information or causes
actions that can improperly harm (or benefit) individuals is a target for improper
or illegal penetration.
• Penetration spans a broad range of activities: hackers who attempt to penetrate
systems for sport; disgruntled employees who attempt to penetrate for revenge;
dishonest individuals who attempt to penetrate for illicit personal gain.
• Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration.
Contd..
• During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.
• The tester may attempt to acquire passwords through external clerical means; may attack the system with
custom software designed to break down any defenses that have been constructed; may overwhelm the
system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during
recovery; or may browse through insecure data, hoping to find the key to system entry.
• Given enough time and resources, good security testing will ultimately penetrate a system.
• The role of the system designer is to make penetration cost more than the value of the information that will
be obtained.
Contd..
• Stress Testing
• Stress tests are designed to confront programs with abnormal situations.
• In essence, the tester who performs stress testing asks: "How high can we
crank this up before it fails?"
• Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume.
Contd..
• For example:
• 1. Special tests may be designed that generate ten interrupts per second,
when one or two is the average rate.
• 2. Input data rates may be increased by an order of magnitude to determine
how input functions will respond.
• 3. Test cases that require maximum memory or other resources are executed.
• 4. Test cases that may cause thrashing in a virtual operating system are
designed.
• 5. Test cases that may cause excessive hunting for disk-resident data are
created.
• Essentially, the tester attempts to break the program.
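• A minimal sketch of this escalation, driving an illustrative in-memory
component at geometrically increasing event rates:

```python
# Hypothetical stress-test sketch: demand events in abnormal volume
# and watch for failure. The ring-buffer "system" is illustrative.

from collections import deque
import time

buffer = deque(maxlen=10_000)          # component under stress

def handle(event):
    buffer.append(event)

rate = 10                              # start near the normal rate
while rate <= 1_000_000:               # "how high can we crank this up?"
    start = time.perf_counter()
    for i in range(rate):
        handle(i)
    elapsed = time.perf_counter() - start
    print(f"{rate:>8} events handled in {elapsed:.3f}s")
    rate *= 10
```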
Contd..
• A variation of stress testing is a technique called sensitivity testing.
• In some situations (the most common occur in mathematical algorithms), a very small
range of data contained within the bounds of valid data for a program may cause
extreme and even erroneous processing or profound performance degradation.
• Sensitivity testing attempts to uncover data combinations within valid input classes that
may cause instability or improper processing.
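• A classic instance is catastrophic cancellation in floating-point arithmetic;
the sketch below (illustrative, not from the source) shows a narrow band of
valid inputs degrading the naive quadratic formula while a numerically stable
rearrangement stays accurate:

```python
# Hypothetical sensitivity-test sketch: accuracy of the smaller root
# of x^2 + bx + 1 collapses for the naive formula as |b| grows.

import math

def naive_root(a, b, c):
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def stable_root(a, b, c):
    # Avoid subtracting nearly equal quantities.
    q = -(b + math.copysign(math.sqrt(b * b - 4 * a * c), b)) / 2
    return c / q

# Every b here is valid input, yet only large |b| triggers the problem.
for b in (10.0, 1e4, 1e8):
    r1 = naive_root(1.0, b, 1.0)
    r2 = stable_root(1.0, b, 1.0)
    print(f"b={b:<8g} naive={r1:.12g} stable={r2:.12g}")
```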
Contd..
• Performance Testing
• Performance testing is designed to test the run-time performance of software
within the context of an integrated system.
• Performance testing occurs throughout all steps in the testing process.
• Even at the unit level, the performance of an individual module may be assessed
as white-box tests are conducted.
• However, it is not until all system elements are fully integrated that the true
performance of a system can be ascertained.
Contd..
• Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.
• That is, it is often necessary to measure resource utilization (e.g.,
processor cycles) in an exacting fashion. External instrumentation can
monitor execution intervals, log events (e.g., interrupts) as they occur,
and sample machine states on a regular basis.
• By instrumenting a system, the tester can uncover situations that lead
to degradation and possible system failure.
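• A minimal sketch of software instrumentation, timing an illustrative workload
at increasing sizes with Python's time.perf_counter:

```python
# Hypothetical sketch: instrumenting an operation to measure how its
# run time grows with load. The workload is illustrative.

import time

def operation(n):
    return sorted(range(n, 0, -1))       # illustrative workload

for n in (1_000, 10_000, 100_000):
    start = time.perf_counter()
    operation(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>7}: {elapsed * 1e3:8.2f} ms")
```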