Test Case & Debugging MK 1.2

The document provides an overview of software testing methodologies, including white-box and black-box testing, with a focus on test case design, validation, and debugging processes. It outlines various testing techniques such as requirements-based testing, partition testing, structural testing, and emphasizes the importance of verification and validation in ensuring software quality. Additionally, it discusses debugging strategies and the concept of reliability in both hardware and software contexts.

Software Testing

[Figure: test case design draws on white-box methods and black-box methods, and on testing strategies.]

Test Case Design
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time

White-Box Testing
... our goal is to ensure that all statements and conditions have been executed at least once ...
Why Cover?
• logic errors and incorrect assumptions are inversely proportional to a path's execution probability
• we often believe that a path is not likely to be executed; in fact, reality is often counterintuitive
• typographical errors are random; it's likely that untested paths will contain some
Test case design
• Involves designing the test cases (inputs and
outputs) used to test the system.
• The goal of test case design is to create a set of
tests that are effective in validation and defect
testing.
• Design approaches:
– Requirements-based testing;
– Partition testing;
– Structural testing.
Black-Box Testing
[Figure: tests are derived from the requirements; input events are applied to the system and the resulting output is checked.]
Requirements based testing
• A general principle of requirements
engineering is that requirements should be
testable.
• Requirements-based testing is a validation
testing technique where you consider each
requirement and derive a set of tests for
that requirement.
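As an illustration, here is a minimal sketch (Python, pytest style) of tests derived from a single hypothetical requirement; the requirement text and the validate_password function are assumptions for illustration, not taken from this document.

# Hypothetical requirement: "The system shall reject passwords
# shorter than 8 characters." Each test below is derived directly
# from that requirement.

def validate_password(password: str) -> bool:
    # Stand-in implementation so the tests are runnable.
    return len(password) >= 8

def test_rejects_seven_character_password():
    # Just below the boundary stated in the requirement.
    assert validate_password("abcdefg") is False

def test_accepts_eight_character_password():
    # Exactly at the boundary stated in the requirement.
    assert validate_password("abcdefgh") is True

def test_rejects_empty_password():
    # A degenerate case covered by the same requirement.
    assert validate_password("") is False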
Partition testing
• Input data and output results often fall into
different classes where all members of a class are
related.
• Each of these classes is an equivalence partition
or domain where the program behaves in an
equivalent way for each class member.
• Test cases should be chosen from each partition.
Equivalence Partitioning
[Figure: the input domain partitioned into classes such as user queries, mouse picks, output formats, prompts, FK (function key) input, and data.]
Equivalence partitioning
[Figure: invalid inputs and valid inputs feed the system, which produces outputs.]
Sample Equivalence Classes
Valid data
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)

Invalid data
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
Equivalence partitions
[Figure: partitions on the number of input values (less than 4; between 4 and 10; more than 10), with test values 3, 4, 7, 10 and 11 at and around the boundaries.]
[Figure: partitions on the input values themselves (less than 10000; between 10000 and 99999; more than 99999), with test values 9999, 10000, 50000, 99999 and 100000.]
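The partitions in the figure translate directly into test cases. Below is a minimal sketch (Python, pytest style) assuming a hypothetical process routine that accepts between 4 and 10 values, each between 10000 and 99999; the routine itself is a stand-in so the tests are runnable.

import pytest

def process(values):
    # Stand-in routine: accepts 4 to 10 values in 10000..99999.
    if not 4 <= len(values) <= 10:
        raise ValueError("expected between 4 and 10 values")
    if any(not 10000 <= v <= 99999 for v in values):
        raise ValueError("each value must be in 10000..99999")
    return sorted(values)

# Partitions on the number of input values: <4, 4..10, >10.
@pytest.mark.parametrize("count", [3, 11])
def test_invalid_input_counts(count):
    with pytest.raises(ValueError):
        process([50000] * count)

@pytest.mark.parametrize("count", [4, 7, 10])
def test_valid_input_counts(count):
    assert process([50000] * count) == [50000] * count

# Partitions on the values themselves: <10000, 10000..99999, >99999.
@pytest.mark.parametrize("value", [9999, 100000])
def test_invalid_values(value):
    with pytest.raises(ValueError):
        process([value] * 4)

@pytest.mark.parametrize("value", [10000, 50000, 99999])
def test_valid_values(value):
    assert process([value] * 4) == [value] * 4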
Structural testing
• Sometimes called white-box testing.
• Derivation of test cases according to
program structure. Knowledge of the
program is used to identify additional test
cases.
• Objective is to exercise all program
statements (not all path combinations).
Structural testing
[Figure: test data is derived from the component code; the tests are run against the component and produce test outputs.]
Path testing
• The objective of path testing is to ensure that the
set of test cases is such that each path through the
program is executed at least once.
• The starting point for path testing is a program
flow graph that shows nodes representing
program decisions and arcs representing the flow
of control.
• Statements with conditions are therefore nodes in
the flow graph.
Flow graph
[Figure: flow graph of a binary search routine. Node 1 is the entry; the loop decision tests "while bottom <= top" and exits when bottom > top; inside the loop, "elemArray[mid] != key" is tested, branching into the cases elemArray[mid] = key, elemArray[mid] > key and elemArray[mid] < key; node 14 is the exit.]
Independent paths
• 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
• 1, 2, 3, 4, 5, 14
• 1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
• 1, 2, 3, 4, 6, 7, 2, 11, 13, 5, …
• Test cases should be derived so that all of these paths are executed (a sketch of such a routine and its test cases follows this list).
• A dynamic program analyser may be used to check that paths have been executed.
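For reference, here is a sketch of the kind of binary search routine the flow graph describes; the document does not give the code, so this reconstruction is an assumption, with test inputs chosen to exercise the independent paths listed above.

def binary_search(elem_array, key):
    # Assumed reconstruction of the routine behind the flow graph.
    # Returns the index of key in the sorted list elem_array, or -1.
    bottom = 0
    top = len(elem_array) - 1
    while bottom <= top:                 # loop decision in the graph
        mid = (bottom + top) // 2
        if elem_array[mid] == key:       # elemArray[mid] = key
            return mid
        elif elem_array[mid] > key:      # search the lower half
            top = mid - 1
        else:                            # search the upper half
            bottom = mid + 1
    return -1                            # reached when bottom > top

# Test cases chosen so each independent path is executed:
assert binary_search([2, 4, 6, 8], 6) == 2   # key found in the loop
assert binary_search([], 5) == -1            # loop body never entered
assert binary_search([2, 4, 6, 8], 1) == -1  # repeatedly narrows left
assert binary_search([2, 4, 6, 8], 9) == -1  # repeatedly narrows right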
Selective Testing
[Figure: a selected path through a flow graph containing a loop (executed fewer than 20 times) is exercised.]
Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors.
[Figure: number of modules plotted against V(G); modules in the high-V(G) range are more error prone.]
Basis Path Testing
First, we compute the cyclomatic complexity:

V(G) = number of simple decisions + 1
or
V(G) = number of enclosed areas + 1

In this case, V(G) = 4.
Basis Path Testing
Next, we derive the independent paths. Since V(G) = 4, there are four paths:

Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...,7,8

[Figure: flow graph with nodes 1 to 8.]

Finally, we derive test cases to exercise these paths.
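The same number can also be computed mechanically from the flow graph with the standard formula V(G) = E - N + 2 (edges minus nodes plus two), which agrees with the decision counts above. A minimal sketch in Python; the edge list is an assumed encoding of the graph shown.

def cyclomatic_complexity(edges):
    # V(G) = E - N + 2, where N is the number of distinct nodes.
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Assumed edge list for the flow graph above: decisions at
# nodes 2, 3 and 7, with a loop back from node 7 to node 2.
edges = [
    (1, 2), (2, 3), (2, 4),
    (3, 5), (3, 6),
    (4, 7), (5, 7), (6, 7),
    (7, 2), (7, 8),
]
assert cyclomatic_complexity(edges) == 4   # matches V(G) = 4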
Basis Path Testing Notes
• you don't need a flow chart, but the picture will help when you trace program paths
• count each simple logical test; compound tests count as 2 or more
• basis path testing should be applied to critical modules
Loop Testing
[Figure: the four classes of loops: simple loops, nested loops, concatenated loops and unstructured loops.]
Loop Testing: Simple Loops
Minimum conditions for simple loops:
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop (m < n)
5. (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
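These minimum conditions map naturally onto a parametrized test. A minimal sketch, assuming a hypothetical loop under test with a maximum of n = 10 allowable passes:

import pytest

def sum_first(values, limit=10):
    # Hypothetical loop under test: sums at most `limit` values,
    # so n = 10 is the maximum number of allowable passes.
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break
        total += v
    return total

# Pass counts from the checklist: skip, 1, 2, m < n, n-1, n, n+1.
@pytest.mark.parametrize("passes", [0, 1, 2, 5, 9, 10, 11])
def test_loop_pass_counts(passes):
    expected = min(passes, 10)  # passes beyond n never execute
    assert sum_first([1] * passes) == expected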
Loop Testing: Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another, treat each as a simple loop; otherwise, treat them as nested loops. For example, the loops are dependent when the final counter value of loop 1 is used to initialize loop 2 (see the sketch below).
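In code, the dependency mentioned in the example looks like the following sketch (assumed for illustration): the final counter value of the first loop initializes the second, so the two loops cannot be tested independently.

def dependent_concatenated_loops(data):
    # Loop 1: scan the leading non-negative values.
    i = 0
    while i < len(data) and data[i] >= 0:
        i += 1
    # Loop 2 starts where loop 1 finished; because its starting
    # point depends on loop 1, test the pair as nested loops.
    total = 0
    for j in range(i, len(data)):
        total += data[j]
    return total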
Alpha, Beta and Acceptance Testing
• The term acceptance testing is used when the software is developed for a specific customer. A series of tests is conducted to enable the customer to validate all requirements. These tests are conducted by the end user/customer and may range from ad hoc tests to a well-planned, systematic series of tests.
• The terms alpha and beta testing are used when the software is developed as a product for anonymous customers.
• Alpha tests are conducted at the developer's site by some potential customers. These tests are conducted in a controlled environment. Alpha testing may be started when the formal testing process is near completion.
• Beta tests are conducted by the customers/end users at their sites. Unlike alpha testing, the developer is not present. Beta testing is conducted in a real environment that cannot be controlled by the developer.
Verification and Validation
• Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
• Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies the specified requirements.
• Testing = Verification + Validation
Validation Testing
• It refers to testing the software as a complete product.
• This should be done after unit and integration testing.
• Alpha, beta and acceptance testing are simply various ways of involving the customer during testing.
• IEEE has developed a standard (IEEE Standard 1059-1993) entitled "IEEE Guide for Software Verification and Validation" to provide specific guidance about planning and documenting the tasks required by the standard, so that the customer may write an effective plan.
Validation testing improves the quality of a software product in terms of its functional capabilities and quality attributes.
Static Testing
• Static testing is a form of software testing
where the software isn't actually used. This
is in contrast to dynamic testing. It is
generally not detailed testing, but checks
mainly for the sanity of the code, algorithm,
or document. It is primarily syntax checking
of the code and/or manually reviewing the
code or document to find errors. This type
of testing can be used by the developer who
wrote the code, in isolation.
Static Testing
• Code reviews, inspections and
walkthroughs are also used.
• From the black box testing point of view,
static testing involves reviewing
requirements and specifications. This is
done with an eye toward completeness or
appropriateness for the task at hand. This is
the verification portion of
Verification and Validation.
• Even static testing can be automated. A static testing test suite consists of programs to be analyzed by an interpreter or a compiler that asserts the program's syntactic validity.
• Bugs discovered at this stage of development are less expensive to fix than those found later in the development cycle.
• The people involved in static testing are application developers, testers, and business analysts.
Code Review
• Code review is systematic examination
(often as peer review) of computer source
code intended to find and fix mistakes
overlooked in the initial development
phase, improving both the overall quality of
software and the developers' skills. Reviews
are done in various forms such as pair
programming, informal walkthroughs, and
formal inspections.
Code Inspection
• An inspection is one of the most common
sorts of review practices found in software
projects. The goal of the inspection is for all
of the inspectors to reach consensus on a
work product and approve it for use in the
project. Commonly inspected work
products include software requirements
specifications and test plans.
Code Inspection
• In an inspection, a work product is selected for review and a team is gathered for an inspection meeting to review the work product. A moderator is chosen to moderate the meeting. Each inspector prepares for the meeting by reading the work product and noting each defect. The goal of the inspection is to identify defects. In an inspection, a defect is any part of the work product that will keep an inspector from approving it. For example, if the team is inspecting a software requirements specification, each defect will be text in the document with which an inspector disagrees.
Walkthrough
• Walkthrough, or walk-through, is a term
describing the consideration of a process at an
abstract level.
• The term is often employed in the software
industry (see software walkthrough) to describe
the process of inspecting algorithms and source
code by following paths through the algorithms or
code as determined by input conditions and
choices made along the way.
Walkthrough
• The purpose of such code walkthroughs is generally
to provide assurance of the fitness for purpose of
the algorithm or code; and occasionally to assess
the competence or output of an individual or team.
• The term is employed in the theatrical and
entertainment industry to describe a rehearsal where
the major issues of choreography and interaction are
practiced and resolved, prior to more formal "dress
rehearsals".
Walkthrough
• The term is often used in the world of learning, where a tutor/trainer will walk through the process for the first time. It is regarded as a literal walk-through of the learning at the group's pace, ensuring that everyone takes in the new knowledge and skills.
• Something akin to walkthroughs is used in many forms of human endeavor, since the process is a thought experiment that seeks to determine the likely outcome(s) of an affair based on starting conditions and the effects of decisions taken.
Debugging: A Diagnostic Process
The Art of Debugging
• The goal of testing is to identify errors (bugs) in the program. The process of testing generates symptoms, and a program's failure is a clear symptom of the presence of an error.
• After getting a symptom, we begin to investigate the cause and location of that error. After identifying the location, we examine that portion of the program to identify the cause of the problem. This process is called debugging.
Debugging Techniques
Pressman explained a few characteristics of bugs that provide some clues:
1. "The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located in another part. Highly coupled program structures may complicate this situation.
2. The symptom may disappear (temporarily) when another error is corrected.
Debugging…
3. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
4. The symptom may be caused by a human error that is not easily traced.
5. The symptom may be a result of timing problems rather than processing problems.
6. The symptom may be intermittent. This is particularly common in embedded systems that couple hardware with software inextricably.
7. The symptom may be due to causes that are distributed across a number of tasks running on different processors".
The Debugging Process
[Figure: the debugging process. Test cases are executed and produce results; debugging traces suspected causes to identified causes; corrections are applied, followed by regression tests and new test cases.]
Debugging Effort
[Figure: total debugging effort splits into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.]
Symptoms & Causes
• symptom and cause may be geographically separated
• symptom may disappear when another problem is fixed
• cause may be due to a combination of non-errors
• cause may be due to a system or compiler error
• cause may be due to assumptions that everyone believes
• symptom may be intermittent
Debugging Techniques
• brute force / testing
• backtracking
• induction
• deduction
Debugging Techniques
Induction approach
• Locate the pertinent data
• Organize the data
• Devise a hypothesis
• Prove the hypothesis
Deduction approach
• Enumerate the possible causes or hypotheses
• Use the data to eliminate possible causes
• Refine the remaining hypothesis
• Prove the remaining hypothesis
Debugging: Final Thoughts
1. Don't run off half-cocked; think about the symptom you're seeing.
2. Use tools (e.g., a dynamic debugger) to gain more insight.
3. If at an impasse, get help from someone else.
4. Be absolutely sure to conduct regression tests when you do "fix" the bug.
Reliability
Basic Concepts
• There are three phases in the life of any hardware component: burn-in, useful life and wear-out.
• In the burn-in phase, the failure rate is quite high initially, and it decreases as defective components fail early and are removed.
• During the useful-life period, the failure rate is approximately constant.
• The failure rate increases in the wear-out phase due to the wearing out/aging of components. The best period is the useful-life period. The shape of this curve is like a "bath tub", and that is why it is known as the bath-tub curve. The bath-tub curve is shown in the figure.

HW Reliability
[Figure: the hardware reliability bath-tub curve, plotting failure rate against time.]
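For the useful-life period, where the failure rate is approximately constant, the standard exponential reliability model applies (a well-known result, not stated explicitly in this document): with constant failure rate λ, the probability of failure-free operation for time t is R(t) = e^(-λt), and the mean time between failures is MTBF = 1/λ.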
SW Reliability
• Software may be retired only if it becomes obsolete. Some of the contributing factors are given below:
• change in environment
• change in infrastructure/technology
• major change in requirements
• increase in complexity
• deterioration in the structure of the code, making it extremely difficult to maintain
• slow execution speed
• poor graphical user interfaces
SW Reliability
"Software reliability means operational reliability. Who cares how many bugs are in the program?"
As per the IEEE standard:
• "Software reliability is defined as the ability of a system or component to perform its required functions under stated conditions for a specified period of time."
• "It is the probability of failure-free operation of a program for a specified time in a specified environment."
• Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the inputs are free of error.
SW Reliability Assessment
There are four general ways of characterizing failure occurrences in time:
1. time of failure,
2. time interval between failures,
3. cumulative failures experienced up to a given time,
4. failures experienced in a time interval.
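A minimal sketch of these four characterizations in Python, computed from an assumed list of failure timestamps (the numbers are illustrative only, not from the document):

# Assumed failure timestamps, in hours of operation.
failure_times = [10.0, 24.0, 37.0, 55.0, 80.0]

# 1. Time of each failure.
times_of_failure = list(failure_times)

# 2. Time interval between successive failures.
intervals = [b - a for a, b in zip(failure_times, failure_times[1:])]

# 3. Cumulative failures experienced up to a given time t.
def cumulative_failures(t):
    return sum(1 for ft in failure_times if ft <= t)

# 4. Failures experienced in a time interval (t1, t2].
def failures_in_interval(t1, t2):
    return sum(1 for ft in failure_times if t1 < ft <= t2)

assert intervals == [14.0, 13.0, 18.0, 25.0]
assert cumulative_failures(40.0) == 3
assert failures_in_interval(20.0, 60.0) == 3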
Test Plan
1. Introduction
1.1. Test Plan Objectives
• 2. Scope
2.1. Data Entry
2.2. Reports
2.3. File Transfer
2.4. Security
• 3. Test Strategy
3.1. System Test
3.2. Performance Test
3.3. Security Test
3.4. Automated Test
3.5. Stress and Volume Test
3.6. Recovery Test
3.7. Documentation Test
3.8. Alpha & Beta Test
3.9. User Acceptance Test
• 4. Environment Requirements
4.1. Data Entry workstations
4.2 MainFrame
• 5. Test Schedule
• 6. Control Procedures
6.1 Reviews
6.2 Bug Review meetings
6.3 Change Request
6.4 Defect Reporting
• 7. Functions To Be Tested
• 8. Resources and Responsibilities
8.1. Resources
8.2. Responsibilities
• 9. Deliverables
• 10. Suspension / Exit Criteria
• 11. Resumption Criteria
• 12. Dependencies
12.1 Personnel Dependencies
12.2 Software Dependencies
12.3 Hardware Dependencies
12.4 Test Data & Database
• 13. Risks
13.1. Schedule
13.2. Technical
13.3. Management
13.4. Personnel
13.5 Requirements
• 14. Tools
• 15. Documentation
• 16. Approvals
