Software Testing
Verification and Validation
• Verification
• Have you built the product right?
• Does the product meet system specifications?
• Validation
• Have you built the right product?
• Does the product meet user expectations?
Failure, Error, and Fault
• Failure
• A failure is said to occur whenever the external behavior of a system does not conform to what is prescribed in the system specification.
• Error
• An error is a state of the system.
• If a corrective action is not taken by the system, an error state can lead to a system failure.
• Fault
• A fault is the cause of an error.
• A fault may remain undetected for a long period of time, until some event activates it (a small sketch of this chain follows).
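A minimal sketch of the fault-error-failure chain (the maxOfThree function is hypothetical, not from the slides): the fault is a wrong comparison; for some inputs it corrupts the internal state (an error), and when that state reaches the output the external behavior deviates from the specification (a failure).

#include <stdio.h>

/* Specification: return the maximum of the three arguments. */
int maxOfThree(int a, int b, int c)
{
    int max = a;
    if (b > max) max = b;
    /* Fault: the comparison below uses b instead of max. */
    if (c > b) max = c;        /* error: max may now hold the wrong value   */
    return max;                /* failure: only when the wrong value escapes */
}

int main(void)
{
    /* The fault stays dormant for many inputs ... */
    printf("%d\n", maxOfThree(1, 2, 3));   /* prints 3, as specified          */
    /* ... and is activated by this one: the erroneous state becomes a failure. */
    printf("%d\n", maxOfThree(5, 1, 3));   /* prints 3, specification says 5  */
    return 0;
}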
Software Testing
• Software testing is the process of examining the software product against its requirements.
• It is a process that involves verifying the product against its written requirements and checking that those requirements conform to user needs.
• Software testing is the process of executing the software product on test data and examining its output vis-à-vis the documented behavior.
Software Testing Objective
• The correct approach to testing a scientific theory is not to try to verify it, but to seek to refute it, that is, to prove that it has errors.
• The goal of testing is to expose latent defects in a software system before it is put to use.
• A software tester tries to break the system. The objective is to show the presence of a defect, not its absence.
• Testing cannot show the absence of defects. It only increases your confidence in the software.
• Exhaustive testing of software is not possible.
Limitations of Testing
• To prove that a formula or hypothesis is incorrect, all you have to do is show a single example for which the formula or theorem does not hold.
• On the other hand, millions of examples can be developed to support the hypothesis, but this will not prove that it is correct.
• You cannot test a program completely because:
• The domain of possible inputs is too large to test.
• There are too many possible paths through the program to test.
• According to discrete mathematics:
• To prove that a formula or hypothesis is incorrect, you have to show only one counterexample.
• To prove that it is correct, any number of examples is insufficient; you have to give a formal proof of its correctness (a small illustration follows).
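A small illustration of this point (not from the slides): Euler's polynomial n^2 + n + 41 yields primes for n = 0 through 39, so forty passing checks still prove nothing, while the single counterexample n = 40 refutes the claim.

#include <stdio.h>
#include <stdbool.h>

/* Trial-division primality check; sufficient for these small values. */
static bool isPrime(long n)
{
    if (n < 2)
        return false;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0)
            return false;
    return true;
}

int main(void)
{
    /* n^2 + n + 41 is prime for n = 0..39: forty supporting examples,
       yet the hypothesis "always prime" is still false. */
    for (long n = 0; n <= 40; n++) {
        long value = n * n + n + 41;
        if (!isPrime(value))
            printf("counterexample: n = %ld, value = %ld = 41 * 41\n", n, value);
    }
    return 0;
}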
Test Cases and Test Data
• In order to test a software application, it is necessary to generate test cases and test data to be used in the application.
• Test cases correspond to application functionality: the tester writes down the steps that should be followed to exercise certain functionality.
• Thus a test case involves:
• An input and output specification plus a statement of the function under test
• Steps to perform the function
• Expected results that the software application should produce
• Test data includes inputs that have been devised to test the system.
Example
• Test a function that compares two strings of characters stored in an array for equality.
Test Cases
A        B        Expected result
"cat"    "dog"    False
""       ""       True
"hen"    "hen"    True
"hen"    "heN"    False
""       "ball"   False
"cat"    ""       False
"HEN"    "hen"    False
"rat"    "door"   False
" "      " "      True
#include <stdbool.h>
#include <string.h>

// Returns true if the two character strings hold exactly the same characters.
bool isStringsEqual(char a[], char b[])
{
    // Strings of different lengths can never be equal.
    if (strlen(a) != strlen(b))
        return false;

    // Same length: the first mismatching character decides the result.
    for (size_t i = 0; i < strlen(a); i++)
        if (a[i] != b[i])
            return false;

    // Equal lengths and no mismatch (including two empty strings).
    return true;
}
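A minimal test driver (a sketch, assuming the isStringsEqual function above is in scope) that executes the table's test cases and reports a failure through a violated assertion:

#include <assert.h>

// Executes the test cases from the table against isStringsEqual.
int main(void)
{
    assert(isStringsEqual("cat", "dog")  == false);
    assert(isStringsEqual("",    "")     == true);
    assert(isStringsEqual("hen", "hen")  == true);
    assert(isStringsEqual("hen", "heN")  == false);
    assert(isStringsEqual("",    "ball") == false);
    assert(isStringsEqual("cat", "")     == false);
    assert(isStringsEqual("HEN", "hen")  == false);
    assert(isStringsEqual("rat", "door") == false);
    assert(isStringsEqual(" ",   " ")    == true);
    return 0;   // reaching this point means no test case exposed a failure
}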
Testing vs. Development
The Developer and Tester
Development                                      Testing
Development is a creative activity.              Testing is a destructive activity.
The objective of development is to show          The objective of testing is to show
that the program works.                          that the program does not work.
Description of Testing Phases
• Unit Testing: testing individual components independently of other components.
• Module Testing: testing a collection of dependent components; a module encapsulates related components so it can be tested independently.
• Subsystem Testing: testing a collection of modules to discover interfacing problems among interacting modules.
• System Testing: integrating subsystems into a system and testing this system as a whole.
Description of Testing Phases
• Acceptance Testing: validation against user expectations; usually it is done at the client's premises.
• Alpha Testing: acceptance testing for customized projects; in-house testing for products.
• Beta Testing: field testing of a product with potential customers who agree to use it and report problems before the system is released for general use.
Complete Testing?
• Complete, or exhaustive, testing means there are no undiscovered faults at the end of the testing phase.
• For most systems, complete testing is impossible because:
• The domain of possible inputs (valid and invalid) may be too large
• There may be too many combinations of input variables to test
• It may not be possible to test all the paths through a program
• The design may be very complex
• e.g., global and static variables may make the design very complex
• The possible execution contexts/environments may be large or complex
Complete Testing?

int smallProgram(int j)
{
    j = j - 1;      // fault: should be j = j + 1
    j = j / 30000;
    return j;
}

• Assume integers of size 16 bits.
• The lowest possible input value is -32768 and the highest is 32767, so even this trivial program has 65,536 possible inputs.
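A minimal sketch (not in the slides) that exhaustively checks the faulty smallProgram against its intended behavior over the whole 16-bit input domain; it uses an ordinary 32-bit int to simulate the 16-bit range.

#include <stdio.h>

// Faulty version from the slide.
int smallProgram(int j)
{
    j = j - 1;      // fault: should be j = j + 1
    j = j / 30000;
    return j;
}

int main(void)
{
    long exposed = 0;
    // Exhaustively try every value of the 16-bit input domain:
    // 65,536 test cases for a single trivial parameter.
    for (long v = -32768; v <= 32767; v++) {
        int expected = (int)((v + 1) / 30000);   // intended behavior
        if (smallProgram((int)v) != expected)
            exposed++;
    }
    // Only a handful of inputs (around +/-30000) actually expose the fault.
    printf("inputs that expose the fault: %ld of 65536\n", exposed);
    return 0;
}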
When to Stop Testing?
There is no single, valid, rational criterion for stopping. Furthermore, given any set of applicable criteria, how exactly each is weighted depends very much upon the product, the environment, the culture, and the attitude to risk.
Five basic criteria often used to
decide when to stop testing
• Coverage goals have been met
• Defect discovery rate has dropped below a previously defined
threshold
• Marginal cost of finding the "next" defect exceeds the expected loss
from that defect
• Project team reaches consensus that it is appropriate to release the
product
• Boss says, "Ship it!"
Test Adequacy Criteria
• How to measure test coverage?
• Define test adequacy criteria
• 100% statement coverage is satisfied if a test suite causes each statement of a program to execute (see the sketch below)
• Other test adequacy criteria:
• Specification-based criteria
• And many others...
• You have to carefully define the test adequacy criteria for your project
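A minimal sketch of the statement-coverage criterion (the absValue function is hypothetical, not from the slides): a positive input alone leaves one statement unexecuted; adding a negative input makes the suite execute every statement.

#include <assert.h>

// Hypothetical function used only to illustrate statement coverage.
int absValue(int x)
{
    if (x < 0)
        x = -x;        // executed only for negative inputs
    return x;
}

int main(void)
{
    assert(absValue(5) == 5);    // alone: "x = -x" never runs, coverage < 100%
    assert(absValue(-3) == 3);   // adding this input makes every statement execute
    return 0;
}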
Testing Techniques
One common goal across most testing techniques: reduce the number of test cases to a manageable level while still maintaining reasonable test coverage.
Software Testing Types
Any engineered product can be tested in one of two ways:
• External Testing
• If you know the functions of the product, you can design tests to:
• Determine whether each known function of the product is fully operational
• Find errors in each function
• Internal Testing
• If you know the internal workings of a product, you can design tests to ensure that:
• All internal operations are performed according to specifications
• All internal components/algorithms have been properly exercised
Software Testing Types…
• External Testing → Functional or Black Box Testing
• The component is treated as a black box
• Only its inputs and outputs are considered
• It is not concerned with how the inputs are transformed into outputs
• Internal Testing → Structural or White Box Testing
• Analyses the internal structure of the program
• Test cases are written to exercise these structures (a small sketch contrasting the two views follows)
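A small sketch contrasting the two views (applyDiscount is hypothetical, not from the slides): the black-box test comes only from the stated rule "orders of 100 or more get a 10-unit discount", while the white-box tests are added after reading the code so that both internal branches and the boundary of the condition are exercised.

#include <assert.h>

// Hypothetical function: orders of 100 or more get a flat discount of 10.
int applyDiscount(int amount)
{
    if (amount >= 100)
        return amount - 10;   // branch 1
    return amount;            // branch 2
}

int main(void)
{
    // Black-box view: derived from the stated input/output behavior only.
    assert(applyDiscount(200) == 190);

    // White-box view: after inspecting the code, add tests so that both
    // branches and the boundary of the condition are executed.
    assert(applyDiscount(50)  == 50);    // branch 2
    assert(applyDiscount(100) == 90);    // boundary of the condition
    return 0;
}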
Testing Techniques
• Black Box or Functional Testing
• White Box or Structural Testing
