
Lecture 16

Program Testing 03-04-2025

Test Levels

• Unit testing
 Test each module (unit, or component) independently
 Mostly done by the developers of the modules

• Integration and system testing
 Test the system as a whole
 Often done by a separate testing or QA team

• Acceptance testing
 Validation of system functions by the customer
Types of System Testing

• Based on test types:
 Functionality test
 Performance test

• Based on who performs testing:
 Alpha
 Beta
 Acceptance test
Performance Test

• Determines whether a system or subsystem meets its non-functional requirements, such as:
 Response time
 Throughput
 Usability
 Stress
 Recovery
 Configuration
 Safety
Overview of Testing Activities

[Figure: each subsystem's code, checked against its module spec, goes through a unit test to become a tested subsystem; the tested subsystems then pass through the integration test to form integrated subsystems, and a functional test of the integrated subsystems yields the functioning system.]
User Acceptance Testing

• The user determines whether the system fulfills their requirements
 Accepts or rejects the delivered system based on the test results.
Who Tests Software?
• Programmers:
 Unit testing
 Test their own or other programmers’ code

• Users:
 Usability and acceptance testing
 Volunteers typically test beta versions

• Quality assurance personnel:
 All types of testing except unit and acceptance
 Develop test plans and strategy
Activities Undertaken During System and Integration Testing

• Tester:
 Design the test suite
 Run the test cases
 Check results to detect failures
 Prepare the failure list

• Developer:
 Debug to locate errors
 Correct the errors
Perspectives of Different Testers
Ack: www.zelger.org

Developer: understands the system, but will test "gently", and is driven by "delivery".
Independent tester: must learn about the system, but will attempt to break it, and is driven by quality.
Verification versus Validation

• Verification is the process of determining:
 Whether the output of one phase of development conforms to that of its previous phase.

• Validation is the process of determining:
 Whether a fully developed system conforms to its SRS document.
Verification versus Validation

• Verification is concerned with phase containment of errors:
 Whereas the aim of validation is that the final product be error-free.
Verification and Validation Techniques

• Review
• Simulation
• Unit testing
• Integration testing
• System testing
Verification vs. Validation

Verification: Are you building it right? Checks whether an artifact conforms to its previous artifact. Done by developers. Static and dynamic activities: reviews, unit testing.

Validation: Have you built the right thing? Checks the final product against the specification. Done by testers. Dynamic activities: execute the software and check it against the requirements.
The Effective Tester!

Test How Long?

• One way: plot the number of bugs detected against testing time, and stop when the detection rate levels off.

[Figure: curve of # bugs found vs. time, flattening out.]

• Another way:
 Seed bugs… run test cases
 See if all (or most) of the seeded bugs are getting detected
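Bug seeding can also be turned into a quantitative stopping estimate. Below is a minimal sketch of the standard error-seeding calculation (often attributed to Mills); all numbers are made-up illustrations: if testing reveals s of S seeded bugs and n real bugs, the estimated total number of real bugs is n × S / s.

```c
#include <stdio.h>

/* Error-seeding estimate: if a test effort finds s of S seeded bugs
 * and n real bugs, the estimated total number of real bugs is
 * n * S / s, so about n * S / s - n real bugs remain undetected. */
int main(void) {
    int seeded_total = 50;  /* S: bugs deliberately seeded (illustrative) */
    int seeded_found = 40;  /* s: seeded bugs the test cases detected */
    int real_found = 120;   /* n: real bugs the same test cases detected */

    double total = (double)real_found * seeded_total / seeded_found;
    printf("estimated real bugs: %.0f\n", total);               /* 150 */
    printf("estimated remaining: %.0f\n", total - real_found);  /* 30 */
    return 0;
}
```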
On a lighter note
3 Test Types

• Black box: test cases derived from the requirements; actual output is compared with the required output.
• Gray box: test cases derived from key design elements; confirmation of expected behavior.
• White box: test cases derived from the code; confirmation of expected behavior.
Why Thorough Testing is Hard?

• The input data domain is vast…

• There are too many paths through the program…
 Further complexity: many paths are infeasible --- we cannot even determine whether all paths have been covered during testing.
What’s So Hard About Testing?

• Consider int check_equal(int x, int y)

• Assuming 64-bit integers:
 Input space = 2^64 × 2^64 = 2^128 pairs

• Assuming it takes 10 seconds to key in an integer pair:
 Entering all possible values would take on the order of 10^32 years --- far longer than the age of the universe!

Automatic testing has its own problems!
How to Test Systematically?

• Most testing strategies are:
 Based on either explicit or implicit model construction.

 Models are known to reduce a large problem to a manageable size.

 Examples: data models, CFG, state model, SDG, regular expressions, etc.
Pesticide Effect

• Errors that escape a fault detection technique:
 Cannot be detected by further applications of that technique…

[Figure: bugs passing through a series of four filters; each filter removes some bugs and the rest slip through to the next.]
Capers Jones Rule of Thumb

• Each software review, inspection, or test step finds about 30% of the bugs present.

In IEEE Computer, 1996
Pesticide Effect: Example

• Assume 1000 bugs were present initially and we use four testing techniques:
 Each detects only 30% of the bugs present.
 How many bugs would remain after testing?
 1000 × (0.7)^4 ≈ 240 bugs
Quiz

• Testing is carried out over 5 testing stages.
• Each stage can detect 60% of the defects present at the start of that stage.
• Immediately after each testing stage and before the next testing stage starts:
 Bug fixes are carried out.
 Each bug fix has a 15% chance of creating a new bug.
• Assume the initial number of bugs in the program to be 1000.
 How many bugs would remain after all five testing stages are complete?
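One way to attack this quiz (and to reproduce the pesticide-effect example above) is to model the expected bug count stage by stage. The sketch below treats the 15% chance of a bad fix as an expected fraction of new bugs per fix, which is an assumption about how the question is meant to be read.

```c
#include <stdio.h>

/* Expected-value model of staged testing: each stage detects a fixed
 * fraction of the bugs present; detected bugs are fixed, and each fix
 * creates a new bug with some probability. With regression = 0.0,
 * detect = 0.3 and four stages this reproduces the pesticide-effect
 * example: 1000 * 0.7^4. */
int main(void) {
    double bugs = 1000.0;      /* initial number of bugs */
    double detect = 0.60;      /* fraction detected at each stage */
    double regression = 0.15;  /* new bugs introduced per bug fixed */

    for (int stage = 1; stage <= 5; stage++) {
        double found = bugs * detect;
        bugs = bugs - found + found * regression;
        printf("after stage %d: %.1f bugs\n", stage, bugs);
    }
    return 0;
}
```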
Dilbert: Scott Adams
Quiz

When is verification undertaken in the waterfall model?
When is testing undertaken in the waterfall model?
When is validation undertaken in the waterfall model?

Feasibility Study
Req. Analysis
Design
Coding
Testing
Maintenance
Quiz: Solution

1. When is verification undertaken in the waterfall model?

2. When is testing undertaken in the waterfall model?

Ans: Coding phase and Testing phase

3. When is validation undertaken in the waterfall model?

[Figure: waterfall phases Feasibility Study → Req. Analysis → Design → Coding → Testing → Maintenance, with the phase-to-phase checks up to Coding bracketed as Verification and the Testing phase bracketed as Validation.]
How Many Latent Errors?

• Several independent studies [Jones], [Schroeder] conclude:
 85% of errors get removed by the end of a typical manual testing process.
 Too bad?
 All practical test techniques are heuristics… they help, but do not guarantee success.
Finding 85% of flaws is pretty good?

• Usually an acceptable level of quality:
 Except for critical applications where life is at stake!

“Relax, our engineers found and have corrected 85% of the flaws.”
Testing Strategy

• A test strategy primarily addresses:
 Which types of tests to deploy?
 How much effort to devote to each type of testing?

• Black-box: usage-based testing (based on customers’ actual usage patterns)

• White-box testing can be guided by black-box testing results
Test Plan Document: Overall Structure

Testing Strategy
• Part of any test strategy is to decide which tests to deploy:
1. Collect information to address and choose a test model:
 • Usage-based testing (based on customers’ actual usage patterns)
 • White-box testing (guided by black-box testing results)
 • Mutation testing (programming language, past experience, etc.)
2. Analyze and construct the appropriate test model & techniques
3. Validate the model and incrementally improve it for continuous usage
Sample 1: Model from “past” data

Percentage breakdown of problem discovery and fix:
 Reviews: 40%
 Unit test: 25%
 Integration test: 15%
 System test: 10%
 Customer reported: 10%

Quiz: How would you use this for planning test effort?
Sample 2: Model from “past” data

Percentage breakdown of problem discovery and fix:
 Technique 1 test: 50%
 Technique 2 test: 30%
 Technique 3 test: 10%
 Customer reported: 10%

Quiz: How would you use this for planning test effort?
Sample 3: Distribution of Error-Prone Areas

[Bar chart: customer-reported bugs for Release 1, broken down by features F1–F6.]

Quiz: How would you use this for planning Release 2 testing?
Test Document: Recording the Test Suite

 Test case number
 Test case author
 A general description of the test purpose
 Pre-condition: may include test results of other prerequisite modules
 Test inputs
 Expected outputs (if any)
 Test case history:
  Test execution date
  Test execution person
  Test execution result(s): pass/fail
  If failed: failure information, fix status
Test - Human Resources (QA Professionals)

• Test planning (experienced people)
• Test scenario and test case design (experienced people, educated in the test discipline)
• Test execution (semi-experienced to inexperienced)
• Test result analysis (experienced people)
• Test tool support (experienced people)
• May include external people:
 Users
 Industry experts (for COTS)
Unit Testing

When and Why of Unit Testing?

• When is unit testing carried out?
• Ans: After the coding of a unit is complete and it compiles successfully.

• Unit testing reduces debugging effort substantially.
Why unit test?

• Without unit testing:
 Errors become more difficult to track down.
 Debugging costs increase substantially…

• With unit testing:
 The scope is smaller, making it easier to locate and fix errors…
Unit Testing --- Basic Idea

[Figure: a driver calls the procedure under test, the procedure under test calls a stub, and both the driver and the stub provide access to nonlocal variables.]
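A minimal sketch of this idea in C (all names are hypothetical): the unit under test normally calls a not-yet-implemented module and reads a nonlocal variable, so the test harness supplies a stub for the callee and a driver that sets up the environment, invokes the unit, and checks the result.

```c
#include <assert.h>
#include <stdio.h>

int discount_rate;  /* nonlocal variable the unit depends on */

/* Stub: stands in for a pricing module that is not yet implemented.
 * It returns a fixed, predictable value instead of real logic. */
int lookup_base_price(int item_id) {
    (void)item_id;
    return 100;
}

/* Unit under test: computes the discounted price of an item. */
int discounted_price(int item_id) {
    int base = lookup_base_price(item_id);
    return base - (base * discount_rate) / 100;
}

/* Driver: sets up nonlocal state, calls the unit, checks the output. */
int main(void) {
    discount_rate = 20;
    assert(discounted_price(42) == 80);
    printf("unit test passed\n");
    return 0;
}
```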
Quiz

• Unit testing can be considered which of the following types of activities?
 Verification?
 Validation?
Design of Unit Test Cases

• There are essentially three main approaches to designing test cases:
 Black-box approach
 White-box (or glass-box) approach
 Grey-box approach
Black-Box Testing

• Test cases are designed using only the functional specification of the software:
 Without any knowledge of the internal structure of the software.

[Figure: the software as a box mapping input to output.]

• Black-box testing is also known as functional testing.
White-box Testing

• To design test cases:
 Knowledge of the internal structure of the software is necessary.

• White-box testing is also called structural testing.
Black-Box Testing

• Equivalence class partitioning
 Scenario coverage
 Cause-effect (decision table) testing
 Combinatorial testing

• Special value (risk-based) testing
 Boundary value testing
White-Box Testing
• Several white-box testing strategies have become very popular:
 Statement coverage
 Branch coverage
 Path coverage
 Condition coverage
 MC/DC coverage
 Mutation testing
 Data flow-based testing
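To make the first two criteria concrete, here is a small made-up example: the single input x = -5 executes every statement of absolute (including the then-branch), but branch coverage additionally requires an input such as x = 5 that exercises the false edge of the if.

```c
#include <assert.h>

/* A unit with one decision point (hypothetical example). */
int absolute(int x) {
    int result = x;
    if (x < 0)
        result = -x;   /* only a negative input executes this line */
    return result;
}

int main(void) {
    assert(absolute(-5) == 5);  /* alone achieves full statement coverage */
    assert(absolute(5)  == 5);  /* needed for branch coverage: false edge */
    return 0;
}
```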
Evolution of Testing Tools
 1960–1990: manual test design and manual execution
 1990–2000: manual test design, automated execution (capture and replay, scripting)
 2000–: automated test design and execution; AI-based testing solutions
Testing Tools

 GUI testing: Abbot/JFCUnit/Marathon; FIT/Fitnesse (high level)
 Web UI testing: HttpUnit/Canoo/Selenium
 Business logic: Cactus
 Performance and load testing: JMeter/JUnitPerf
 Unit and persistence layer (low level): JUnit/SQLUnit/XMLUnit
Capture/Playback Tools
• Automated test tools mimic the actions of the tester.
• During testing, the tester uses the keyboard and mouse to perform specific tests or actions.
• The testing tool captures all keystrokes and results:
 Baselined in a test script.
• During test playback, scripts compare the latest outputs with the previous baseline.
• Test tools typically provide for non-intrusive testing:
 They interact with the “application-under-test” as if the test tool were not present.
QTP: Record and Playback

• Steps to follow:
1. Open QuickTest.
2. Open a test:
 To create a new test, click New.
 To open an existing test, click Open.
3. Click the Record button. If you are recording for the first time, the Record and Run Settings dialog box opens.
Playback

[Screenshot: the recorded script replaying the captured keystrokes into the application.]
Design of Test Cases

• Exhaustive testing of any non-trivial system is impractical:
 The input data domain is extremely large.

• Design an optimal test suite:
 Of reasonable size, and
 Uncovering as many errors as possible.
Design of Test Cases

• If test cases are selected randomly:
 Many test cases would not contribute to the significance of the test suite,
 They would not detect errors not already detected by other test cases in the suite.

• The number of test cases in a randomly selected test suite:
 Is not an indication of the effectiveness of the testing.
Design of Test Cases

• Testing a system using a large number of randomly selected test cases:
 Does not mean that most errors in the system will be uncovered.

• Consider a simple example:
 Find the maximum of two integers, x and y.
Design of Test Cases

• The code has a simple programming error:

if (x > y) max = x;
else max = x;   /* bug: should be max = y */

• The test suite {(x=3,y=2), (x=2,y=3)} can detect the error,
 Whereas the larger test suite {(x=3,y=2), (x=4,y=3), (x=5,y=1)} does not detect it.
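To see concretely why suite size is not effectiveness, here is a small C harness around the buggy code from the slide (run_case and the printed report are illustrative scaffolding): every test with x > y passes even on the buggy version, so only a case with y > x, such as (x=2, y=3), exposes the fault.

```c
#include <stdio.h>

/* Buggy unit from the slide: the else-branch should assign y, not x. */
int buggy_max(int x, int y) {
    int max;
    if (x > y) max = x;
    else max = x;        /* deliberate error */
    return max;
}

void run_case(int x, int y, int expected) {
    int got = buggy_max(x, y);
    printf("max(%d,%d) = %d  [%s]\n", x, y, got,
           got == expected ? "pass" : "FAIL");
}

int main(void) {
    /* Two-case suite: detects the bug (the second case fails). */
    run_case(3, 2, 3);
    run_case(2, 3, 3);
    /* Larger suite: every case has x > y, so the bug slips through. */
    run_case(3, 2, 3);
    run_case(4, 3, 4);
    run_case(5, 1, 5);
    return 0;
}
```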
Design of Test Cases

• Systematic approaches are required to design an optimal test suite:
 Each test case in the suite should detect different errors.
Design of Test Cases

• There are essentially three main approaches to designing test cases:
 Black-box approach
 White-box (or glass-box) approach
 Grey-box approach
Black-box Testing
Black Box Testing

• Considers the software as a black box:
 Test data are derived from the specification
 No knowledge of the code is necessary

• Also known as:
 Data-driven or input/output-driven testing

[Figure: the system as a box mapping input to output.]

• The goal is to achieve the thoroughness of exhaustive testing:
 With much less effort!
Black-Box Testing

• Equivalence class partitioning
 Scenario coverage
 Combinatorial testing
 Cause-effect (decision table) testing
 Pair-wise testing

• Special value (risk-based) testing
 Boundary value testing
Black-Box Testing: Equivalence Class Testing
Why Define Equivalence Classes?
[Figure: the input domain partitioned into equivalence classes E1, E2, E3.]

• Premise:
 Testing the code with any one representative value from an equivalence class is:
 As good as testing with any other value from that equivalence class.
Equivalence Class Partitioning

• The program’s input value space is:
 Partitioned into equivalence classes.

• Partitioning is done such that:
 The program behaves in a similar manner for every input value belonging to an equivalence class.
 At the very least, there should be as many equivalence classes as scenarios.
Equivalence Class Partitioning

• How do we identify equivalence classes?
 Identify scenarios
 Examine the input data
 Examine the output

• A few other guidelines are also available for determining equivalence classes.
Equivalence Partitioning

• First-level partitioning:
 Valid vs. invalid test cases

[Figure: the input domain split into Valid and Invalid regions.]
Equivalence Partitioning

• Further partition the valid and invalid test cases into equivalence classes

[Figure: the Valid and Invalid regions subdivided into equivalence classes.]
Equivalence Partitioning

• Create a test case for at least one value from each equivalence class

[Figure: one representative value selected from each valid and invalid class.]
Equivalence Class Partitioning

• If the input data to the program is specified by a range of values:
 e.g. numbers between 1 and 100.
 One valid and two invalid equivalence classes are defined.

[Number line: values below 1 (invalid), 1 to 100 (valid), above 100 (invalid).]
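A minimal sketch of how those three classes become concrete test cases, assuming a hypothetical validator in_range that accepts exactly the values 1 to 100:

```c
#include <assert.h>

/* Hypothetical unit: accepts values in the specified range 1..100. */
int in_range(int n) {
    return n >= 1 && n <= 100;
}

int main(void) {
    assert(in_range(50)  == 1);  /* valid class: 1..100          */
    assert(in_range(0)   == 0);  /* invalid class: values < 1    */
    assert(in_range(101) == 0);  /* invalid class: values > 100  */
    return 0;
}
```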
Equivalence Class Partitioning

• If the input is an enumerated set of values, e.g.:
 {a, b, c}

• Define:
 One equivalence class for valid input values.
 Another equivalence class for invalid input values.
Equivalence Partitioning
• A set of input values constitutes an equivalence class if the tester believes these are processed identically:
 Example: issue_book(book_id);
 A different set or sequence of instructions may be executed based on the book type.

[Figure: issue_book branches on the book type: a book (single volume or multiple volume) versus a reference book.]
Multiple Input Parameters: Weak Equivalence Class Testing

[Grid: age classes (5–30, >30) against education classes (School, UG, PG); weak equivalence class testing covers each class of each parameter at least once, rather than every combination.]
Strong Equivalence Class Testing

[Grid: age classes (5–30, >30) against education classes (School, UG, PG); strong equivalence class testing covers every combination of classes, i.e. the full cross product.]
Strong Robust Equivalence Class Testing

[Grid: the same age and education classes extended with each parameter’s invalid classes (e.g., age below 5); all combinations, including the invalid ones, are covered.]
Quiz
• Design equivalence class test cases:
• A bank pays different interest rates on deposits depending on the deposit period:
 3% for deposits up to 15 days
 4% for deposits over 15 days and up to 180 days
 6% for deposits over 180 days and up to 1 year
 7% for deposits over 1 year but less than 3 years
 8% for deposits of 3 years and above
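One possible answer sketch, under stated assumptions: the interest_rate function below is hypothetical, deposit periods are expressed in days with 1 year taken as 365 days, and each assert picks one representative value per equivalence class, plus an invalid class for non-positive periods. Boundary values (15, 180, 365, 1095 days) would be added under boundary value testing.

```c
#include <assert.h>

/* Hypothetical unit for the quiz: interest rate (in %) by deposit
 * period in days, taking 1 year = 365 days for illustration. */
int interest_rate(int days) {
    if (days <= 0)      return -1;  /* invalid class: non-positive period */
    if (days <= 15)     return 3;
    if (days <= 180)    return 4;
    if (days <= 365)    return 6;
    if (days < 3 * 365) return 7;
    return 8;
}

int main(void) {
    assert(interest_rate(10)   == 3);   /* up to 15 days               */
    assert(interest_rate(100)  == 4);   /* 16..180 days                */
    assert(interest_rate(300)  == 6);   /* 181 days to 1 year          */
    assert(interest_rate(700)  == 7);   /* over 1 year, under 3 years  */
    assert(interest_rate(2000) == 8);   /* 3 years and above           */
    assert(interest_rate(-5)   == -1);  /* invalid equivalence class   */
    return 0;
}
```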
Quiz: Handling Multiple Equivalence Classes

• For deposits of less than Rs. 1 Lakh, rate of interest:
 6% for deposits up to 1 year
 7% for deposits over 1 year but less than 3 years
 8% for deposits of 3 years and above

• For deposits of more than Rs. 1 Lakh, rate of interest:
 7% for deposits up to 1 year
 8% for deposits over 1 year but less than 3 years
 9% for deposits of 3 years and above
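With two input parameters, strong equivalence class testing takes one test from every combination of amount class and period class, here 2 × 3 = 6 valid combinations. A sketch with assumed representative values (note that the wording above leaves a deposit of exactly Rs. 1 Lakh unspecified, which is itself a boundary worth testing):

```c
#include <stdio.h>

/* Strong equivalence class testing over two parameters:
 * amount class x period class = 2 * 3 = 6 valid combinations.
 * Representative values are assumptions for illustration. */
int main(void) {
    long amounts[2] = { 50000, 200000 };  /* below / above Rs. 1 Lakh */
    int periods[3] = { 200, 700, 2000 };  /* days: <=1yr, 1-3yr, >=3yr */

    for (int a = 0; a < 2; a++)
        for (int p = 0; p < 3; p++)
            printf("test: amount=%ld, period=%d days\n",
                   amounts[a], periods[p]);
    return 0;
}
```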
