Testing
Example: 3 inputs
I1 has 10 equivalence classes
I2 has 10 equivalence classes
I3 has 10 equivalence classes
Total test cases required: 10 x 10 x 10 = 1000 test cases.
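The combinatorial blow-up above can be checked with a short sketch (the class labels are illustrative placeholders, not from the slides):

```python
from itertools import product

# One representative value per equivalence class, 10 classes per input.
classes_I1 = [f"I1_c{i}" for i in range(10)]
classes_I2 = [f"I2_c{i}" for i in range(10)]
classes_I3 = [f"I3_c{i}" for i in range(10)]

# Exhaustive combination testing: one test case per triple of classes.
test_cases = list(product(classes_I1, classes_I2, classes_I3))
print(len(test_cases))  # 10 x 10 x 10 = 1000
```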
Dependency Islands
• Each output is usually not dependent on all inputs
Example: suppose we have 6 inputs I1, ..., I6 and 3 outputs O1, ..., O3.
Suppose O1 depends on I1, I2, I3
        O2 depends on I4, I5
        O3 depends on I6
If each input has 5 equivalence classes:
To test I1 we need 5 test cases
To test I2 we need 5 test cases
To test I1 and I2 together, we need 5 x 5 test cases
Thus for all 6 inputs together, we need 5^6 = 15,625 test cases
Using dependency islands:
For O1: test only I1, I2, I3 : 5^3 = 125 test cases
For O2: test only I4, I5 : 5^2 = 25 test cases
For O3: test only I6 : 5 test cases
Total test cases = 125 + 25 + 5 = 155
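A quick sketch of the savings, using the class and dependency counts from the example above:

```python
# Each of the 6 inputs has 5 equivalence classes.
classes_per_input = 5

# Naive approach: every combination of all 6 inputs.
naive = classes_per_input ** 6

# Dependency islands: O1 <- {I1, I2, I3}, O2 <- {I4, I5}, O3 <- {I6}.
island_sizes = [3, 2, 1]  # number of inputs feeding each output
with_islands = sum(classes_per_input ** n for n in island_sizes)

print(naive)         # 15625
print(with_islands)  # 155
```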
Stub Testing (Stubs and Drivers)
- Unit and Integration testing
Stubs: dummy modules used for testing if higher-level modules are working properly.
Top-Down Integration Testing
[Diagram: top module A calls B, C, D, which in turn call E, F, G. Integration proceeds top-down: A is tested first with stubs b, c, d standing in for B, C, D; then B, C, D are integrated and tested with stubs e, f, g standing in for E, F, G.]
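A minimal sketch of a stub in Python, matching the module names in the diagram (the methods and canned values are hypothetical, not from the slides):

```python
class StubB:
    """Stub standing in for the real module B: returns a canned,
    predictable value so A's logic can be tested before B exists."""
    def fetch_total(self, order_id):
        return 100  # fixed canned result

class A:
    """Higher-level module under test; depends on a B-like collaborator."""
    def __init__(self, b):
        self.b = b

    def total_with_tax(self, order_id, rate_percent=10):
        subtotal = self.b.fetch_total(order_id)
        return subtotal + subtotal * rate_percent // 100

# Top-down testing: exercise A with the stub in place of B.
a = A(StubB())
print(a.total_with_tax("order-1"))  # 110, using the stub's canned subtotal
```

Because the stub's output is fixed, any wrong result points at A itself, which is exactly the isolation top-down integration testing is after.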
Bottom-Up Integration Testing
[Diagram: the same module tree (A on top of B, C, D, which call E, F, G). Integration proceeds bottom-up: the lowest-level modules E, F, G are tested first using driver modules in place of their not-yet-integrated callers; then B, C, D are integrated, with a driver a standing in for the top module A.]
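Conversely, a driver for bottom-up testing can be sketched as follows (the function and test inputs are hypothetical examples):

```python
def module_E(x):
    """Low-level module under test (hypothetical example: squares its input)."""
    return x * x

def driver_for_E():
    """Driver standing in for the not-yet-written caller B:
    feeds module E test inputs and checks its outputs."""
    cases = [(0, 0), (3, 9), (-2, 4)]
    for arg, expected in cases:
        assert module_E(arg) == expected, f"E({arg}) != {expected}"
    print("module E passed", len(cases), "driver cases")

driver_for_E()
```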
• Cost of developing stubs and drivers
– generally, driver modules are easier to develop, so
bottom-up integration testing is less costly.
• With Top-Down Integration Testing, major modules are coded
and tested first - strong psychological boost when major
modules are done.
• With Bottom-Up Integration Testing, no working program can be
demonstrated until the last module is tested -- major design
errors may not be detected until the end, when the entire program
may need revision!
• Meet-in-the-middle approach may be best.
System Testing
• Recovery Testing
– forces software failure and verifies complete recovery
• Security Testing
– verify proper controls have been designed
• Stress Testing
– resource demands (frequency, volume, etc.)
Acceptance Testing
• Alpha Testing (Verification testing)
– simulated real-world operating environment: simulated data, in a lab setting
– systems professionals present as observers; they record errors, usage problems, etc.
• Beta Testing (Validation Testing)
– live environment, using real data
– no systems professional present
– performance (throughput, response time)
– peak workload performance, human factors tests, methods and
procedures, backup and recovery, audit tests