Lecture 05
Software Quality Engineering (SQE-SE3613)
Software Testing Techniques and Strategies
Structural or White-box Testing (WBT)
• Statement Coverage Testing
• Example inputs: A = 3, B = 9 and A = -3, B = -9, chosen so that together they execute every statement at least once
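The two example inputs can be seen achieving statement coverage on a small sketch. The slide's actual code fragment is not shown, so the function below is a hypothetical stand-in with one branch per input:

```python
# A minimal statement-coverage illustration (hypothetical code standing
# in for the slide's fragment).

executed = set()  # records which statements actually ran

def classify(a, b):
    executed.add("entry")
    if a > 0:
        executed.add("then")      # runs for A = 3, B = 9
        return b // a
    else:
        executed.add("else")      # runs for A = -3, B = -9
        return b * a

classify(3, 9)        # covers "entry" and "then"
classify(-3, -9)      # covers "else"
assert executed == {"entry", "then", "else"}  # every statement executed
```

Neither input alone reaches both branches; the pair together satisfies the statement-coverage criterion.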
Comparing BBT & WBT
• Perspective: a key difference
• BBT
• Views the objects of testing as a black-box
• Focus: on input-output relations or external functional behavior
• WBT
• Views the objects as a glass-box
• Focus: internal implementation details are visible and tested
• Objects:
• WBT tests small objects e.g. small software products or small parts of large
products
• BBT tests large objects i.e. software systems as a whole
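The perspective difference can be illustrated with a hypothetical unit under test: the black-box check asserts only on externally visible input-output behavior, while the white-box check inspects an internal detail that external behavior never reveals.

```python
# Hypothetical unit under test: a bounded counter.
class BoundedCounter:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.overflows = 0          # internal implementation detail

    def increment(self):
        if self.value < self.limit:
            self.value += 1
        else:
            self.overflows += 1     # internal path a black-box test never sees

# Black-box view: only the external input-output relation.
c = BoundedCounter(limit=2)
for _ in range(3):
    c.increment()
assert c.value == 2                 # external behavior: value is capped

# White-box view: internal implementation details are visible and checked.
assert c.overflows == 1             # the overflow branch was exercised once
```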
• Timeline:
• WBT in early sub-phases of testing such as unit and component testing
• BBT in late sub-phases such as system and acceptance testing
• Defect focus:
• BBT detects failures related to external functions
• WBT focuses on implementation related faults
• Defect detection and fixing:
• BBT is effective for problems of interfaces and interactions
• WBT for problems localized within a small unit
• Tester:
• BBT by dedicated professional testers, or 3rd party IV&V (independent
verification and validation) bodies
• WBT by developers themselves
When to Stop Testing?
• The issue may be broken down into two sub-issues:
• Local/small scale:
• When to stop a specific test activity?
• Global/large scale:
• When to stop all the major test activities?
• May yield different answers, leading us to different
testing techniques
• Decision to stop testing can usually be made taking two
approaches
• Informal
• Formal
When to Stop Testing? - Informal
Approaches
• Two types:
• Resource-based criteria: employed only when cost & schedule are the dominant attributes
• Stop when you run out of time
• Stop when you run out of money
• Activity-based criteria:
• Stop when you complete planned test activities
• Irresponsible as far as product quality is concerned:
• Stopping on time & money alone → quality and other problems
• Completing the planned activities does not mean quality has been achieved
When to Stop Testing? - Formal
Approaches
• Two levels: Global level & Localized Level
• Global level: Exit from testing is associated with product release
• Stop when quality goals reached
• Obvious way to make such product release decisions is the use of various
reliability assessments (Direct quality measure)
• Various formal reliability assessment models exist
• Conclusion: reliability assessments should be close to what actual users would
expect
• This requires that the testing right before product release resembles actual usages by target users
• This requirement resulted in the so-called usage-based testing
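As one minimal sketch of such an assessment, the Nelson model (one of the simplest of the formal reliability models mentioned above) estimates reliability as the fraction of operational-profile test runs that succeed. The run counts below are made-up illustration data:

```python
# Nelson-model reliability sketch: estimate reliability as the fraction
# of operational-profile test runs that pass.

def estimate_reliability(run_results):
    """run_results: list of booleans, True = run passed."""
    return sum(run_results) / len(run_results)

# 1000 hypothetical test runs sampled from the operational profile,
# 3 of which failed.
runs = [True] * 997 + [False] * 3
r = estimate_reliability(runs)
assert abs(r - 0.997) < 1e-9

# Release decision: stop testing once the estimate meets the quality goal.
reliability_goal = 0.995
print("release" if r >= reliability_goal else "keep testing")  # prints "release"
```

Because the runs are drawn from the operational profile, the estimate approximates what actual users would experience, which is exactly the requirement stated above.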
• Localized level:
• A reliability definition based on customer usage scenarios may not hold in this situation
• Reason: many of the internal components are never
directly used by actual users
• Result: reliability criterion may not be meaningful
• Alternative exit criteria are needed e.g. “Products should
not be released unless every component has been tested”
known as coverage criteria
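A coverage-criteria release gate of this kind can be sketched as a simple checklist (the component names are hypothetical):

```python
# Sketch of a component-coverage exit criterion: do not release unless
# every component has been exercised by at least one test.

components = {"parser", "scheduler", "logger"}   # hypothetical product components
tested = set()

def record_test(component):
    tested.add(component)

def may_release():
    return tested >= components      # all components covered?

record_test("parser")
record_test("scheduler")
assert not may_release()             # "logger" is still untested

record_test("logger")
assert may_release()                 # coverage criterion satisfied
```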
Usage-based Statistical Testing
(UBST)
• Actual use of the product by its users
• Users report the problems they face → post-release defect-fixing activities
• Expensive & harmful to the vendor’s reputation
• Vendors call it “beta testing”, probably to save reputation!
• If actual usage can be captured and used in testing → product
reliability directly assured
• Problem: massive number of customers & diverse usage
patterns cannot be captured in an exhaustive set of test cases
• Statistical sampling is needed
• For practical implementation of such a testing strategy:
• Actual usage information needs to be captured in various models, commonly referred to as “Operational Profiles” or OPs
• Applicable to the final stage of testing
• Exit criteria: reliability goals
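A minimal sketch of building a test suite from an operational profile, assuming made-up usage probabilities for three hypothetical operations:

```python
# Statistical sampling from an operational profile (OP): the usage
# probabilities below are illustrative; in practice they come from
# measured field data.

import random

operational_profile = {
    "view_account":   0.60,   # most frequent user operation
    "transfer_funds": 0.30,
    "close_account":  0.10,   # rare, but still sampled occasionally
}

def sample_operations(n, rng=random.Random(42)):
    ops = list(operational_profile)
    weights = [operational_profile[o] for o in ops]
    return rng.choices(ops, weights=weights, k=n)

suite = sample_operations(1000)
# Frequent operations dominate the suite, mirroring real usage.
assert suite.count("view_account") > suite.count("close_account")
```

Running such a suite exercises the product the way its users would, so the observed failure rate supports a direct reliability assessment against the exit criteria.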
Coverage-based Testing (CBT)
• Both BBT/WBT employ test coverage (with different scope)
• Simple Checklists:
• BBT: Checklist of major functions
• WBT: Checklist of all the product components (e.g., all the statements)
• Other Approaches (More Formal methods)
• Formally defined partitions are similar to checklists but ensure:
• Mutual exclusion of checklist items to avoid unnecessary repetition,
• Complete coverage defined accordingly
• Specialized type of partitions:
• Input domain partitions into sub-domains
• Finite state machines
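Input-domain partitioning can be sketched as follows, with made-up sub-domains over the integers; the check confirms both mutual exclusion and complete coverage, after which one representative test per partition suffices:

```python
# Sketch of input-domain partitioning: the integer domain is split into
# sub-domains that are mutually exclusive and together cover every input
# (sub-domains chosen for illustration).

partitions = {
    "negative": lambda x: x < 0,
    "zero":     lambda x: x == 0,
    "positive": lambda x: x > 0,
}

# One representative test input per sub-domain.
representatives = {"negative": -5, "zero": 0, "positive": 7}

# Mutual exclusion + completeness check on a sample of the domain:
# exactly one partition matches each input (>=1 means complete coverage,
# <=1 means no unnecessary repetition).
for x in range(-10, 11):
    matching = [name for name, pred in partitions.items() if pred(x)]
    assert len(matching) == 1

# One test per partition covers the whole checklist without repetition.
for name, x in representatives.items():
    assert partitions[name](x)
```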
• Applicability:
• Particularly unit and component testing
• If used in later phases, it works at higher abstraction levels
• Termination criteria: coverage goals
Comparing CBT with UBST
• Perspective:
• UBST: user’s perspective
• CBT: developer’s perspective
• Stopping Criteria:
• UBST: reliability criteria
• CBT: coverage criteria
• Objects:
• UBST: large software systems
• CBT: small units
• Timeline:
• UBST: late sub-phases (system/acceptance)
• CBT: early sub-phases (unit/component)
• Testing environment:
• UBST: similar to customer installation
• CBT: specific test environment
• Customer and user roles:
• UBST: active
• CBT: not active
• Tester:
• UBST: dedicated, professional testers
• CBT: developers usually perform the testing
Software Testing Levels
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
• Performance Testing
• Regression Testing