QUESTION BANK - Complete Software Testing
QUESTIONS (Marks, CO, RBT)
1. Define Boundary Value Analysis (BVA) and explain its significance in software testing. (10 marks, CO1, L)
2. List the five boundary values (min, min+, nom, max-, max) for a variable with range [10, 50]. How does BVA help in identifying defects? (10 marks, CO1, L)
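A worked sketch for this question (the class and method names below are illustrative, not from the syllabus): standard BVA for [10, 50] gives min = 10, min+ = 11, nom = 30, max- = 49, max = 50.

    // Illustrative helper: computes the five standard BVA values for an integer range.
    final class BvaValues {
        static int[] fiveValues(int min, int max) {
            int nom = (min + max) / 2;                       // nominal (mid-range) value
            return new int[] { min, min + 1, nom, max - 1, max };
        }

        public static void main(String[] args) {
            // For the range [10, 50]: prints 10, 11, 30, 49, 50
            for (int v : fiveValues(10, 50)) System.out.println(v);
        }
    }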
3. What is the "single fault assumption" in boundary value testing? Provide an example where this assumption is valid. (10 marks, CO1, L)
4. Apply boundary value analysis to generate test cases for a function that accepts two integers (x: [1, 100], y: [50, 200]). How many test cases are produced? (10 marks, CO2, L)
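A hedged sketch of the expected count: under the single-fault assumption, BVA yields 4n + 1 test cases for n variables, i.e. 4(2) + 1 = 9 here. The generator below is illustrative only.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative BVA test-case generator for two variables under the single-fault assumption.
    final class BvaTwoVariables {
        public static void main(String[] args) {
            int[] xs = values(1, 100);    // {1, 2, 50, 99, 100}
            int[] ys = values(50, 200);   // {50, 51, 125, 199, 200}
            int xNom = xs[2], yNom = ys[2];

            List<int[]> cases = new ArrayList<>();
            for (int x : xs) cases.add(new int[] { x, yNom });  // vary x, hold y at nominal
            for (int y : ys) cases.add(new int[] { xNom, y });  // vary y, hold x at nominal
            // The two nominal/nominal cases coincide, so the distinct count is 4n + 1 = 9.
            System.out.println("Raw cases: " + cases.size() + " (distinct: 9)");
        }

        static int[] values(int min, int max) {
            return new int[] { min, min + 1, (min + max) / 2, max - 1, max };
        }
    }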
5. Given the equivalence classes for the Triangle Problem (e.g., equilateral, isosceles, scalene), design test cases using weak equivalence class testing. (10 marks, CO2, L)
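An illustrative set of weak equivalence class test cases, assuming the usual textbook output classes of equilateral, isosceles, scalene, and not-a-triangle (one representative triple per class; exact class definitions should follow the course text):

    // Illustrative weak equivalence class test cases for the Triangle Problem.
    final class TriangleWeakEct {
        public static void main(String[] args) {
            int[][] cases     = { {5, 5, 5}, {5, 5, 3}, {3, 4, 5}, {1, 2, 8} };
            String[] expected = { "equilateral", "isosceles", "scalene", "not a triangle" };
            for (int i = 0; i < cases.length; i++) {
                System.out.printf("(%d, %d, %d) -> %s%n",
                    cases[i][0], cases[i][1], cases[i][2], expected[i]);
            }
        }
    }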
6. Explain how robustness testing extends boundary value analysis. Provide test cases for a function with input range [0, 100] using robustness testing. (10 marks, CO2, L)
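A worked sketch for this range (names are illustrative): robustness testing adds the out-of-range values min - 1 and max + 1 to BVA's five values, giving seven values per variable, here -1, 0, 1, 50, 99, 100, 101.

    // Illustrative robustness-testing values for a single variable with range [0, 100].
    final class RobustnessValues {
        static int[] sevenValues(int min, int max) {
            int nom = (min + max) / 2;
            return new int[] { min - 1, min, min + 1, nom, max - 1, max, max + 1 };
        }

        public static void main(String[] args) {
            // For [0, 100]: prints -1, 0, 1, 50, 99, 100, 101
            for (int v : sevenValues(0, 100)) System.out.println(v);
        }
    }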
7. Analyse the limitations of boundary value analysis when variables are interdependent. Use the NextDate function as an example. (10 marks, CO3, L)
8. Compare and contrast weak and strong equivalence class testing. Which method is more thorough, and why? (10 marks, CO3, L)
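A hedged arithmetic illustration (the class counts are invented for the example): with 3 equivalence classes for x and 2 for y, weak equivalence class testing needs only max(3, 2) = 3 test cases, since classes of different variables can be covered in parallel, while strong equivalence class testing needs the full Cartesian product, 3 x 2 = 6 test cases.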
10. Critically assess the role of special value testing in scenarios where formal techniques (e.g., BVA, equivalence partitioning) fall short. Provide an example. (10 marks, CO4)
MODULE 4 QUESTION BANK
QUESTIONS (CO, RBT)
1) Define fault-based testing and list the key assumptions (competent programmer hypothesis, coupling effect) that underpin its effectiveness. Provide examples of common mutation operators like AOR or ROR. (CO1, L2)
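An illustrative pair of mutants for reference (the methods are invented for the example, not taken from the Transduce program): AOR replaces one arithmetic operator with another, ROR replaces one relational operator with another.

    // Illustrative AOR and ROR mutants of small (invented) methods.
    final class MutantExamples {
        // Original method
        static int average(int a, int b)     { return (a + b) / 2; }
        // AOR mutant: '+' replaced by '-'
        static int averageAor(int a, int b)  { return (a - b) / 2; }

        // Original predicate
        static boolean inRange(int x)        { return x >  0 && x < 100; }
        // ROR mutant: '>' replaced by '>='
        static boolean inRangeRor(int x)     { return x >= 0 && x < 100; }

        public static void main(String[] args) {
            // A test with x = 0 kills the ROR mutant: original -> false, mutant -> true.
            System.out.println(inRange(0) + " vs " + inRangeRor(0));
        }
    }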
2) Explain the mutation analysis process, including the roles of mutants, mutation operators, and test adequacy criteria. Illustrate with an example of a mutant in the Transduce program. (CO1, L2)
3) Apply the fault-based adequacy criteria to evaluate a test suite for a given program (e.g., the Transduce program). Calculate the adequacy score if 3 out of 5 non-equivalent mutants are killed, and justify the result. (CO2, L3)
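A hedged worked calculation for the figures stated in the question (equivalent mutants are excluded from the denominator):

    mutation adequacy score = killed mutants / non-equivalent mutants = 3 / 5 = 0.60 (60%)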
4) Analyze the differences between strong mutation and weak mutation testing. Discuss scenarios where weak mutation might fail to detect faults that strong mutation would catch, using the Transduce program mutants as examples. (CO3, L4)
5) Evaluate the competent programmer hypothesis and coupling effect in fault-based testing. Argue whether these assumptions hold true for complex logical errors (e.g., a flawed algorithm) versus syntactic errors (e.g., > replaced by >=). Support your answer with examples. (CO4, L5)
6) Design a generic scaffolding framework for unit testing in Java. Include components like test drivers, stubs, and oracles, and justify how your design ensures controllability and observability. (CO3, L4)
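A minimal sketch of such scaffolding (all class and method names here are illustrative assumptions, not a prescribed design): a driver builds inputs and invokes the unit under test, a stub replaces an uncontrollable dependency (controllability), and an oracle checks the observed result (observability).

    // Illustrative unit-test scaffolding: driver + stub + oracle (names are invented).
    interface RateService {                        // dependency of the unit under test
        double commissionRate(String region);
    }

    class StubRateService implements RateService { // stub: fixed, controllable response
        public double commissionRate(String region) { return 0.10; }
    }

    class SalesCalculator {                        // unit under test (simplified)
        private final RateService rates;
        SalesCalculator(RateService rates) { this.rates = rates; }
        double commission(String region, double sales) {
            return sales * rates.commissionRate(region);
        }
    }

    public class SalesCalculatorDriver {           // driver: sets up inputs, runs the unit
        public static void main(String[] args) {
            SalesCalculator unit = new SalesCalculator(new StubRateService());
            double actual = unit.commission("WEST", 1000.0);
            // Oracle: compares the observed output with an independently known expected value.
            double expected = 100.0;
            System.out.println(Math.abs(actual - expected) < 1e-9 ? "PASS" : "FAIL: " + actual);
        }
    }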
7) Compare hardware fault-based testing (e.g., stuck-at-0/1 models) with software mutation analysis. Highlight the challenges in adapting hardware fault models to software testing. (CO3, L4)
8) Discuss the equivalent mutant problem in mutation analysis. Propose strategies to minimize its impact on test adequacy evaluation, referencing the Transduce program mutants. (CO4, L5)
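An illustrative equivalent mutant (the loop below is invented, not taken from Transduce): replacing i < a.length with i != a.length cannot be killed by any test, because i starts at 0 and only ever reaches a.length in steps of one.

    // Illustrative equivalent mutant: no input distinguishes the two methods.
    final class EquivalentMutant {
        static int sumOriginal(int[] a) {
            int s = 0;
            for (int i = 0; i < a.length; i++) s += a[i];    // original: i < a.length
            return s;
        }
        static int sumMutant(int[] a) {
            int s = 0;
            for (int i = 0; i != a.length; i++) s += a[i];   // mutant: i != a.length
            return s;
        }
    }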
9) Explain the role of test oracles in fault-based testing. Design a self-checking oracle for the commission program to validate sales calculations. (CO5, L6)
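A hedged sketch of one possible self-checking oracle; the prices and commission bands are assumptions taken from one common textbook version of the commission problem and should be adjusted to the version used in the course.

    // Illustrative self-checking oracle sketch for a commission calculation.
    final class CommissionOracle {
        // Assumed figures: lock $45, stock $30, barrel $25; 10% on the first $1000 of
        // sales, 15% on the next $800, 20% above $1800. Adjust to the course version.
        static double expectedCommission(int locks, int stocks, int barrels) {
            double sales = 45.0 * locks + 30.0 * stocks + 25.0 * barrels;
            if (sales <= 1000.0) return 0.10 * sales;
            if (sales <= 1800.0) return 100.0 + 0.15 * (sales - 1000.0);
            return 220.0 + 0.20 * (sales - 1800.0);
        }

        // Self-checking comparison: recomputes the expected value independently of the
        // program under test and reports pass/fail instead of needing manual inspection.
        static boolean check(int locks, int stocks, int barrels, double observed) {
            return Math.abs(observed - expectedCommission(locks, stocks, barrels)) < 1e-6;
        }

        public static void main(String[] args) {
            // Example: 10 locks, 10 stocks, 10 barrels -> sales 1000 -> commission 100.
            System.out.println(check(10, 10, 10, 100.0) ? "PASS" : "FAIL");
        }
    }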
10) Critique statistical mutation analysis as a method for fault estimation. Using the fish population analogy from the module, argue its reliability for estimating residual software faults. (CO4, L5)
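A hedged worked form of the underlying estimate (the numbers are invented for illustration): if S seeded faults (tagged fish) are introduced and testing (a recapture sample) reveals s of them together with n unseeded faults, the capture-recapture estimate of the total natural faults is N ~ n x S / s. For example, S = 20, s = 10, n = 8 gives N ~ 16, so roughly 8 natural faults would be estimated to remain after the 8 already found.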