
QUESTION BANK-complete Software Testing

The document is a question bank for the Software Testing course at Dayananda Sagar Academy of Technology & Management for the MCA program. It includes various questions covering topics such as error definitions, test automation, boundary value analysis, and fault-based testing, along with their respective marks and cognitive levels. The questions aim to assess students' understanding and application of software testing concepts and techniques.

Uploaded by

Sunita Jeevangi

DAYANANDA SAGAR ACADEMY OF TECHNOLOGY & MANAGEMENT

(An Autonomous Institute under VTU, Belagavi)


Third Semester MCA Semester End Examination – April/May 2025

Course: SOFTWARE TESTING

Course Code: MMC324


MODULE 1 QUESTION BANK

QUESTIONS MARKS CO RBT


1 Define the terms error, fault, failure, and bug in the context of software 10 CO1 L2
testing. Provide an example for each term to illustrate their differences.
2 Explain the concepts of static and dynamic quality attributes in software 10 CO1 L2
testing. Provide examples for each category.
3 Describe the role of test automation in software testing. List and briefly 10 CO1 L2
explain any three tools used for test automation.
4 Illustrate the process of testing and debugging with a diagram. Explain each 10 CO2 L3
step involved in the cycle.
5 Construct a test plan for a sorting program that sorts integers in ascending 10 CO2 L3
or descending order based on user input. Include test cases for valid and
invalid inputs.
6 Demonstrate the differences between valid and invalid inputs in software 10 CO2 L3
testing using an example of a program that calculates the square root of a
number.
7 Compare and contrast hardware and software testing techniques. Highlight 10 CO3 L4
the key differences in fault models and test domains.
8 Analyze the importance of model-based testing in software development. 10 CO3 L4
Provide an example of a model (e.g., finite-state machine) and explain how
tests can be derived from it.
9 Evaluate the effectiveness of random testing in software reliability 10 CO3 L4
estimation. Discuss its advantages and limitations.
10 Discuss the significance of defect management in software development. 10 CO4 L5
Explain the stages involved in defect prevention, discovery, and resolution.
11 Assess the impact of test metrics on software quality. Explain any two 10 CO4 L5
metrics (e.g., cyclomatic complexity, Halstead metrics) and their relevance
in testing.
12 Propose a strategy for improving software reliability using operational 10 CO5 L6
profiles. Explain how different usage patterns affect reliability.
13 Design an oracle mechanism for a video library management system 10 CO5 L6
(Hvideo) to verify correct data entry and search functionalities. Include a
diagram to support your explanation.
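The valid/invalid split asked for in question 6 can be made concrete with a short sketch. The validation rules below are an illustrative choice, not prescribed by the question:

```python
import math

def safe_sqrt(x):
    """Square root with explicit input validation, so test cases can
    target the valid and invalid input classes separately."""
    if isinstance(x, bool) or not isinstance(x, (int, float)):
        raise TypeError("numeric input required")
    if x < 0:
        raise ValueError("negative input has no real square root")
    return math.sqrt(x)

print(safe_sqrt(4))   # valid input -> 2.0
print(safe_sqrt(0))   # boundary of the valid domain -> 0.0
# Invalid inputs such as -1 (out of domain) or "9" (wrong type)
# raise ValueError and TypeError respectively.
```

A test plan would pair each class (positive, zero, negative, non-numeric) with the expected result or expected exception.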
MODULE 2 QUESTION BANK

QUESTIONS MARKS CO RBT


1 Compare and contrast functional and structural testing methods, highlighting 10 CO1 L2
their respective advantages and limitations in software testing.
2 Analyse the significance of boundary value analysis (BVA) in identifying 10 CO2 L3
faults, using the Triangle Problem as an example to illustrate its effectiveness.
3 Discuss the challenges associated with faults of omission and commission in 10 CO1 L2
software testing, providing examples from real-world scenarios.
4 Evaluate the suitability of equivalence class partitioning for the NextDate 10 CO4 L5
function, considering its input domain complexities
5 Examine the role of test coverage metrics in structural testing, and explain 10 CO3 L4
how they enhance testing management.
6 Critique the limitations of decision table-based testing when applied to the 10 CO4 L5
Commission Problem, contrasting it with its effectiveness in the Triangle
Problem.
7 Assess the importance of test case documentation, including pre-conditions, 10 CO5 L6
inputs, and expected outputs, in ensuring reliable software testing.
8 Explore the implications of the "Venn diagram" perspective on testing, 10 CO1 L2
focusing on the relationships between specified, programmed, and tested
behaviours.
9 Analyse the impact of input domain constraints on testing strategies, using the 10 CO3 L4
Saturn Windshield Wiper Controller as a case study.
10 Discuss the trade-offs between clarity and efficiency in software 10 CO2 L3
implementations, referencing the traditional and structured versions of the
Triangle Problem.
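Several of the questions above lean on the Triangle Problem. A minimal reference implementation is sketched below; the side range [1, 200] follows the common textbook convention and is an assumption here, not something the questions specify:

```python
def triangle_type(a, b, c):
    """Classic Triangle Problem: classify three integer sides."""
    # Range check first (assumed convention: each side in [1, 200]).
    if not all(1 <= s <= 200 for s in (a, b, c)):
        return "out of range"
    # Triangle inequality: each side must be shorter than the sum
    # of the other two; equality gives a degenerate (flat) triangle.
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Boundary-flavoured probes: the triangle-inequality boundary and
# the upper range boundary.
print(triangle_type(1, 1, 2))          # "not a triangle" (degenerate)
print(triangle_type(200, 200, 200))    # "equilateral"
```

Equivalence-class and decision-table questions in the modules below can reuse the same five output classes as expected results.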

MODULE 3 QUESTION BANK

QUESTIONS MARKS CO RBT
1. Define Boundary Value Analysis (BVA) and explain its significance in 10 CO1 L
software testing.
2. List the five boundary values (min, min+, nom, max-, max) for a variable 10 CO1 L
with range [10, 50]. How does BVA help in identifying defects?

3. What is the "single fault assumption" in boundary value testing? Provide 10 CO1 L
an example where this assumption is valid.
4. Apply boundary value analysis to generate test cases for a function that 10 CO2 L
accepts two integers (x: [1, 100], y: [50, 200]). How many test cases are
produced?

5. Given the equivalence classes for the Triangle Problem (e.g., equilateral, 10 CO2 L
isosceles, scalene), design test cases using weak equivalence class testing.

6. Explain how robustness testing extends boundary value analysis. Provide test 10 CO2 L
cases for a function with input range [0, 100] using robustness testing.

7. Analyse the limitations of boundary value analysis when variables are 10 CO3 L
interdependent. Use the NextDate function as an example.

8. Compare and contrast weak and strong equivalence class testing. Which 10 CO3 L
method is more thorough, and why?

9. Evaluate the effectiveness of decision table-based testing for the 10 CO4
Commission Problem. Why is it less suitable compared to the Triangle
Problem?

10. Critically assess the role of special value testing in scenarios where formal 10 CO4
techniques (e.g., BVA, equivalence partitioning) fall short. Provide an
example.
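The counts asked for in questions 4 and 6 above can be checked mechanically: single-fault BVA yields 4n+1 test cases and robustness testing 6n+1. A minimal Python sketch follows; taking the nominal value as the range midpoint is an illustrative choice, not mandated by the technique:

```python
def boundary_cases(ranges, robust=False):
    """Single-fault boundary value analysis: hold every variable at its
    nominal value and vary one variable at a time through its boundary
    values. Plain BVA probes min, min+, nom, max-, max (4n+1 cases);
    robustness testing adds the out-of-range values min-1 and max+1
    (6n+1 cases)."""
    noms = tuple((lo + hi) // 2 for lo, hi in ranges)
    cases = {noms}
    for i, (lo, hi) in enumerate(ranges):
        probes = [lo, lo + 1, hi - 1, hi]
        if robust:
            probes += [lo - 1, hi + 1]
        for v in probes:
            case = list(noms)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

# Question 4: x in [1, 100], y in [50, 200] -> 4*2 + 1 = 9 cases
print(len(boundary_cases([(1, 100), (50, 200)])))    # 9
# Question 6: one variable in [0, 100] -> 6*1 + 1 = 7 robust cases
print(len(boundary_cases([(0, 100)], robust=True)))  # 7
```

The set-based bookkeeping also makes the single fault assumption (question 3) visible: every generated case differs from the all-nominal case in at most one variable.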
MODULE 4 QUESTION BANK

QUESTION MARKS CO RBT


1) Analyze the concept of DD-Paths (Decision-to-Decision Paths) in path 10 CO1 L2
testing. Apply this concept to construct the DD-Path graph for
the triangle problem (provided in the module) and classify each node
according to the five cases of DD-Path definition. Justify your
classification with examples.
2) Evaluate the significance of test coverage metrics (e.g., C0, C1, 10 CO1 L2
CMCC) in structural testing. Explain how each metric ensures
different levels of test thoroughness with examples. Compare their
effectiveness in fault detection.
3) 10 CO2 L3
4) Analyze the feasible and infeasible paths in the basis path testing of 10 CO2 L3
the triangle problem. Compare the original basis paths with the
reduced set after eliminating infeasible paths. Discuss the impact of
logical dependencies on path selection.
5) Explain data flow testing with a focus on Define/Use testing. Illustrate 10 CO2 L3
the process by identifying du-paths and dc-paths for the
variable sales in the commission problem. Discuss how these paths
help detect anomalies.
6) Apply McCabe’s basis path testing method to the bubble sort 10 CO3 L4
algorithm (provided in the module). Demonstrate the steps to compute
cyclomatic complexity, derive the basis set of paths, and design test
cases to cover them.
7) Evaluate the hierarchy of Rapps-Weyuker data flow coverage 10 CO4 L5
metrics (All-Defs, All-Uses, All-DU-Paths). Discuss their subsumption
relationships and practical challenges in achieving higher-level coverage
(e.g., All-DU-Paths).
8) Synthesize the concept of slice-based testing by constructing slices for 10 CO3 L4
the variable commission in the commission problem. Create a lattice
diagram to show the subset relationships between slices and explain
their significance in debugging.
9) Compare static and dynamic data flow testing techniques. Contrast 10 CO4 L5
their advantages, limitations, and scenarios where each is most effective.
Use examples to illustrate each scenario.
10) Critically analyze the essential complexity metric in the context of 10 CO5 L6
structured programming constructs. Explain how condensing a graph
using these constructs reduces cyclomatic complexity to 1. Provide
examples from the module.
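The arithmetic behind McCabe's metric in question 6 can be sanity-checked on a plain bubble sort. The sketch below is a generic implementation, not the exact listing from the module:

```python
def bubble_sort(a):
    """Plain bubble sort with three decision points: the two loop
    headers and the comparison."""
    n = len(a)
    for i in range(n - 1):              # predicate node 1
        for j in range(n - 1 - i):      # predicate node 2
            if a[j] > a[j + 1]:         # predicate node 3
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

# For a structured program, V(G) = P + 1 where P is the number of
# predicate nodes, so V(G) = 3 + 1 = 4: the basis set needs four
# linearly independent paths (e.g., empty input, already sorted,
# a single swap, multiple passes).
print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
```

The same count follows from V(G) = E - N + 2 on the program graph, which is the form most module solutions derive first.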
MODULE 5 QUESTION BANK

QUESTIONS CO RBT
1) Define fault-based testing and list the key assumptions (competent CO1 L2
programmer hypothesis, coupling effect) that underpin its effectiveness.
Provide examples of common mutation operators like AOR or ROR.
2) Explain the mutation analysis process, including the roles CO1 L2
of mutants, mutation operators, and test adequacy criteria. Illustrate
with an example of a mutant in the Transduce program.
3) Apply the fault-based adequacy criteria to evaluate a test suite for a CO2 L3
given program (e.g., the Transduce program). Calculate the adequacy
score if 3 out of 5 non-equivalent mutants are killed, and justify the
result.
4) Analyze the differences between strong mutation and weak CO3 L4
mutation testing. Discuss scenarios where weak mutation might fail to
detect faults that strong mutation would catch, using the Transduce
program mutants as examples.
5) Evaluate the competent programmer hypothesis and coupling CO4 L5
effect in fault-based testing. Argue whether these assumptions hold true
for complex logical errors (e.g., a flawed algorithm) versus syntactic
errors (e.g., > replaced by >=). Support your answer with examples.
6) Design a generic scaffolding framework for unit testing a Java class. CO3 L4
Include components like test drivers, stubs, and oracles, and justify
how your design ensures controllability and observability.
7) Compare hardware fault-based testing (e.g., stuck-at-0/1 models) CO3 L4
with software mutation analysis. Highlight the challenges in adapting
hardware fault models to software testing.
8) Discuss the equivalent mutant problem in mutation analysis. Propose CO4 L5
strategies to minimize its impact on test adequacy evaluation,
referencing the Transduce program mutants.
9) Explain the role of test oracles in fault-based testing. Design a self- CO5 L6
checking oracle for the commission program to validate sales
calculations.
10) Critique statistical mutation analysis as a method for fault CO4 L5
estimation. Using the fish population analogy from the module, argue
its reliability for estimating residual software faults.
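The adequacy calculation in question 3 is a simple ratio over the non-equivalent mutants. A one-function sketch (the parameter names are illustrative):

```python
def mutation_adequacy(killed, live_equivalent, total_mutants):
    """Adequacy score = killed mutants / non-equivalent mutants.
    Equivalent mutants behave identically to the original program,
    can never be killed, and are excluded from the denominator."""
    nonequivalent = total_mutants - live_equivalent
    return killed / nonequivalent

# Question 3: 3 of 5 non-equivalent mutants killed -> 0.6
print(mutation_adequacy(killed=3, live_equivalent=0, total_mutants=5))
```

Separating out `live_equivalent` also makes the equivalent mutant problem of question 8 concrete: misclassifying an equivalent mutant as live deflates the score.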
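The self-checking oracle of question 9 can be sketched for the Commission Problem. The item prices ($45 locks, $30 stocks, $25 barrels) and the 10%/15%/20% tiers at $1000 and $1800 follow the usual textbook formulation and are assumptions here:

```python
def commission(sales):
    """Tiered commission: 10% on the first $1000, 15% on the next
    $800, and 20% on everything above $1800."""
    if sales <= 1000:
        return 0.10 * sales
    if sales <= 1800:
        return 100.0 + 0.15 * (sales - 1000)
    return 220.0 + 0.20 * (sales - 1800)

def oracle_check(locks, stocks, barrels, reported, tol=0.01):
    """Self-checking oracle: recompute total sales and commission
    independently of the program under test and compare against the
    value it reported, within a small tolerance."""
    sales = 45 * locks + 30 * stocks + 25 * barrels
    return abs(commission(sales) - reported) <= tol

# 10 of each item -> sales = $1000, commission = $100.00
print(oracle_check(10, 10, 10, 100.0))  # True
```

The oracle deliberately duplicates the specification rather than the implementation, so a fault in the program's tier boundaries (e.g., `<` vs `<=`) is observable as a mismatch.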
