Software Testing Scheme
COMPUTER SCIENCE
Software Testing
(NEP SCHEME)
SCHEME OF VALUATION
PART-A
1. Define software testing. 2M
Software testing is the process of evaluating and verifying that a software product or
application does what it’s supposed to do. The benefits of good testing include preventing
bugs and improving performance.
3. Write Equivalence classes. 2M
Equivalence partitioning involves identifying “equivalence classes,” which are groups of
input values that exhibit similar behavior in the software. We group input values into these
classes and select representative test cases to validate the software’s functionality. This
approach helps us efficiently test the software with optimal coverage.
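As a minimal sketch of the idea (the `validate_age` function and its 18–60 valid range are assumed for illustration, not taken from the question):

```python
# Hypothetical function under test: accepts ages 18-60 inclusive.
def validate_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative value per equivalence class covers each class:
# below the valid range, inside it, and above it.
equivalence_classes = {
    "invalid_low":  5,    # any value < 18 behaves the same
    "valid":        35,   # any value in 18..60 behaves the same
    "invalid_high": 70,   # any value > 60 behaves the same
}

for name, value in equivalence_classes.items():
    expected = (name == "valid")
    assert validate_age(value) == expected, f"{name} failed for {value}"
```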
PART-B
7. Explain the testing life cycle.
The Software Testing Life Cycle (STLC) consists of the following six phases: 3M
1. **Requirement Analysis**
- Understand and analyze the requirements.
- Identify testable requirements.
- Prepare Requirement Traceability Matrix (RTM).
- Identify testing priorities and focus areas.
2. **Test Planning**
- Develop the test plan and strategy.
- Define scope and objectives of testing.
- Estimate effort and resources required.
- Identify testing tools and environment setup.
- Plan for risk management and mitigation.
3. **Test Case Development**
- Write detailed test cases based on the requirements.
- Prepare the test data needed for execution.
- Review and baseline the test cases.
4. **Test Environment Setup**
- Set up the hardware and software environment for testing.
- Perform a smoke test to verify environment readiness.
5. **Test Execution**
- Execute test cases and log the results.
- Report any defects or issues found during testing.
- Retest the fixed defects.
- Track the status of defects and test coverage.
6. **Test Closure**
- Evaluate test completion criteria and coverage.
- Prepare the test closure report and document lessons learned.
8. Explain Boundary Value Analysis. 5M
1. **Identification of Boundaries:**
- Determine the input and output boundaries of the software components.
- Identify minimum, maximum, just inside, just outside, typical values, and error values for
inputs.
2. **Testing Techniques:**
- **Boundary Value Analysis (BVA):** Focuses on creating test cases for the values at the
edges of the input range.
- **Equivalence Partitioning:** Often combined with BVA to divide input data into
partitions and test each partition's boundaries.
3. **Determine Boundaries:**
- Identify the lower and upper boundaries of the input range.
- Include values just outside the boundaries for negative testing.
4. **Create Test Cases:**
- Develop test cases for values at, just below, and just above the boundaries.
- Ensure coverage of typical and extreme input values.
5. **Analyze Results:**
- Compare actual outcomes with expected outcomes.
- Identify any discrepancies or defects.
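A hedged sketch of these steps in code (the 1–100 valid input range is an assumed example):

```python
# Hypothetical input range for illustration: valid values are 1..100.
LOWER, UPPER = 1, 100

def in_range(value: int) -> bool:
    return LOWER <= value <= UPPER

# Boundary-value test inputs: at, just below, and just above each boundary,
# plus a typical mid-range value.
bva_inputs = {
    LOWER - 1: False,             # just outside the lower boundary (negative test)
    LOWER:     True,              # at the lower boundary
    LOWER + 1: True,              # just inside the lower boundary
    (LOWER + UPPER) // 2: True,   # typical value
    UPPER - 1: True,              # just inside the upper boundary
    UPPER:     True,              # at the upper boundary
    UPPER + 1: False,             # just outside the upper boundary (negative test)
}

for value, expected in bva_inputs.items():
    assert in_range(value) == expected, f"boundary check failed at {value}"
```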
9. Write the difference between Weak Normal and Strong Normal Equivalence testing. 5M
### Key Differences
- **Test Case Selection**:
- Weak Normal: Selects one representative value from each equivalence class, so each
class is covered at least once (single-fault assumption).
- Strong Normal: Selects test cases from the Cartesian product of the equivalence
classes, so every combination of classes is covered (multiple-fault assumption).
- **Defect Detection**:
- Weak Normal: Less effective at detecting defects caused by interactions between inputs.
- Strong Normal: More effective at detecting defects that arise from combinations of
input classes.
- **Effort and Complexity**:
- Weak Normal: Lower effort, simpler to implement.
- Strong Normal: Higher effort, more comprehensive testing.
- **Use Case**:
- Weak Normal: Suitable for initial, quick validation of functionality.
- Strong Normal: Suitable for thorough testing, especially in critical systems where
boundary-related defects are crucial to catch.
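A hedged sketch of the difference in test-case counts (the two input fields and their classes are assumed purely for illustration):

```python
from itertools import product

# Assumed example: two inputs, each with its own equivalence classes,
# represented here by one member value per class.
age_classes    = [5, 35, 70]      # invalid-low, valid, invalid-high
income_classes = [0, 50_000]      # no income, positive income

# Weak normal: cover each class of each input at least once.
# Pad the shorter list so classes pair up; count = max(#classes).
weak_cases = list(zip(age_classes, income_classes + income_classes[-1:]))

# Strong normal: the Cartesian product, covering every combination.
strong_cases = list(product(age_classes, income_classes))

print(len(weak_cases))    # 3 test cases
print(len(strong_cases))  # 6 test cases
```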
10. Write the difference between Top-Down and Bottom-Up Integration. 5M
The difference between Top-down and Bottom-up Integration testing is that the former
moves downward, and the latter moves upward. Top-down focuses on designing and testing
the top module before merging it with sub-modules. Bottom-up, on the other hand, combines
the sub-modules to test them and moves upward thereafter.
| Characteristic | Top-Down Integration Testing | Bottom-Up Integration Testing |
| --- | --- | --- |
| Approach | Main module calls the sub-modules | Sub-modules are integrated with the top module(s) |
| Test doubles | Uses stubs as replacements for missing sub-modules | Uses drivers as replacements for the main/top modules |
| Suited to | Structured/procedure-oriented programming | Object-oriented programming |
| Defect localization | Significant for identifying errors at the top levels | Good for determining defects at the lower levels |
| Observability | Difficult to observe the output | Easy to observe and record the results |
| Complexity | Simple to perform | Highly complex and data-driven |
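A minimal sketch of the stub/driver idea from the table above (module names are hypothetical):

```python
# Top-down: the main module is real; a missing sub-module is replaced by a stub.
def payment_stub(amount: float) -> str:
    # Stub: returns a canned answer instead of calling the real payment module.
    return "APPROVED"

def checkout(amount: float, pay=payment_stub) -> str:
    # Real top-level logic under test; the stub stands in for the sub-module.
    return "order placed" if pay(amount) == "APPROVED" else "order failed"

assert checkout(99.0) == "order placed"

# Bottom-up: the sub-module is real; a driver calls it in place of the
# not-yet-integrated main module.
def real_payment(amount: float) -> str:
    return "APPROVED" if amount > 0 else "DECLINED"

def driver():
    # Driver: exercises the sub-module directly with test inputs.
    assert real_payment(10.0) == "APPROVED"
    assert real_payment(-1.0) == "DECLINED"

driver()
```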
11. Explain Model-based testing. 5M
Model-based testing is a technique in which test cases are derived from a model that
describes the expected behavior of the system under test. Test cases can be generated
from the model either offline (before execution) or online (during execution).
Significance of Model-Based Testing
1. Early Defect Detection: Model-Based Testing (MBT) makes it possible for testers to
find problems during the requirements or design phases by using model validation. By
doing this, flaws are kept from spreading to more expensive fixing development
phases.
2. Lower Maintenance Costs: Since test cases are derived from models, any
modifications to the system can immediately update the appropriate test cases, which
in turn can reflect any changes made to the models. This lessens the work and expense
of maintaining test cases, particularly in complex and large-scale systems.
3. Reusable Test Assets: Models and test cases developed during the software
development lifecycle can be utilized again for regression testing. This guarantees
uniformity in testing procedures across projects and helps optimize the return on
investment in testing efforts.
4. Encouragement of Agile and DevOps Methods: Because it facilitates quick
feedback loops and continuous testing, it works well with Agile and DevOps
approaches. As part of the CI/CD pipeline, test cases can be automatically developed
and run, giving developers quick feedback and guaranteeing the quality of
deliverables.
5. Enhanced Test Coverage: It generates test cases from models that describe all
potential system behaviors and scenarios, assisting in ensuring thorough test coverage.
This aids in the early detection of possible flaws throughout the development process.
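A hedged sketch of offline test-case generation from a model (the login state model here is an assumed toy example):

```python
# A tiny state model of a hypothetical login screen:
# (state, event) -> next state
model = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
}

# Offline generation: derive one test case per transition in the model.
test_cases = [
    {"start": s, "event": e, "expected": nxt}
    for (s, e), nxt in model.items()
]

def system_under_test(state: str, event: str) -> str:
    # Stand-in for the real application; in MBT the generated cases
    # would be executed against the actual system.
    return model[(state, event)]

for tc in test_cases:
    assert system_under_test(tc["start"], tc["event"]) == tc["expected"]
```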
12. Write a retrospective on MDD versus TDD. 5M
Approach and Focus: 1M
MDD: Focuses on high-level design and automated code generation, reducing manual
coding efforts.
TDD: Emphasizes thorough testing and incremental development, with a strong focus
on code quality and functionality.
Development Process: 2M
MDD: Starts with creating models that are later transformed into code. The process is
more top-down.
TDD: Starts with writing tests, followed by code implementation to pass the tests.
The process is more bottom-up.
Tool Support: 1M
MDD: Relies on modeling and code generation tools (e.g., UML tools, EMF, etc.).
TDD: Relies on testing frameworks and tools (e.g., JUnit, NUnit, etc.).
Flexibility and Adaptability: 1M
MDD: May struggle with rapid changes due to its reliance on predefined models.
TDD: More adaptable to changes, as it encourages incremental development and
continuous refactoring.
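As a small hedged illustration of the TDD half of this comparison (the `add` function is a toy example; with pytest the test would be discovered and run automatically):

```python
# Step 1 (red): the test is written before any implementation exists.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Step 2 (green): the simplest implementation that makes the test pass.
def add(a: int, b: int) -> int:
    return a + b

# Step 3 (refactor): clean up while the test keeps the behavior pinned down.
test_add()
```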
PART-C
13. Discuss the test cases for the Commission problem. 10M (EACH TEST CASE 1M)
### Test Cases for Commission Problem
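The expected test cases depend on the formulation used; as a hedged sketch, assuming the classic textbook version of the problem (lock $45, stock $30, barrel $25; commission of 10% on sales up to $1000, 15% on the next $800, 20% above $1800), the function under test and a few boundary-value cases look like this:

```python
# Sketch of the classic commission problem under the assumptions above.
def commission(locks: int, stocks: int, barrels: int) -> float:
    sales = 45 * locks + 30 * stocks + 25 * barrels
    if sales <= 1000:
        return 0.10 * sales
    elif sales <= 1800:
        return 100 + 0.15 * (sales - 1000)    # 10% of 1000 = 100
    else:
        return 220 + 0.20 * (sales - 1800)    # 100 + 15% of 800 = 220

# Boundary-value test cases target the $1000 and $1800 commission thresholds.
assert abs(commission(1, 1, 1) - 10.0) < 1e-9       # sales = 100 (first tier)
assert abs(commission(10, 10, 9) - 97.5) < 1e-9     # sales = 975 (below $1000)
assert abs(commission(20, 20, 20) - 260.0) < 1e-9   # sales = 2000 (top tier)
```

A full answer would tabulate roughly ten such cases, one per mark, at and around each sales threshold.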
14. Discuss the guidelines for Boundary Value Testing. 10M (EACH POINT 1M)
### Guidelines for Boundary Value Testing in Software Testing
1. **Identify Boundaries:**
- Clearly define the input and output boundaries for the application under test.
- Determine the lower and upper limits for each input parameter.
15. List the characteristics of Data Flow Testing. 10M (EACH POINT 1M)
### Characteristics of Data Flow Testing in Software Testing
1. **Focus on Variable Usage:**
- Tracks where variables receive values (definitions) and where those values are used
along program paths.
2. **Detection of Anomalies:**
- Identifies potential data anomalies such as uninitialized variables, unused variables, and
improperly modified variables.
3. **Path-Based Testing:**
- Involves testing specific paths in the program where variables are defined, used, and
potentially redefined.
4. **Coverage Criteria:**
- Employs specific coverage criteria such as definition-use (du) pairs, definition-clear (dc)
paths, and all-paths testing to ensure thorough testing of data flow.
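A short annotated sketch of definition-use (du) pairs (the `billing` function is a made-up example):

```python
def billing(quantity: int, price: float) -> float:
    total = quantity * price      # definition of `total`
    if quantity > 10:
        total = total * 0.9       # use of `total`, then a redefinition
    return total                  # use of `total`

# Data flow testing selects paths so every definition reaches each of its
# uses: here both the quantity <= 10 path and the quantity > 10 path must
# be executed to cover all du-pairs of `total`.
assert billing(5, 2.0) == 10.0    # def -> use, no redefinition
assert billing(20, 1.0) == 18.0   # def -> redefinition -> use
```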
16. Discuss the top ten best practices for software testing excellence. 10M (EACH POINT 1M)
Achieving excellence in software testing involves implementing best practices that ensure the
delivery of high-quality, reliable, and efficient software. Here are the top ten best practices
for software testing excellence:
**Detailed Test Plan:** Develop a comprehensive test plan that outlines the testing strategy,
scope, objectives, resources, schedule, and deliverables. This plan should cover all types of
testing, including functional, non-functional, regression, and acceptance testing.
**Automate Where Possible:** Implement automated testing for repetitive and regression
test cases. Automated tests increase efficiency, ensure consistency, and allow testers to focus
on more complex and exploratory testing.
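A hedged sketch of such an automated regression test in pytest style (the `slugify` function and its expected values are illustrative, not from the question):

```python
import pytest

def slugify(title: str) -> str:
    # Illustrative function under regression test.
    return title.strip().lower().replace(" ", "-")

# Parametrized cases run identically on every execution, which is
# exactly what makes them good automation candidates.
@pytest.mark.parametrize("title,expected", [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("already-slugged", "already-slugged"),
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```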
**Prioritize by Risk:** Focus testing efforts on areas of the application that pose the highest
risk to the business. Identify and prioritize critical functionalities and high-impact areas to
ensure they are thoroughly tested.
**TDD and BDD Practices:** Adopt TDD to write tests before code, ensuring that the
software meets its requirements from the outset. Use BDD to improve collaboration between
developers, testers, and business stakeholders by defining tests in natural language.
**CI/CD Pipelines:** Integrate testing into the CI/CD pipeline to ensure that code changes
are automatically tested before being merged and deployed. This practice helps catch defects
early and ensures faster delivery of high-quality software.
**Realistic Test Data:** Use realistic and diverse test data that represents actual usage
scenarios. Anonymize and sanitize production data to protect sensitive information while
ensuring test accuracy.
**Non-Functional Testing:** Include performance, load, stress, and security testing in your
test strategy. Ensuring that the application meets performance benchmarks and is secure
against vulnerabilities is crucial for overall software quality.
### Object-Oriented Integration Testing
1. **Integration Levels**:
- **Class Level**: Testing interactions between methods within a single class.
- **Cluster Level**: Testing interactions between groups of related classes (also known as
clusters).
- **System Level**: Testing interactions across the entire system, involving multiple
clusters or subsystems.
2. **Testing Approaches**:
- **Bottom-Up Integration Testing**: Starts with testing the lowest-level components
(classes or clusters) and progressively integrates and tests higher-level components.
- **Top-Down Integration Testing**: Starts with the top-level components and integrates
and tests lower-level components progressively.
- **Sandwich Integration Testing**: Combines both bottom-up and top-down approaches
to integrate and test components simultaneously at different levels.
3. **Types of Tests**:
- **Unit Tests**: Focus on individual methods and classes to ensure they function correctly
in isolation.
- **Integration Tests**: Focus on the interactions between objects and classes to ensure
they work together as expected.
- **System Tests**: Validate the entire system's behavior, including interactions across
different clusters and subsystems.
1. **Use Automated Testing Tools**: Leverage tools and frameworks like JUnit (for Java),
NUnit (for .NET), and pytest (for Python) to automate integration tests.
2. **Mock Objects and Stubs**: Use mock objects and stubs to simulate the behavior of
complex objects and external dependencies, allowing for isolated testing of interactions
(see the sketch after this list).
3. **Continuous Integration**: Integrate testing into the continuous integration (CI) pipeline
to ensure that integration tests are run frequently, catching issues early.
4. **Incremental Testing**: Integrate and test components incrementally to manage
complexity and identify integration issues early in the development process.
5. **Thorough Test Coverage**: Ensure comprehensive test coverage by designing tests that
cover various interaction paths, use cases, and states.
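A short sketch of point 2 using Python's standard unittest.mock (the payment gateway dependency is hypothetical):

```python
from unittest.mock import Mock

def place_order(gateway, amount: float) -> str:
    # Code under test: interacts with an external payment dependency.
    return "placed" if gateway.charge(amount) else "failed"

# The mock stands in for the real external service, so the interaction
# can be tested in isolation and then verified.
gateway = Mock()
gateway.charge.return_value = True

assert place_order(gateway, 25.0) == "placed"
gateway.charge.assert_called_once_with(25.0)
```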
### Testing Task Interaction and Concurrency
1. **Task Scheduling**:
- **Preemptive Scheduling**: The operating system can interrupt and switch tasks,
ensuring that high-priority tasks get CPU time. Common in modern operating systems.
- **Cooperative Scheduling**: Tasks voluntarily yield control, which can lead to simpler
designs but risks one task hogging the CPU.
2. **Synchronization Mechanisms**:
- **Mutexes and Locks**: Ensure that only one thread or task can access a resource at a
time, preventing race conditions (see the sketch after this list).
- **Semaphores**: Signaling mechanisms that control access to shared resources.
- **Monitors**: High-level synchronization constructs that manage concurrent access to
objects.
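A minimal sketch of the mutex idea: a lock guarding a shared counter so concurrent increments do not race (toy example):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:            # only one thread mutates the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the lock this total may come up short due to lost updates.
assert counter == 400_000
```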
1. **Concurrency Testing**:
- Focuses on ensuring that concurrent tasks or threads execute correctly without interfering
with each other.
- Use of stress tests and load tests to simulate high levels of concurrency and identify
potential issues (a sketch follows this list).
2. **Deadlock Testing**:
- Detect situations where tasks or threads are unable to proceed due to mutual resource
dependency.
- Static analysis tools, runtime detection, and deadlock prevention algorithms can help
identify and prevent deadlocks.
3. **Performance Testing**:
- Measure the impact of task interactions on overall system performance.
- Identify bottlenecks caused by contention for CPU or other resources.
- Use profiling tools to analyze CPU usage, task switching, and resource contention.
4. **Correctness Testing**:
- Ensure that tasks interact correctly, maintaining data integrity and correct sequencing of
operations.
- Use of assertions, invariants, and formal methods to verify correctness.
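A hedged sketch combining the stress-testing and correctness points above: many threads exercise a shared resource, then an invariant is asserted (the bank-account example is made up):

```python
import threading

class Account:
    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()

    def transfer_in(self, amount: int) -> None:
        with self._lock:  # synchronization mechanism under test
            self.balance += amount

def stress_test(iterations: int = 20_000, workers: int = 4) -> None:
    account = Account(0)
    threads = [
        threading.Thread(
            target=lambda: [account.transfer_in(1) for _ in range(iterations)]
        )
        for _ in range(workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Correctness invariant: no updates were lost under contention.
    assert account.balance == iterations * workers

stress_test()
```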
1. **Testing Frameworks**:
- Use of frameworks like JUnit for Java, NUnit for .NET, and pytest for Python to write and
manage tests.
- Specialized tools for concurrency testing, such as ConTest for Java.
2. **Incremental Testing**:
- Test interactions incrementally, starting with small, isolated components and gradually
integrating more complex interactions.
3. **Thorough Documentation**:
- Document interactions, synchronization mechanisms, and potential concurrency issues to
aid in understanding and testing.
4. **Continuous Integration**:
- Integrate testing into the continuous integration (CI) pipeline to ensure that interaction
tests are run frequently, catching issues early in the development process.