
Answers

a) Define Static and Dynamic Testing

1. Static Testing:

o Testing that involves checking the software without executing the code.

o Examples: Code reviews, walkthroughs, and inspections.

2. Dynamic Testing:

o Testing that involves executing the software to check its functionality and behavior.

o Examples: Unit testing, integration testing, and system testing.
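The contrast can be sketched with a minimal unit test: a static check (review, walkthrough, or lint) only reads the source, while a dynamic test actually executes it. The `add()` function here is a hypothetical example, not from the text.

```python
# Hypothetical code under test. A static check (e.g. a code review or a
# linter) would inspect this source without ever running it.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

# Dynamic testing: the code is executed and its observed behavior is
# checked against expected results.
def test_add():
    assert add(2, 3) == 5       # unit test: typical inputs
    assert add(-1, -1) == -2    # unit test: negative inputs

test_add()
print("dynamic tests passed")
```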

b) State Any Two Examples of Integration Testing

1. Top-Down Integration Testing:

o Testing is done by integrating modules starting from the top-level module and
moving down.

2. Bottom-Up Integration Testing:

o Testing starts from the lower-level modules and progresses upward, integrating
higher-level modules.

c) Enlist Any Two Activities Involved in Test Planning

1. Defining the Scope of Testing:

o Determine what features, modules, and functionalities will be tested.

2. Resource Allocation:

o Assign testers, tools, and infrastructure required for testing.

d) Enlist Objectives of Software Testing

1. Ensure that the software meets the requirements and works as expected.

2. Identify and fix defects to improve software quality.

3. Validate that the software performs well in different environments.

4. Ensure that the product is ready for release without critical issues.

e) Define Defect

A defect is a flaw or deviation in the software that causes it to behave incorrectly or not meet its
requirements.
f) State Any Four Advantages of Using Tools

1. Improved Accuracy:

o Tools minimize human errors during testing.

2. Faster Execution:

o Automated tools speed up repetitive tasks like regression testing.

3. Enhanced Reporting:

o Tools provide detailed and consistent reports for test results.

4. Reusable Test Scripts:

o Scripts created for automation can be reused across multiple test cycles.

g) Define Bug, Error, Fault, and Failure

1. Bug:

o An issue found during testing that causes incorrect or unintended behavior.

2. Error:

o A mistake made by a programmer that leads to incorrect code.

3. Fault:

o A defect in the system caused by an error in the code or design.

4. Failure:

o The inability of the software to perform a required function under specified
conditions.

a) Describe Boundary Value Analysis with Suitable Example

Boundary Value Analysis (BVA):

• BVA is a testing technique that focuses on testing the boundaries of input values.

• Errors are more likely to occur at the "boundaries" than in the middle of input ranges.

• Test cases are created for values just inside, at, and just outside these boundaries.

Example:
Consider a system that accepts input values between 1 and 100:

1. Valid Range: 1 to 100.

2. Boundary Values:

o Lower Boundary: 0 (outside), 1 (valid), 2 (inside).

o Upper Boundary: 99 (inside), 100 (valid), 101 (outside).

Test Cases:

1. Input = 0 → Should fail (invalid).

2. Input = 1 → Should pass (valid).

3. Input = 2 → Should pass (valid).

4. Input = 99 → Should pass (valid).

5. Input = 100 → Should pass (valid).

6. Input = 101 → Should fail (invalid).

BVA ensures the software handles boundary conditions effectively.
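The boundary cases above can be exercised with a short script. The `is_valid()` validator is a hypothetical stand-in for the system accepting inputs from 1 to 100.

```python
# Hypothetical validator for the 1..100 input range used in the example.
def is_valid(value):
    return 1 <= value <= 100

# Boundary Value Analysis: test just outside, at, and just inside each
# boundary, mapping each input to its expected outcome.
boundary_cases = {
    0: False,    # below lower boundary -> invalid
    1: True,     # at lower boundary    -> valid
    2: True,     # just inside lower boundary -> valid
    99: True,    # just inside upper boundary -> valid
    100: True,   # at upper boundary    -> valid
    101: False,  # above upper boundary -> invalid
}

for value, expected in boundary_cases.items():
    result = is_valid(value)
    assert result == expected, f"Input {value}: expected {expected}, got {result}"
print("all boundary test cases passed")
```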

b) Differentiate Between Drivers and Stubs (Any Four Points)

| Aspect     | Drivers                                        | Stubs                                       |
|------------|------------------------------------------------|---------------------------------------------|
| Definition | A program used to call a module for testing.   | A program that simulates a module's behavior. |
| Purpose    | Acts as a caller for the tested module.        | Acts as a called module during testing.     |
| Use Case   | Used in bottom-up integration testing.         | Used in top-down integration testing.       |
| Dependency | Replaces the higher-level module.              | Replaces the lower-level module.            |
| Example    | A driver calls a sorting function for testing. | A stub mimics a database response.          |
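The two roles can be sketched in a few lines: a driver calls the module under test (bottom-up), while a stub stands in for a module that is not yet available (top-down). All function names here are hypothetical illustrations.

```python
# Bottom-up case: sort_numbers() is the lower-level module under test,
# and driver() plays the role of its (not yet written) caller.
def sort_numbers(numbers):
    """Lower-level module under test."""
    return sorted(numbers)

def driver():
    """Driver: calls sort_numbers() and checks its output."""
    result = sort_numbers([3, 1, 2])
    assert result == [1, 2, 3]
    return result

# Top-down case: database_stub() mimics a database module's response
# so the higher-level report_module() can be tested without a real DB.
def database_stub(query):
    """Stub: returns canned data in place of the real database module."""
    return [{"id": 1, "name": "test user"}]

def report_module(fetch=database_stub):
    """Higher-level module under test, wired to the stub."""
    rows = fetch("SELECT * FROM users")
    return len(rows)

driver()                     # driver exercises the lower-level module
assert report_module() == 1  # stub replaces the lower-level database
```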

c) State the Contents of Test Summary Reports Used in Test Reporting

A Test Summary Report provides an overview of the testing process and results. Its contents include:

1. Test Objectives:

o Goals and scope of testing.

2. Testing Scope:

o Features/modules covered in testing.

3. Test Metrics:

o Total test cases executed, passed, failed, and skipped.

4. Defect Summary:

o Details of identified defects categorized by severity (critical, major, minor).

5. Environment Details:

o Description of hardware, software, and configurations used for testing.

6. Key Observations:
o Notable issues, bottlenecks, or risks discovered.

7. Conclusion:

o Assessment of the system’s readiness for release.

8. Recommendations:

o Suggestions for further testing or fixes, if needed.

d) State Any Eight Limitations of Manual Testing

1. Time-Consuming:

o Manual testing takes longer, especially for repetitive tasks like regression testing.

2. Error-Prone:

o Human errors can occur due to fatigue or oversight.

3. Not Suitable for Large Projects:

o Testing large and complex applications manually becomes challenging.

4. Lack of Reusability:

o Test cases cannot be reused easily compared to automation scripts.

5. Limited Test Coverage:

o Manual testing often results in incomplete coverage due to time and resource
constraints.

6. Difficult to Perform Load Testing:

o Simulating multiple users or large-scale interactions is impractical manually.

7. Inconsistent Results:

o Results may vary when executed by different testers due to interpretation
differences.

8. No Support for Continuous Testing:

o Manual testing does not integrate well into continuous integration/continuous
deployment (CI/CD) pipelines.

9. Inefficient for Regression Testing:

o Repeating the same test cases after every code change is tedious and inefficient.

10. Lack of Detailed Reporting:

o Manual methods may not provide the detailed, consistent reports that tools
generate.
