NPTEL

The document outlines essential concepts in software testing, including the importance of testing, various testing phases, and methodologies such as black-box and white-box testing. It emphasizes the significance of structured test case design techniques like equivalence class partitioning and boundary value analysis, as well as integration and regression testing strategies. Additionally, it addresses challenges in object-oriented testing and proposes model-based strategies for effective testing.

Week 1

1. Fundamentals of Software Testing

• Faults & Failures: A failure occurs when a fault (or bug) manifests during execution. Not all
faults lead to failures.

• Errors, Faults, and Failures: Defined by IEEE 1044 (1993, revised in 2010) to distinguish
between them.

• Bug Distribution:

o 60% originate from specification/design errors.

o 40% occur due to implementation errors.

• Bug Reduction Methods:

o Reviews, testing, formal specification, verification, structured development processes.

2. Importance of Software Testing

• Cost of Poor Testing: Example of Ariane 5 rocket failure ($1 billion loss) due to an unhandled
exception during type conversion.

• Effort Required: Software testing takes 50% of development effort but only 10% of
development time due to parallel testing phases.

3. Testing Concepts and Activities

• Verification vs. Validation:

o Verification: Ensures each development phase produces correct outputs.

o Validation: Ensures the final product meets requirements.

• Testing Phases:

o Unit Testing: Individual modules tested independently.

o Integration Testing: Modules combined and tested for proper interaction.

o System Testing: Entire system tested against requirements.

o Regression Testing: Ensures new changes don’t break existing functionality.

• Types of System Testing:

o Alpha Testing: Internal team testing before release.

o Beta Testing: End-user testing before final deployment.

o Performance Testing: Ensuring non-functional requirements like response time, stress handling, and recovery.

4. Test Case Design & Strategies


• Test Case Definition: A test case consists of Input (I), System State (S), and Expected Output
(O).

• Negative Test Cases: Ensure software handles invalid inputs gracefully.

• Test Suite: A collection of test cases.

• Test Plan: Documents testing scope, strategy, criteria, schedule, and resources.

• Black-box Testing:

o Based on functional requirements.

o No knowledge of internal code structure.

o Techniques: Equivalence Class Partitioning, Boundary Value Analysis, Decision Table Testing.

• White-box Testing:

o Based on internal logic of the software.

o Requires code knowledge.

• Gray-box Testing:

o Combination of black-box and white-box testing.

5. Testing Life Cycle Models

• Waterfall Model: Testing happens in the later stages.

• V-Model: Testing activities run parallel to development phases.

o Strengths: High reliability, verification & validation throughout the cycle.

o Weaknesses: No support for iterations or late requirement changes.

6. Test Automation & Advanced Concepts

• Evolution of Test Automation:

o 1960-1990: Manual Testing

o 1990-2000: Capture & Replay Techniques

o 2000-Present: Model-based Automated Testing

• Pesticide Effect (Pesticide Paradox): Running the same tests repeatedly finds fewer and fewer new bugs over time.

• Defect Clustering: Most defects reside in a small percentage of modules.

Introduction to Software Testing

• Testing helps identify faults, defects, and bugs that may lead to failures.

• IEEE standards define errors and faults differently:

• Error – A human mistake made during programming.

• Fault (Bug, Defect) – Incorrect code resulting from an error.

• Failure – System not functioning as expected due to faults.

2. Importance of Testing

• Software testing takes 50% of development effort but only 10% of development time.

• Bugs originate 60% from specification/design and 40% from coding.

• Example of impact: Ariane 5 rocket failure due to an undetected software exception bug
(costing $1 billion).

3. Testing Process

• Steps:

• Input test data.

• Observe outputs.

• If failure occurs, debug and correct.

• How are bugs reduced?

• Review

• Testing

• Formal verification

• Better development processes

4. Verification vs. Validation

• Verification ("Are we building it right?")

• Ensures each phase conforms to the previous one.

• Done by developers (static analysis, reviews).

• Validation ("Have we built the right thing?")

• Ensures final product meets requirements.

• Done by testers (dynamic analysis, executing tests).

5. Levels of Testing

• Unit Testing – Testing individual modules in isolation.

• Integration Testing – Testing module interactions.

• System Testing – Testing the entire system.

• Regression Testing – Checking if new changes introduce bugs.

• Types of system testing:

• Functional testing – Checks functionality.

• Performance testing – Response time, stress, recovery.

• Alpha testing – Internal users.

• Beta testing – External users.

• User acceptance testing (UAT) – Final validation.

6. V-Model of Testing

• A variant of the waterfall model emphasizing verification and validation (V&V).

• Testing activities are planned in parallel with development.

• Strengths:

• Well-structured and systematic.

• Each deliverable is testable.

• Weaknesses:

• No phase overlaps.

• Rigid and does not support requirement changes.

7. Testing Techniques

• Black-Box Testing (Functional Testing)

• No internal code knowledge needed.

• Methods:

• Equivalence class partitioning

• Boundary value testing

• Decision table testing

• White-Box Testing (Structural Testing)

• Tests internal logic and code structure.

• Uses control flow and data flow analysis.

• Grey-Box Testing

• Combination of black-box and white-box testing.

8. Test Cases and Test Suites

• Test Case: A set of inputs, execution conditions, and expected outputs.

• Test Suite: A collection of test cases.

• Negative Test Cases: Ensure software handles invalid inputs gracefully.

9. Pesticide Paradox

• Repeatedly running the same tests finds fewer new bugs.


• Solution: Periodically change test cases and techniques.

10. Test Automation

• Evolution:

• Manual testing → Capture & replay → Model-based testing → Fully automated testing.

• Helps improve efficiency but has limitations.

Week 2
1. Equivalence Class Partitioning
• Concept:
o Divide input values into equivalence classes where each class is expected to
behave similarly.
o Testing one value from each class is as effective as testing all values from
the class.
• Guidelines:
o If input is a range (e.g., 1-5000), define one valid and two invalid
equivalence classes.
o If input is a set of values (e.g., {a, b, c}), define one valid and one invalid
equivalence class.
o If input is Boolean, define one valid and one invalid class.
• Example:
o A function computing the square root of numbers between 1-5000
o Equivalence classes:
▪ Valid: 1 to 5000
▪ Invalid: Less than 1, greater than 5000
o Test cases: { -5, 500, 6000 }

2. Boundary Value Analysis (BVA)


• Concept:
o Errors often occur at the boundaries of equivalence classes.
o Test cases should focus on min, just below min, nominal, just above max,
and max values.
• Example:
o Testing square root function for 1 to 5000
o Test values: { 0, 1, 2, 4999, 5000, 5001 }
• Special Cases:
o Robustness Testing: Extends BVA to test values beyond the boundary.
o Single Fault Assumption: Only one variable is tested at extreme values while
others remain nominal.

3. Combinatorial Testing
• Concept:
o Some bugs appear only when multiple inputs interact in a specific way.
o Instead of testing all possible combinations, pair-wise testing can be used.
• Example:
o A mobile app must work across different screen sizes, operating systems,
and versions.
o Instead of testing every combination (which is impractical), pair-wise testing
selects the most relevant combinations.

4. Decision Table-Based Testing


• Used for:
o Systems with complex logical conditions.
o Example: A bank pays different interest rates based on deposit period.
• Structure:
o Conditions: Different input values.
o Actions: Expected outcomes.
o Rules: Test cases.
• Example:

Rule  Deposit < 1 Year  Deposit > 3 Years  Action
R1    Yes               No                 6%
R2    No                Yes                8%

5. White-Box Testing
• Concept:
o Tests the internal structure of the code.
o Focuses on coverage of code paths, conditions, and logic.
• Types:
1. Statement Coverage – Ensures every line of code is executed at least once.
2. Branch Coverage – Ensures both true & false conditions of every decision point are
tested.
3. Path Coverage – Ensures every possible execution path is tested.
4. Condition Coverage – Ensures each boolean condition takes true & false values.
• Example:
while (x != y) {
    if (x > y) x = x - y;
    else y = y - x;
}
o Statement coverage: Test {(x=3,y=3), (x=4,y=3), (x=3,y=4)}
o Branch coverage: Test {(x=3,y=3), (x=3,y=2), (x=4,y=3), (x=3,y=4)}

Final Summary
This document provides a deep understanding of systematic test case design techniques. It
covers:
• Equivalence Class Partitioning – Grouping input values into classes.
• Boundary Value Testing – Focusing on input boundaries.
• Combinatorial Testing – Selecting test cases for input interactions.
• Decision Table Testing – Handling conditional logic.
• White-Box Testing – Ensuring code coverage at different levels.

Week 3
Important Concepts from the Document on Software Testing

1. Condition Testing

• Basic Condition Testing: Ensures every atomic condition (Boolean expression) has both
True and False values.

• Condition/Decision Coverage: Ensures both atomic conditions and decisions evaluate to True and False.

• Multiple Condition Coverage (MCC): Tests all possible combinations of truth values.

2. Shortcomings of Condition Testing

• Redundancy: Short-circuit evaluation (compiler- or language-dependent) means later conditions may never actually be evaluated, making some condition tests redundant.

• Unachievable Coverage: Some conditions cannot take both True and False due to
dependencies.

3. Modified Condition/Decision Coverage (MC/DC)

• Goal: Reduce the number of test cases while ensuring each condition independently
affects the decision.

• Requirements:

1. Each decision must take True and False values.

2. Each condition in a decision must take True and False values.

3. Each condition must independently influence the outcome.

4. Path Testing

• Path Coverage: Ensures all linearly independent paths are executed at least once.

• Control Flow Graph (CFG): A representation of how control flows in the program.

• Cyclomatic Complexity (McCabe’s Metric):

o Formula: V(G) = E - N + 2 (Edges - Nodes + 2)

o Equals the number of linearly independent paths, which is the number of test cases needed for basis path testing.

5. Data Flow Testing

• Focuses on definitions (DEF) and uses (USES) of variables.

• Definition-Use (DU) Chain: Links each definition of a variable to the uses that definition can reach; test cases should exercise each def-use pair.

6. Mutation Testing
• Idea: Introduce small faults (mutants) in a program and check if the test suite catches
them.

• Mutation Operators:

o Altering arithmetic operators.

o Replacing Boolean expressions.

o Changing constants or variables.

• Equivalent Mutants: Mutants that are semantically identical to the original program and therefore cannot be killed by any test.

7. Integration Testing

• Big Bang Testing: All modules are integrated at once and tested.

• Top-Down Approach: Higher-level modules are tested first.

• Bottom-Up Approach: Lower-level modules are tested first.

• Mixed Approach: Combination of both.

Week 4
The document provides an in-depth understanding of software testing techniques, focusing on
integration testing, system testing, regression testing, object-oriented testing, and state-based
testing. Below is a structured summary of key concepts:

1. Integration Testing

Integration testing focuses on combining individual modules and verifying their interactions.
Approaches include:

• Big Bang Approach: Combines all modules and tests them together.

o Issue: Hard to localize errors; debugging is expensive.

• Bottom-up Approach: Tests lower-level modules first.

o Issue: Requires test drivers, and system-level functions are hard to observe until late.

• Top-down Approach: Tests higher-level modules first.

o Issue: Stub design becomes complex as lower modules are missing.

• Mixed (Sandwiched) Approach: Combines top-down and bottom-up approaches.

o Advantage: Requires fewer stubs and drivers.


2. System Testing

System testing validates the fully developed software against its requirements. Types include:

• Alpha Testing: Conducted by the development team.

• Beta Testing: Performed by selected customers.

• Acceptance Testing: Conducted by the customer before deployment.

System Testing Approaches

• Functional Testing: Ensures the system meets functional requirements.

• Performance Testing: Checks non-functional requirements like:

o Stress Testing: Tests system under extreme conditions.

o Load Testing: Assesses system performance under various loads.

o Volume Testing: Evaluates large data handling.

o Compatibility Testing: Ensures software works across environments.

o Recovery Testing: Examines system response to failures.

o Usability Testing: Validates user interface and experience.

3. Regression Testing

Regression testing ensures that code changes do not introduce new bugs. Key tasks include:

• Test Revalidation: Ensuring tests remain valid.

• Test Selection: Identifying tests covering modified code.

• Test Minimization: Removing redundant tests.

• Test Prioritization: Running critical tests first.

Automated Regression Testing

• Uses tools like capture and replay for repeated testing.

• Issues: Test failures due to changes in time/date, test maintenance complexity.

4. Object-Oriented Testing

Testing OO programs is more complex due to encapsulation, inheritance, and polymorphism.

Key Challenges

• Encapsulation: Hard to access internal state; solved by state reporting methods.

• Inheritance: Retesting required when a subclass changes inherited behavior.

• Polymorphism: Each binding requires separate testing.


• Dynamic Binding: Function implementation unknown until runtime, making test coverage
complex.

Testing Strategies

• State-Based Testing: Focuses on state transitions rather than control flow.

• Integration Strategies:

o Thread-based Testing: Tests classes responding to an input/event.

o Use-based Testing: Tests classes required for a use case.

o Cluster Testing: Tests a group of collaborating classes.

5. Test Coverage Analysis

Defines the thoroughness of testing. Traditional coverage methods may not apply to OO programs,
requiring new strategies.

Conclusion

This document provides an extensive view of software testing methodologies, focusing on integration, regression, and object-oriented testing challenges. It highlights issues with traditional approaches and proposes model-based strategies for testing OO applications effectively.
