Software Testing: Verification
Software Testing
Software testing is the process of executing a program with the specific intent of detecting errors
before delivering it to the end user. It acts as a critical quality control measure to ensure the
reliability and correctness of the software product.
Testing reveals the presence of defects but cannot guarantee their absence. It helps uncover
issues in functionality, performance, integration, and compliance with user requirements.
Software is tested by developers, independent test groups (ITGs), and sometimes end users.
While developers are familiar with the design and implementation, ITGs bring a fresh, unbiased
perspective to the testing process.
A strategic approach to testing incorporates structured methods for planning and executing
tests. Various strategies proposed in literature share common characteristics:
● Formal technical reviews are essential before testing begins, reducing potential errors.
● Testing begins at the component level and proceeds outward to system-level integration.
● Testing and debugging are distinct; debugging is necessary when tests uncover defects.
● Misconceptions exist, such as the belief that only testers should test or that testing
begins only after development.
● Independent Test Groups (ITGs) remove bias and collaborate with developers
throughout the project.
● For object-oriented software, testing focuses on classes, attributes, methods, and their
collaboration.
● In practice, testing is "complete" when deadlines or budgets are exhausted; ideally,
completion should be driven by quality metrics and reliability models.
Unit Testing
Unit testing verifies the smallest unit of software design, typically an individual component or
module, exercised in isolation from the rest of the system.
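As a minimal sketch (the apply_discount function below is a hypothetical example, not from
these notes), a unit test exercises one component in isolation using Python's standard unittest
module:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertAlmostEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```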
Integration Testing
● Big Bang Approach: Combine all components at once — risky and less efficient.
● Incremental Strategy: Integrate and test one component at a time — more systematic.
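A short sketch of the incremental strategy: here a hypothetical OrderService is integrated and
tested before its real payment gateway is available, with a stub (via unittest.mock) standing in
for the missing component:

```python
import unittest
from unittest import mock

# Hypothetical component under integration: an order service that
# depends on a payment gateway which is not yet available.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "CONFIRMED" if self.gateway.charge(amount) else "DECLINED"

class IncrementalIntegrationTest(unittest.TestCase):
    def test_order_service_with_gateway_stub(self):
        # The stub stands in for the real gateway until it is integrated.
        gateway_stub = mock.Mock()
        gateway_stub.charge.return_value = True
        service = OrderService(gateway_stub)
        self.assertEqual(service.place_order(49.99), "CONFIRMED")
        gateway_stub.charge.assert_called_once_with(49.99)

if __name__ == "__main__":
    unittest.main()
```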
Regression Testing
● Re-runs previously executed tests after changes to ensure no new errors are introduced.
● Includes re-execution of a subset of tests that exercise the changed components and any
functions they may affect.
Smoke Testing
Smoke testing is an integration approach in which the software is rebuilt frequently and a set
of tests exercises each build's critical functionality, exposing show-stopping errors early.
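A minimal sketch of a smoke suite, assuming two hypothetical critical-path checks
(database_reachable and homepage_renders); in practice these would open real connections
and issue real requests:

```python
import unittest

# Hypothetical "critical path" checks assumed for illustration.
def database_reachable() -> bool:
    return True  # placeholder: a real check would open a connection

def homepage_renders() -> bool:
    return True  # placeholder: a real check would issue an HTTP request

class SmokeTest(unittest.TestCase):
    """Fast checks run on every build; any failure blocks further testing."""

    def test_database_is_reachable(self):
        self.assertTrue(database_reachable())

    def test_homepage_renders(self):
        self.assertTrue(homepage_renders())

if __name__ == "__main__":
    unittest.main()
```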
Black-Box Testing
Black-box testing examines software functionality without considering internal code structure.
Test cases are derived from the specification and probe questions such as:
● How does the system behave under extreme data rates, volumes, or input combinations?
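For example, a black-box test derives cases purely from a stated specification. The is_eligible
function and its 18-to-65 rule below are assumptions for illustration; the tests probe values at
and around the specified boundaries:

```python
import unittest

# Hypothetical specification: ages from 18 to 65 inclusive are accepted.
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

class BoundaryValueTest(unittest.TestCase):
    """Black-box tests derived only from the stated specification."""

    def test_values_at_and_around_the_boundaries(self):
        cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
        for age, expected in cases.items():
            with self.subTest(age=age):
                self.assertEqual(is_eligible(age), expected)

if __name__ == "__main__":
    unittest.main()
```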
White-Box Testing
White-box testing evaluates internal code paths, logic conditions, and loops.
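A minimal white-box sketch: test cases are chosen by reading the code, so that every branch of
the hypothetical classify function executes at least once:

```python
import unittest

# Hypothetical function whose internal branches drive test selection.
def classify(n: int) -> str:
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

class BranchCoverageTest(unittest.TestCase):
    """White-box tests: one case per branch so every path executes."""

    def test_negative_branch(self):
        self.assertEqual(classify(-5), "negative")

    def test_zero_branch(self):
        self.assertEqual(classify(0), "zero")

    def test_positive_branch(self):
        self.assertEqual(classify(7), "positive")

if __name__ == "__main__":
    unittest.main()
```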
Validation Testing
Validation testing confirms that the software meets business and user requirements.
Validation Activities
● Configuration Audit: Ensure all components are accounted for and correctly
configured.
● Alpha/Beta Testing: Alpha tests are conducted at the developer's site by representative end
users in a controlled environment; beta tests are conducted at end-user sites, without the
developer present, to uncover problems in real-world use.
System Testing
System testing evaluates the fully integrated system for performance, reliability, and adherence
to requirements.
Debugging
Debugging is the process of finding and correcting the root cause of software defects identified
during testing.
Debugging Process
● Debugging has two possible outcomes: the cause is found and corrected, or it is not, in which
case a cause is suspected, additional tests are designed, and the process iterates.
● Bugs can result from human errors, timing issues, or distributed processing.
Psychological Considerations
Debugging is as much a psychological challenge as a technical one: developers are often
reluctant to accept that their own code is at fault, so an objective, systematic mindset (or a
fresh pair of eyes) frequently locates errors faster.
Debugging Strategies
● Brute Force: Scatter output statements or take memory dumps and search the results;
simple, but usually the least efficient strategy.
● Backtracking: Trace the source code backward from the point where the symptom appears
until the cause is found.
● Cause Elimination: Use deduction and binary partitioning to isolate the problem.
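A sketch of cause elimination by binary partitioning: a failing input is repeatedly halved until
the smallest slice that still triggers the defect remains. The triggers_bug predicate below is
hypothetical, standing in for "run the program on this input and observe the failure":

```python
# Minimal sketch of binary partitioning over a failing input.
def triggers_bug(records: list) -> bool:
    # Placeholder failure condition; a real predicate would run the program.
    return any(r == "corrupt" for r in records)

def minimize_failing_input(records: list) -> list:
    while len(records) > 1:
        mid = len(records) // 2
        first, second = records[:mid], records[mid:]
        if triggers_bug(first):
            records = first        # defect lies in the first half
        elif triggers_bug(second):
            records = second       # defect lies in the second half
        else:
            break                  # defect needs both halves together
    return records

print(minimize_failing_input(["ok", "ok", "corrupt", "ok"]))  # ['corrupt']
```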
Product Metrics
Product metrics are quantitative measures that provide insights into the characteristics and
performance of software products. They help in evaluating software quality, functionality,
complexity, maintainability, and usability.
Software Quality Metrics
Software quality metrics help in assessing how well a software product satisfies the specified
requirements and user expectations. These metrics typically fall under the categories of
functionality, reliability, usability, efficiency, maintainability, and portability.
Common Metrics:
● Defect Density:
Defect Density = Total Defects / Size of Software (KLOC)
● Usability Index:
A composite metric calculated from learnability, satisfaction, and efficiency scores.
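A small worked example of both metrics. The defect and size figures are illustrative, and since
the notes do not prescribe a specific usability formula, the weights below are an assumed
weighted average:

```python
# Illustrative numbers only.
total_defects = 120
size_kloc = 48.0
defect_density = total_defects / size_kloc           # defects per KLOC
print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 2.50

# Composite usability index as a weighted average of sub-scores (0-100).
learnability, satisfaction, efficiency = 82, 74, 90
weights = (0.4, 0.3, 0.3)                            # assumed weighting
usability_index = (weights[0] * learnability
                   + weights[1] * satisfaction
                   + weights[2] * efficiency)
print(f"Usability index: {usability_index:.1f}")     # 82.0
```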
Requirements and Analysis Metrics
These metrics assess the completeness, correctness, and consistency of the software
requirements and analysis models.
Common Metrics:
● Completeness of Requirements:
Ratio of defined to required functionalities.
Design Metrics
Design metrics measure the effectiveness, modularity, and maintainability of the software
design.
Common Metrics:
● Modularity:
Measures how well the system is divided into modules.
○ High cohesion and low coupling are desirable for maintainable design.
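A hypothetical sketch of the guideline above: all pricing logic is kept inside one class (high
cohesion), and the client depends only on a narrow interface rather than on internal data
layout (low coupling):

```python
class Invoice:
    def __init__(self, items):
        self.items = items  # list of (description, price) pairs

    def total(self) -> float:
        # All pricing logic lives inside Invoice, not in its callers.
        return sum(price for _, price in self.items)

class ReportPrinter:
    def print_total(self, invoice: Invoice) -> None:
        # Coupled only to Invoice.total(), not to its data layout;
        # Invoice internals can change without touching this class.
        print(f"Total due: {invoice.total():.2f}")

ReportPrinter().print_total(Invoice([("widget", 9.50), ("gadget", 20.00)]))
# Output: Total due: 29.50
```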
Source Code Metrics
These metrics evaluate the quality and complexity of the source code, often used for refactoring
and optimization.
Common Metrics:
● Comment Density:
Comment Density = Number of Comment Lines / Total Lines of Code
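A minimal sketch that computes comment density for a Python source file; it counts full-line #
comments only, so docstrings and inline comments would need a real tokenizer to be counted
exactly:

```python
def comment_density(path: str) -> float:
    """Fraction of non-blank lines that are full-line comments."""
    comment_lines = total_lines = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue            # skip blank lines
            total_lines += 1
            if stripped.startswith("#"):
                comment_lines += 1
    return comment_lines / total_lines if total_lines else 0.0

print(f"Comment density: {comment_density(__file__):.2%}")
```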
Testing Metrics
These metrics assess test effectiveness, coverage, and defect detection capability.
Common Metrics:
● Test Coverage:
Percentage of code executed by tests.
Coverage = (Number of Lines Executed / Total Lines of Code) × 100
● Defect Detection Percentage (DDP):
DDP = (Defects Found During Testing / Total Defects) × 100
● Test Case Effectiveness:
Measures how many test cases successfully identify defects.
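The three formulas above, worked with illustrative numbers:

```python
# Worked examples of the testing metrics, with assumed figures.
lines_executed, total_lines = 840, 1000
coverage = lines_executed / total_lines * 100
print(f"Test coverage: {coverage:.1f}%")                 # 84.0%

defects_in_testing, total_defects = 45, 50               # 5 escaped to the field
ddp = defects_in_testing / total_defects * 100
print(f"Defect detection percentage: {ddp:.1f}%")        # 90.0%

failing_cases, total_cases = 12, 200                     # cases that exposed a defect
effectiveness = failing_cases / total_cases * 100
print(f"Test case effectiveness: {effectiveness:.1f}%")  # 6.0%
```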
Maintenance Metrics
These metrics focus on the effort, time, and effectiveness of software maintenance activities.
Common Metrics:
● Maintenance Effort:
Measured in person-hours or person-days spent on maintenance tasks.
Metrics in this category are used to evaluate both the software development process and the
resulting product.
Software Measurement
Software measurement is the process of quantifying various attributes of the software or the
development process to gain better control and understanding.
Types:
● Direct Measures: attributes measured directly, such as lines of code, execution speed,
cost, and reported defects.
● Indirect Measures: attributes measured indirectly, such as quality, functionality,
complexity, efficiency, and maintainability.
Software quality metrics serve to quantify the degree to which a product meets specified
requirements and user expectations.
These metrics help in making data-driven decisions during software development and
maintenance and ultimately ensure that both the product and process are aligned with business
goals and quality standards.