
TESTING STRATEGIES

Software Testing

Software testing is the process of executing a program with the specific intent of detecting errors
before delivering it to the end user. It acts as a critical quality control measure to ensure the
reliability and correctness of the software product.

What Testing Shows

Testing reveals the presence of defects but cannot guarantee their absence. It helps uncover
issues in functionality, performance, integration, and compliance with user requirements.

Who Tests the Software?

Software is tested by developers, independent test groups (ITGs), and sometimes end users.
While developers are familiar with the design and implementation, ITGs bring a fresh, unbiased
perspective to the testing process.

A Strategic Approach to Software Testing

A strategic approach to testing incorporates structured methods for planning and executing
tests. Various strategies proposed in literature share common characteristics:

●​ Formal technical reviews are essential before testing begins, reducing potential errors.​

●​ Testing begins at the component level and proceeds outward to system-level integration.​

●​ Different testing techniques apply at various stages of development.​

●​ Both developers and independent test groups are involved.​

●​ Testing and debugging are distinct; debugging is necessary when tests uncover defects.

Verification and Validation (V&V)

●​ Verification ensures the software correctly implements specified functions — Are we building the product right?

●​ Validation ensures the built software meets user requirements — Are we building the right product?

Activities in V&V include: technical reviews, audits, monitoring, simulation, documentation review, and usability testing.

Organizing for Software Testing

●​ Developers test their own code, but may lack objectivity.​

●​ Misconceptions exist, such as the belief that only testers should test or that testing
begins only after development.​

●​ Independent Test Groups (ITGs) remove bias and collaborate with developers
throughout the project.​

●​ Developers perform integration testing before ITG involvement.

Testing Strategies

●​ For conventional software, testing starts with individual modules, followed by integration.

●​ For object-oriented software, testing focuses on classes, attributes, methods, and their
collaboration.

Testing Completion Criteria

●​ In practice, testing is declared "complete" when deadlines or budgets are exhausted; ideally, completion should be driven by metrics and reliability models.

●​ Every user interaction post-deployment is an implicit test.

Strategic Issues in Testing

●​ Define objectives clearly.​

●​ Understand and profile user categories.​

●​ Plan for rapid-cycle testing and self-testing features.​

●​ Conduct technical reviews of test plans and cases.​


●​ Establish continuous improvement practices.​

Test Strategies for Conventional Software

Unit Testing

●​ Focuses on verifying individual components or modules.​

●​ Ensures each unit functions correctly in isolation.​
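
To make the idea concrete, here is a minimal unit-test sketch in Python's built-in unittest framework. The function discount_price and its expected behaviour are hypothetical, chosen only to show how a single unit is exercised in isolation.

import unittest


def discount_price(price, percent):
    # Hypothetical unit under test: apply a percentage discount to a price.
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or discount percentage")
    return round(price * (1 - percent / 100), 2)


class DiscountPriceTests(unittest.TestCase):
    def test_typical_discount(self):
        # Normal case: 20% off 100.00 should give 80.00.
        self.assertEqual(discount_price(100.0, 20), 80.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(59.99, 0), 59.99)

    def test_negative_price_is_rejected(self):
        # Error handling is part of the unit's contract and is tested too.
        with self.assertRaises(ValueError):
            discount_price(-1.0, 10)


if __name__ == "__main__":
    unittest.main()

Each test targets one behaviour of the unit, so a failure points directly at the faulty component rather than at an interaction between components.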

Integration Testing

●​ Aims to detect interface-related errors.​

●​ Big Bang Approach: Combine all components at once — risky and less efficient.​

●​ Incremental Strategy: Integrate and test one component at a time — more systematic.​

Top-Down Integration Testing

●​ Begins with the main module and integrates downward.​

●​ Uses stubs to simulate lower modules.​

●​ Allows early verification of high-level logic.​
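
As a minimal sketch (with hypothetical names ReportGenerator and fetch_records), the Python fragment below shows how a stub can stand in for an unfinished lower-level module so that the high-level logic can be tested first.

# Top-down integration sketch: the high-level ReportGenerator is exercised
# before the real data-access module exists, so a stub supplies canned data.

def fetch_records_stub(customer_id):
    # Stub: replaces the yet-to-be-integrated lower-level module.
    return [{"id": customer_id, "amount": 125.0},
            {"id": customer_id, "amount": 75.0}]


class ReportGenerator:
    def __init__(self, fetch_records):
        # The lower-level dependency is injected, so a stub can be supplied.
        self.fetch_records = fetch_records

    def total_for(self, customer_id):
        records = self.fetch_records(customer_id)
        return sum(r["amount"] for r in records)


# High-level control logic is verified early, before the real data layer exists.
report = ReportGenerator(fetch_records_stub)
assert report.total_for(42) == 200.0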

Bottom-Up Integration Testing

●​ Begins with atomic (lowest-level) modules.​

●​ Uses drivers to simulate higher modules.​

●​ Builds upward by combining tested clusters.​
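
The complementary sketch below (again with hypothetical names) shows a driver playing the role of a not-yet-written caller so that an atomic module can be exercised bottom-up.

# Bottom-up integration sketch: a simple driver exercises an atomic module
# (tax calculation) before any higher-level caller is available.

def tax_for(amount, rate=0.18):
    # Low-level (atomic) module under test.
    return round(amount * rate, 2)


def driver():
    # The driver substitutes for the missing higher-level module, feeding
    # representative inputs and checking the results.
    for amount, expected in [(100.0, 18.0), (0.0, 0.0), (59.99, 10.8)]:
        result = tax_for(amount)
        assert result == expected, f"tax_for({amount}) returned {result}"


driver()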

Regression Testing

●​ Re-runs previously executed tests after changes to ensure no new errors are introduced.​
●​ Includes:​

○​ Core functional tests​

○​ Tests related to recent changes​

○​ Tests for modified components​

Smoke Testing

●​ Used in daily builds to identify major failures early.​

●​ Tests major functionalities with minimal resources.​

●​ Uncovers "showstopper" bugs early in the integration process.​

Black-Box Testing

Black-box testing examines software functionality without considering internal code structure.

Key questions addressed:

●​ Does the system handle inputs/outputs correctly?​

●​ Are data class boundaries managed properly?​

●​ How does the system behave under extreme data rates or combinations?​
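
The sketch below illustrates the black-box style with boundary-value checks against a hypothetical is_valid_age function; only the specified input/output behaviour is exercised, never the internal code.

# Black-box sketch: the (assumed) specification accepts ages 18 to 65.
# Tests probe the boundaries of that data class without reading the code.

def is_valid_age(age):
    return 18 <= age <= 65


boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"unexpected result for age {age}"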

White-Box (Glass-Box) Testing

White-box testing evaluates internal code paths, logic conditions, and loops.

Why Cover Code Paths?

●​ Rarely exercised paths are more likely to contain errors.

●​ Incorrect assumptions and typos are often uncovered only when these rare paths are executed.

●​ Ensures high confidence in procedural logic.​
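
As an illustration, the hypothetical function below has three execution paths, so white-box test design derives at least three test cases directly from the internal structure rather than from the specification.

# White-box sketch: grade() has three branches, so three tests are needed
# to execute every path of the procedural logic.

def grade(score):
    if score >= 75:        # path 1
        return "distinction"
    elif score >= 40:      # path 2
        return "pass"
    else:                  # path 3
        return "fail"


assert grade(90) == "distinction"  # exercises path 1
assert grade(50) == "pass"         # exercises path 2
assert grade(10) == "fail"         # exercises path 3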

Validation Testing

Validation testing confirms that the software meets business and user requirements.

Validation Activities

●​ Criteria Review: Verify functional and non-functional requirements.​

●​ Configuration Audit: Ensure all components are accounted for and correctly
configured.​

●​ Alpha/Beta Testing:​

○​ Alpha Testing: Internal testing by end users in a controlled environment.​

○​ Beta Testing: Real-world testing by actual users to gain feedback.​

System Testing

System testing evaluates the fully integrated system for performance, reliability, and adherence
to requirements.

Types of System Testing

●​ Recovery Testing: Assesses software’s ability to recover from crashes or failures.​

●​ Security Testing: Verifies defense against unauthorized access.​

●​ Stress Testing: Checks system stability under high resource demands.​

●​ Performance Testing: Measures execution speed, responsiveness, and throughput.​


The Art of Debugging

Debugging is the process of finding and correcting the root cause of software defects identified
during testing.

Debugging Process

●​ Each debugging cycle either locates and corrects the cause of the error, or produces a new suspicion that drives further testing and iteration.

●​ Symptoms may be misleading or intermittent, making debugging complex.​

●​ Bugs can result from human errors, timing issues, or distributed processing.​

Psychological Considerations

●​ Debugging is often frustrating, as it combines puzzle-solving with admitting one’s own mistakes.

●​ Finding and fixing bugs leads to a sense of relief and satisfaction.​

Debugging Strategies

●​ Brute Force: Trial-and-error, usually inefficient.​

●​ Backtracking: Retrace code execution to find faults.​

●​ Cause Elimination: Use deduction and binary partitioning to isolate the problem.​

Automated Debugging Tools

●​ Enhance efficiency by providing semi-automated error tracing and correction support.​

Product Metrics

Product metrics are quantitative measures that provide insights into the characteristics and
performance of software products. They help in evaluating software quality, functionality,
complexity, maintainability, and usability.

Software Quality Metrics

Software quality metrics help in assessing how well a software product satisfies the specified
requirements and user expectations. These metrics typically fall under the categories of
functionality, reliability, usability, efficiency, maintainability, and portability.

Common Metrics:

●​ Defect Density:​
Defect Density = Total Defects / Size of Software (KLOC)

Lower defect density indicates better software quality.

●​ Mean Time to Failure (MTTF):​
Average operational time between two consecutive failures. Higher MTTF indicates greater reliability.

●​ Customer Problem Metric:​
Tracks issues raised by customers per release.

●​ Usability Index:​
A composite metric calculated from learnability, satisfaction, and efficiency scores.
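
As a hypothetical worked example of the defect density formula above: a 12 KLOC release in which 24 defects were found has a defect density of 24 / 12 = 2 defects per KLOC; a later release of the same size with only 12 defects (1 defect per KLOC) would indicate improved quality.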

Metrics for Analysis Model

These metrics assess the completeness, correctness, and consistency of the software
requirements and analysis models.

Common Metrics:

●​ Functionality Size (Function Point Analysis):​
Measures the software's functionality from the user's perspective.

●​ Requirements Stability Index (RSI):​
RSI = 1 − (Number of Requirements Changes / Total Initial Requirements)
A high RSI indicates stable requirements.

●​ Completeness of Requirements:​
Ratio of defined to required functionalities.
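
As a hypothetical worked example of the RSI formula above: if 10 of 200 initially baselined requirements were changed during the project, RSI = 1 − (10 / 200) = 0.95, indicating a stable set of requirements.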

Metrics for Design Model

Design metrics measure the effectiveness, modularity, and maintainability of the software
design.

Common Metrics:

●​ Design Complexity (e.g., Fan-in, Fan-out):​

○​ Fan-in: Number of modules that call a given module.​

○​ Fan-out: Number of modules called by a given module.​

●​ Modularity:​
Measures how well the system is divided into modules.​

●​ Cohesion and Coupling:​

○​ High cohesion and low coupling are desirable for maintainable design.
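
For instance (hypothetical figures), a shared logging utility called by 20 other modules has a fan-in of 20, while a controller that invokes 8 subordinate modules has a fan-out of 8; unusually high fan-out is often a sign of excessive coupling.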

Metrics for Source Code

These metrics evaluate the quality and complexity of the source code, often used for refactoring
and optimization.

Common Metrics:

●​ Lines of Code (LOC):​
Raw count of lines; can indicate size but not quality.

●​ Cyclomatic Complexity (McCabe's Metric):​
V(G) = E − N + 2P
where E = edges, N = nodes, and P = connected components in the program's flow graph (a worked example follows this list).


●​ Code Churn:​
Measures the frequency of code changes over time.​

●​ Comment Density:​
Comment Density = Number of Comment Lines/Total Lines of Code
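
As a worked example of the cyclomatic complexity formula above: a function whose flow graph has 9 edges, 8 nodes, and 1 connected component gives V(G) = 9 − 8 + 2(1) = 3, so at least three linearly independent paths (and hence at least three test cases) are needed to cover its logic.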

Metrics for Testing

These metrics assess test effectiveness, coverage, and defect detection capability.

Common Metrics:

●​ Test Coverage:​
Percentage of code executed by tests.​
Coverage = (Number of Lines Executed/Total Lines of Code)×100
●​ Defect Detection Percentage (DDP):​
DDP = (Defects Found During Testing / Total Defects)×100
●​ Test Case Effectiveness:​
Measures how many test cases successfully identify defects.
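
As hypothetical worked examples of the formulas above: if the test suite executes 800 of 1,000 lines, coverage = (800 / 1,000) × 100 = 80%; if testing found 45 of the 50 defects eventually known for a release, DDP = (45 / 50) × 100 = 90%.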

Metrics for Maintenance

These metrics focus on the effort, time, and effectiveness of software maintenance activities.

Common Metrics:

●​ Mean Time to Repair (MTTR):​
Average time taken to fix a defect.

●​ Maintenance Effort:​
Measured in person-hours or person-days spent on maintenance tasks.​

●​ Change Request Frequency:​
Number of change requests over time; helps understand software stability.

●​ Backward Compatibility Issues:​
Tracks how often new changes break old functionality.

Metrics for Process and Products

Metrics in this category are used to evaluate both the software development process and the
resulting product.

Software Measurement

Software measurement is the process of quantifying various attributes of the software or the
development process to gain better control and understanding.

Goals of Software Measurement:

●​ Improve software quality and productivity.​

●​ Identify bottlenecks and inefficiencies.​

●​ Ensure process adherence and compliance.

Types:

●​ Direct Metrics: LOC, execution speed, memory usage.​

●​ Indirect Metrics: Maintainability, usability, and reliability.

Metrics for Software Quality

Software quality metrics serve to quantify the degree to which a product meets specified
requirements and user expectations.

Key Metrics Include:

●​ Reliability Metrics: MTTF, MTTR, availability.​

●​ Maintainability Metrics: Time to implement changes, modularity score.​

●​ Portability Metrics: Effort to move software across environments.​

●​ Performance Metrics: Response time, throughput, resource usage.


These metrics support data-driven decisions throughout software development and maintenance, and ultimately help ensure that both the product and the process remain aligned with business goals and quality standards.
