STM Chapter 4

Test management involves organizing testing activities through defined roles, structured teams, and resource allocation to enhance efficiency and quality assurance. A comprehensive test plan outlines the scope, features, tasks, and risks associated with testing, while detailed test design specifications ensure thorough validation of software requirements. Managing test suite growth is crucial for maintaining testing efficiency, necessitating regular maintenance and prioritization of high-impact tests.


TEST MANAGEMENT

Test Organization
❑ Test Organization is a crucial component of the test management process, focusing on setting up
and managing a suitable organizational structure for testing activities.
❑ It involves defining explicit roles and responsibilities within the testing team to ensure a smooth
and efficient testing process.

Key Aspects of Test Organization:


1. Defining Roles and Responsibilities:
•This involves assigning specific tasks and duties to team members based on their skills and
expertise.
•Each role should be clearly defined to avoid confusion and ensure that all aspects of testing are
covered.
2. Structuring the Testing Team:
•Effective test teams can be structured using various organizational models such as functional
or matrix structures.
•The choice of structure depends on the company size, available resources, and project
requirements.
3. Setting Up the Test Environment:
•This includes determining the necessary hardware, software, and network configurations
required for testing.
•Ensuring that the test environment is properly set up is essential for accurate and reliable
test results.
4. Resource Allocation:
•Test organization involves allocating resources such as equipment, facilities, and personnel
effectively to meet testing needs.
•This ensures that testing activities are conducted efficiently and within budget.
5. Communication and Collaboration:
•Effective communication among team members is vital to ensure alignment with the test
plan and to address any issues promptly.
•Collaboration tools and regular meetings can facilitate this process.

Importance of Test Organization:


❑ Efficiency: Proper organization helps streamline testing processes, reducing delays and
improving productivity.
❑ Quality Assurance: By ensuring that all roles are well-defined and covered, test organization
contributes to higher quality testing outcomes.
❑ Risk Management: Clear roles and responsibilities help in identifying and mitigating risks more
effectively during the testing phase.
Structure of a Testing Group

Testing Group Hierarchy


Testing is an important part of any software project. One or two testers are rarely sufficient,
especially when the project is large and complex.

Therefore, many testers are required, working at various levels of the hierarchy.

Roles and Responsibilities in a Testing Group:

1. Test Manager:
• Test Planning and Strategy Development: Develops comprehensive test
strategies and detailed test plans aligned with project goals.
• Resource Management and Team Leadership: Oversees resource allocation and
leads the testing team to ensure efficient execution of testing activities.
2. Test Leader:
• Team Supervision and Task Assignment: Supervises the testing team, assigns tasks, and
ensures that all team members are working towards the project's objectives.
• Test Execution Oversight: Monitors the execution of test cases, identifies issues, and
ensures that testing is conducted according to the plan.
3. Test Engineer:
• Test Case Development and Execution: Develops and executes complex test cases, focusing
on both manual and automated testing methods.
• Defect Reporting and Analysis: Documents defects found during testing and analyzes them
to provide insights for improvement.
4. Junior Test Engineers:
• Test Case Execution and Reporting: Executes test cases as assigned, reports defects, and
assists in maintaining test documentation.
• Learning and Skill Development: Participates in training and skill development activities
to enhance testing skills and contribute effectively to the team.

Each role plays a crucial part in ensuring that the testing process is thorough, efficient, and
aligned with project goals.
Test Planning
❑ Just as the SDLC requires planning, testing in the STLC must also be planned. Software projects become
uncontrollable when they are not planned properly, and likewise the testing process is ineffective when it is not planned early.
❑ Moreover, ineffective testing ultimately affects the quality of the final software product.
❑ Therefore, to deliver quality software, testing activities must be planned as soon as project
planning starts.

❑ A test plan is defined as a document that describes the scope, approach, resources, and schedule
of intended testing activities.
❑ The test plan is driven by the business goals of the product.
In order to meet a set of goals, the test plan identifies the following:
1. Test Items:
• These are the specific parts of the software or system that will be tested.
• Identifying test items helps ensure that all critical components are evaluated.
2. Features to be Tested:
• These are the functionalities or capabilities of the software that need to be verified.
• Focusing on key features ensures that the most important aspects of the software are
thoroughly tested.
3. Testing Tasks:
• These are the specific activities that need to be performed during testing.
• Breaking down testing into manageable tasks helps in organizing and executing the
testing process efficiently.
4. Tool Selection:
• This involves choosing the appropriate software tools or equipment needed for testing.
• The right tools can automate tasks, improve efficiency, and enhance the accuracy of test
results.
5. Time and Effort Estimate:
• This involves estimating how long each task will take and how much effort is required.
• Accurate time and effort estimates help in scheduling testing activities and allocating
resources effectively.
6. Who Will Do Each Task:
• Identifying the personnel responsible for each task.
• Clear assignment of tasks ensures accountability and helps in managing the workload
among team members.
7. Any Risks:
• Identifying potential risks or challenges that could impact the testing process.
• Recognizing risks allows for the development of mitigation strategies to minimize their impact.
8. Milestones:
• These are significant events or achievements during the testing process.
• Milestones help track progress, provide a framework for evaluating success, and ensure that
testing stays on schedule.
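The eight elements above can also be captured in a lightweight, machine-readable outline. The sketch
below is purely illustrative; the field names and sample values are hypothetical and not mandated by
any standard:

# Hypothetical outline of a test plan covering the eight elements listed above (Python).
test_plan = {
    "test_items": ["Login module", "Payment gateway"],
    "features_to_be_tested": ["User authentication", "Transaction processing"],
    "testing_tasks": ["Design test cases", "Prepare test data", "Execute regression suite"],
    "tools": ["Selenium", "JUnit"],
    "time_and_effort": {"design": "3 person-days", "execution": "5 person-days"},
    "assignments": {"Design test cases": "Test Engineer",
                    "Execute regression suite": "Junior Test Engineer"},
    "risks": ["Test environment unavailable", "Late requirement changes"],
    "milestones": ["Test plan approved", "System testing complete"],
}

# Every task should have an owner; unassigned tasks are planning gaps.
unassigned = [t for t in test_plan["testing_tasks"] if t not in test_plan["assignments"]]
print("Tasks without an owner:", unassigned)   # -> ['Prepare test data']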
DETAILED TEST DESIGN AND TEST SPECIFICATIONS
The ultimate goal of test management is to get the test cases executed.
Test planning, however, does not by itself provide the test cases to be executed.
Detailed test design for each validation activity maps the requirements or features
to the actual test cases to be executed.
One way to map the features to their test cases is to analyse the following:
1. Requirement traceability
2. Design traceability
3. Code traceability
The analyses can be maintained in the form of a traceability matrix such that every requirement or
feature is mapped to a function in the functional design.

Traceability Matrix
❑ This function is then mapped to a module (internal design and code) in which the function is being
implemented. This in turn is linked to the test case to be executed.
❑ This matrix helps in ensuring that all requirements have been covered and tested.
❑ Priority can also be set in this table to prioritize the test cases.
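As an illustration only (the requirement, function, and test case IDs below are hypothetical), the
matrix can be held as simple structured data and queried to confirm that every requirement is linked
to at least one test case:

# Hypothetical traceability entries: requirement -> design function -> module -> test cases.
traceability_matrix = [
    {"requirement": "R1 User login",       "function": "F1 authenticate",  "module": "auth.py",
     "test_cases": ["TC-001", "TC-002"], "priority": 1},
    {"requirement": "R2 View reservation", "function": "F3.5 view_status", "module": "booking.py",
     "test_cases": ["T2"], "priority": 2},
    {"requirement": "R3 Cancel ticket",    "function": "F4 cancel",        "module": "booking.py",
     "test_cases": [], "priority": 2},
]

# Requirements with no linked test case indicate a coverage gap.
uncovered = [row["requirement"] for row in traceability_matrix if not row["test_cases"]]
print("Uncovered requirements:", uncovered)   # -> ['R3 Cancel ticket']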
TEST DESIGN SPECIFICATION
• A Test Design Specification outlines the systematic approach to validate specific software
features or requirements.
• According to IEEE 829-1998 and related standards, it bridges the test plan and test cases by
defining what to test, how to test, and success criteria.

A test design specification should have the following components according to IEEE
recommendation:
Identifier: A unique identifier is assigned to each test design specification with a reference to its
associated test plan.
•A unique identifier (e.g., TDS_LoginModule_1.0) links the document to its associated test plan and
version history.
Features to be tested: The features or requirements to be tested are listed with reference to the
items mentioned in SRS/SDD.
Example:
•Feature 1: User login functionality (SRS Section 3.1)
•Feature 2: Volume discount calculation (SDD Section 5.2)

Approach refinements: In the test plan, an approach to overall testing was discussed. Here,
further details are added to the approach. For example, special testing techniques to be used for
generating test cases are explained.

•Details the testing techniques (e.g., boundary value analysis, equivalence partitioning) and tools
(e.g., Selenium, JUnit).
•Specifies how test results will be analyzed (e.g., automated comparators, manual inspection).
•Example:
• Apply boundary testing for input validation of age fields (0–120 years)
Test case identification:
❑ The test cases to be executed for a particular feature/function are identified here.
Moreover, the corresponding test procedures are also identified.
❑ The test cases contain input/output information and the test procedures contain the
necessary steps to execute the tests.
❑ Each test design specification is associated with test cases and test procedures.
❑ A test case may be associated with more than one test design specification.


Feature                  Test Case ID    Test Procedure ID
User login validation    TC-001          TP-001
Feature pass/fail criteria:
The criteria for determining whether the test for a feature has passed or failed are described here.
Example:
•Pass: Login succeeds within 2 seconds for valid credentials.
•Fail: System crashes or displays incorrect error messages for invalid inputs.
TEST CASE SPECIFICATIONS
❑ Since the test design specifications have identified the test cases to be executed, there is a
need to define each test case with a complete specification.
❑ A test case specification is a detailed document that defines the inputs, expected outputs,
and execution steps for a specific test case.
❑ It ensures that testing is systematic, repeatable, and aligned with the software's
requirements.
Key Components of Test Case Specification
Test Case Specification Identifier:
1. A unique identifier assigned to the test case for easy reference.
2. Includes version information, author details, and revision history.
Purpose:
1. Describes the objective of the test case.
2. Specifies the functionality or feature being validated.

Test Items Needed:


1. Lists references to related documents (e.g., SRS, SDD, user manuals).
2. Identifies the software items or features to be tested.
Special Environmental Needs
1. Specifies any hardware, software, tools, or configurations required for testing.
2. Includes any external dependencies like databases or network setups.
Special Procedural Requirements
1. Highlights any specific conditions or constraints necessary for executing the test case.
2. Examples: Pre-test data setup or specific user roles required.
Inter-Case Dependencies
1. Identifies dependencies between test cases.
2. Specifies which test cases must be executed before or after this one.
Input Specifications
1. Provides detailed input values required for the test case.
2. Avoids generalizations (e.g., instead of "0–360 degrees," specifies "233 degrees").
3. Notes relationships between input values if applicable.
Test Procedure
1. Describes step-by-step instructions for executing the test case.
2. Ensures consistency and repeatability across testers.
Output Specifications
1. Defines expected outputs in specific terms.
2. Used to compare actual results with expected results to determine pass/fail status.

Importance of Test Case Specifications


•Reusability: A single test case can be reused across multiple design specifications or projects.
•Traceability: Links directly to requirements and design elements, ensuring comprehensive
coverage.
•Clarity: Provides precise inputs and outputs, reducing ambiguity during execution.
•Efficiency: Enables testers to execute cases systematically, saving time and effort.
Example: Consider a railway reservation system that provides many functionalities. A sample test
case specification for one of them is given below:
Test Case Specification Identifier: T2
Purpose: To check the functionality of 'View Reservation Status'.
Test Items Needed: Refer to function F3.5 in the SRS of the system.
Special Environmental Requirements: Internet should be in working condition. Database software
through which the data will be retrieved should be in running condition.
Special Procedural Requirements: The function 'Login' must be executed prior to the current test
case.
Inter-Case Dependencies: Test case T1 must be executed prior to the current test case execution.
Input Specifications: Enter a 10-digit PNR number (digits 0–9). Sample inputs (valid and invalid):
4102645876, 21A2345672, 234, asdgggggggg
Test Procedure: Press the 'View Reservation Status' button.
Enter the PNR number and press ENTER.
Output Specifications: The reservation status against the entered PNR number is displayed as S12
or RAC13 or WL123 as applicable.
Component                          Description
Identifier                         TC-001
Purpose                            Validate login functionality with valid credentials
Test Items Needed                  SRS Section 3.1 (Login Module), User Manual Section 2
Special Environmental Needs        Windows 10 OS, Chrome Browser v95+, Database connection
Special Procedural Requirements    User account must exist with valid credentials
Inter-Case Dependencies            TC-000 (Database Setup)
Input Specifications               Username: user123, Password: password123
Test Procedure                     1. Open browser  2. Navigate to login page  3. Enter credentials  4. Click submit
Output Specifications              Successful login message displayed: "Welcome, user123"
EFFICIENT TEST SUITE MANAGEMENT
What is a Test Suite?
A test suite is a methodical arrangement of test cases designed to validate specific functionalities
and performance aspects of software applications. It is composed of various components, including:
•Test Cases: Specific steps or conditions executed to validate a feature or functionality.
•Test Scripts: Automated sequences of commands used to execute test cases.
•Test Data: Inputs required for test execution.
•Test Scenarios: Detailed documentation of test cases covering end-to-end functionality.
Software testing is a continuous process that takes place throughout the life cycle of a project.
Test cases in an existing test suite can often be used to test a modified program. However, if the
test suite is inadequate for retesting, new test cases may be developed and added to the test
suite. Thus, the size of a test suite grows as the software evolves.

Due to resource constraints, it is important to prioritize the execution of test cases so as to
increase the chances of early detection of faults. A reduction in the size of the test suite
decreases both the overhead of maintaining the test suite and the number of test cases that must
be rerun after changes are made to the software.
Reasons for the Growth of a Test Suite
A test suite grows over time due to a combination of factors related to software evolution, testing
requirements, and maintenance challenges. Below are the key reasons:
1. Evolving Software Requirements
•As software evolves, new features and functionalities are added. Each new feature requires
additional test cases to validate its functionality, increasing the size of the test suite.
•Changes in business logic or user requirements often necessitate updates to existing test cases or
the creation of new ones.
2. Achieving Coverage Goals
•To meet specific coverage criteria (e.g., statement, branch, or path coverage), additional test cases
are added until the desired level of coverage is achieved.
3. Regression Testing
•Test suites are reused for regression testing to ensure that changes in the codebase do not introduce
new bugs.
•Over time, as more features are added or modified, regression test cases accumulate, contributing
significantly to the growth of the suite.
4. Redundancy and Obsolescence
•Test suites often grow unnecessarily due to:
• Redundant Test Cases: Multiple test cases providing the same coverage or testing the same
functionality.
• Obsolete Test Cases: Tests that are no longer relevant due to removed or refactored features
but remain in the suite.
5. Complexity of Modern Software
•Modern applications often involve complex integrations, multiple platforms (e.g., web, mobile), and
diverse user scenarios. This complexity requires a broader range of tests to ensure compatibility and
reliability.
6. Continuous Integration and Deployment (CI/CD)
•In CI/CD pipelines, automated tests are executed frequently to validate each code change. To
ensure comprehensive validation, additional tests are continuously added to cover edge cases and
potential risks.
7. Non-Functional Testing Needs
•As user expectations grow, non-functional tests (e.g., performance, security, scalability) are
added to ensure that software meets quality standards beyond basic functionality.
8. Compliance and Standards
•Regulatory requirements or industry standards may mandate additional test cases to ensure
compliance with specific rules (e.g., GDPR for data privacy).
9. Lack of Test Suite Maintenance
•Without proper maintenance, outdated or redundant test cases accumulate over time, leading to
unnecessary growth in the suite size.
Managing Test Suite Growth
To address uncontrolled growth:
1.Test Suite Minimization: Remove redundant and obsolete test cases while maintaining coverage.
2.Prioritization: Focus on high-impact tests for critical functionalities.
3.Regular Maintenance: Periodically review and update the suite to align with current requirements.
4.Automation Tools: Use tools that optimize test management and execution.
Uncontrolled growth can lead to inefficiencies in execution time and resource usage, making
effective management crucial for maintaining a high-quality testing process.
Test Suite Minimization: Importance and Benefits
As software evolves, test suites grow in size, often becoming impractical to execute fully due to time,
resource, or cost constraints. Minimizing test suites—reducing their size while retaining coverage—is
critical for maintaining testing efficiency.

Why Minimization is Necessary


1. Approaching Release Deadlines
• Near product release, time constraints demand rapid testing cycles. Minimization prioritizes
high-impact test cases, enabling faster defect detection without executing the entire suite.
2. Limited Staff or Expertise
• Smaller test suites reduce the workload on testing teams, allowing them to focus on critical
test scenarios rather than redundant or obsolete cases.
3. Resource or Tool Limitations
• Test environments often lack sufficient hardware, licenses, or tools to execute large suites.
Minimization optimizes resource usage by eliminating unnecessary tests.
4. Cost of Regression Testing
• Repeated execution of large suites during iterative development increases costs. Minimized
suites lower maintenance overhead and execution time.
Benefits of Test Suite Minimization
1. Cost Efficiency
• Minimization reduces test suite size by up to 70–95% in some cases, significantly
lowering execution and maintenance costs.
2. Elimination of Redundancy
• Redundant tests (providing identical coverage) and obsolete tests (targeting removed
features) are removed, focusing on essential cases.
3. Faster Feedback Loops
• Smaller suites enable quicker execution in CI/CD pipelines, accelerating development
cycles and defect resolution.
4. Scalability
• Minimization ensures test suites remain manageable as software complexity grows,
preventing "test paralysis" where suites become too large to execute.
Test Suite Prioritization
Test suite prioritization is the process of ordering test cases to maximize testing efficiency,
focusing on high-impact tests first.

Why Prioritize Test Suites?


1.Resource Constraints: Limited staff, tools, or time demand focused testing on critical areas.
2.Approaching Deadlines: Prioritization ensures high-risk defects are caught before release.
3.Cost Reduction: Smaller, prioritized suites lower execution and maintenance costs.
4.Faster Feedback: Early detection of critical bugs accelerates development cycles.
Priority Levels in Practice
Most teams categorize test cases into priority tiers:
1.Priority 1 (Critical): Must be executed (e.g., core functionalities like login or payment processing).
2.Priority 2 (High): Execute if time permits (e.g., secondary features).
3.Priority 3 (Medium): Post-release testing (e.g., edge cases).
4.Priority 4 (Low): Negligible impact (e.g., deprecated features)

Example: E-Commerce Payment System Update


•Priority 1: Test payment gateway integration and transaction security.
•Priority 2: Validate cart functionality and discount logic.
•Priority 3: Verify UI responsiveness on mobile devices.
•Priority 4: Legacy currency converters not used in the current release
Test case prioritization is a critical process in software testing that determines the order in which test cases are
executed. The goal is to maximize testing efficiency, detect critical defects early, and optimize resource usage.

Types of Test Case Prioritization


Test case prioritization can be broadly categorized into General Test Case Prioritization and Version-Specific Test
Case Prioritization, each serving distinct purposes in software testing. These types define the scope and context
in which prioritization is applied.
1. General Test Case Prioritization
•Definition: In this type, test cases are prioritized for use across multiple future versions of the software,
without specific knowledge of upcoming changes.
•Purpose: Ensures that the prioritization effort benefits successive versions of the program, amortizing the cost
over time.
•When to Use: After the release of a program version, during off-peak hours, or when future modifications are
unknown.
•Examples:
• Prioritizing tests based on historical defect rates to ensure fault-prone areas are
consistently tested.
• Focusing on features with high usage frequency or business impact across all versions.
2. Version-Specific Test Case Prioritization
•Definition: Test cases are prioritized specifically for a modified version of the software, with
knowledge of recent changes or updates.
•Purpose: Targets regression testing to validate areas affected by recent modifications.
•When to Use: After code changes have been made and before regression testing begins.
•Examples:
• Prioritizing tests for modules impacted by recent commits.
• Running tests on new features or altered functionalities first.
Test Case Prioritization Techniques:
1. Risk-Based Prioritization
•Description: Focuses on identifying and prioritizing test cases that cover high-risk areas of the software.
•Use Case: Ideal for projects with strict deadlines or limited resources.
•Example: Testing payment processing in an e-commerce application before less critical features like user
profile updates.
•Advantages:
• Early detection of critical defects.
• Reduces the likelihood of high-impact failures post-release.
2. Requirement-Based Prioritization
•Description: Test cases are prioritized based on the importance of the requirements they cover, with
higher priority assigned to business-critical features or customer-facing functionalities.
•Use Case: Useful when certain features are essential to business goals or customer satisfaction.
•Example: Testing functionalities advertised in marketing campaigns before internal administrative
features.

3. History-Based Prioritization
•Description: Leverages past data, such as defect rates and test execution history, to prioritize tests that
have historically detected more defects or taken longer to execute.
•Use Case: Effective for legacy systems or modules with known defect patterns.
•Example: Giving higher priority to tests for modules with a history of frequent bugs.
4. Version-Based Prioritization
•Description: Focuses on test cases related to new or modified code in the latest software version,
particularly for regression testing.
•Use Case: Ensures that recent changes do not introduce new defects or break existing functionality.
•Example: Prioritizing tests for updated APIs after a system upgrade.

5. Coverage-Based Prioritization
•Description: Prioritizes test cases based on their ability to maximize code coverage (e.g., statement, branch,
or path coverage).
•Use Case: Commonly used in automated testing to ensure critical parts of the code are tested first.
•Example: Executing tests that cover 80% of statements before edge-case scenarios.
6. Time-Based Prioritization
•Description: Allocates time slots for executing test cases based on their estimated duration or
difficulty level, ensuring all tests fit within the available time frame.
•Use Case: Suitable for projects with tight schedules and limited time for testing cycles.

7. Cost-Based Prioritization
•Description: Considers the cost of executing each test case (e.g., time, resources, and setup effort)
and prioritizes those that provide the most value at the lowest cost.
•Use Case: Useful when balancing budget constraints with testing effectiveness.
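As a concrete example of coverage-based prioritization, the "additional coverage" greedy ordering
schedules next the test case that adds the most not-yet-covered statements. The statement coverage
data below is hypothetical:

# "Additional" greedy prioritization: order tests by how much new coverage each one adds.
coverage = {
    "TC-A": {1, 2, 3, 4},
    "TC-B": {3, 4, 5},
    "TC-C": {6},
    "TC-D": {1, 2},
}

def prioritize(coverage):
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Choose the test that covers the most statements not yet covered.
        best = max(remaining, key=lambda tc: len(remaining[tc] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(prioritize(coverage))   # -> ['TC-A', 'TC-B', 'TC-C', 'TC-D']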
Measuring the Effectiveness of a Prioritized Test Suite
To evaluate the effectiveness of a prioritized test suite, QA teams rely on metrics that quantify
coverage, fault detection rates, and efficiency.

Core Metrics for Effectiveness


1. Average Percentage of Faults Detected (APFD)
• Purpose: Measures how quickly a prioritized test suite detects faults during execution.
• Formula:
  APFD = 1 − (F1 + F2 + … + Fm) / (n × m) + 1 / (2n)
• Where:
• n = Total test cases in the suite.
• m = Total faults in the software.
• Fi = Position of the first test case detecting the i-th fault.
Example: For n = 5 test cases and m = 2 faults, where fault 1 is first detected by the test case in
position 2 (F1 = 2) and fault 2 by the test case in position 3 (F2 = 3):
APFD = 1 − (2 + 3)/(5 × 2) + 1/(2 × 5) = 1 − 0.5 + 0.1 = 0.60
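The same calculation can be scripted; the fault-detection positions used here are hypothetical and
simply repeat the worked example above:

# APFD = 1 - (F1 + ... + Fm) / (n * m) + 1 / (2 * n),
# where Fi is the 1-based position of the first test case that detects fault i.
def apfd(first_detection_positions, n):
    m = len(first_detection_positions)
    return 1 - sum(first_detection_positions) / (n * m) + 1 / (2 * n)

# Hypothetical run: 5 test cases, 2 faults first detected at positions 2 and 3.
print(round(apfd([2, 3], n=5), 2))   # -> 0.6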
2. Test Coverage Metrics
• Definition: Measures the percentage of code/requirements covered by tests.
Statement Coverage = (Number of statements executed by the tests / Total number of statements) × 100
Branch/Condition Coverage = (Number of branches exercised by the tests / Total number of branches) × 100
3. Defect Detection Rate (DDR)
•Formula (one common form):
 DDR = (Number of defects detected by the executed test cases / Total number of known defects) × 100
•Goal: A higher DDR indicates effective prioritization.

4. Cost/Time Efficiency
•Formula (one common form):
 Time saved (%) = ((Execution time of full suite − Execution time of prioritized suite) / Execution time of full suite) × 100
•Target: Reduce execution time by 30–50% while retaining fault detection capability.
SOFTWARE QUALITY MANAGEMENT
❑ Software Quality Management (SQM) is a comprehensive process aimed at ensuring that software
products meet specified quality standards and fulfill user expectations throughout their development
lifecycle.
❑ This involves a systematic approach that integrates quality assurance, quality planning, and quality
control into the software development process.

Software Quality Metrics


Software quality metrics are quantifiable measures used to evaluate the quality of software products,
processes, and maintenance activities. These metrics provide insights into the effectiveness of
development practices and help ensure that software meets user expectations.
Key software quality metrics are broadly categorized into:
1. Product quality metrics
2. In-process quality metrics
3. Maintenance quality metrics.

1. Product Quality Metrics


These metrics focus on the final software product's quality, emphasizing reliability, defect rates, and
user satisfaction:
•Mean Time to Failure (MTTF): Measures the average time until a product's first failure. It is
particularly useful for safety-critical systems like air traffic control.
• Defect Density: Quantifies the number of defects relative to the size of the software, where size is
commonly expressed in lines of code or function points:
  Defect density = Number of defects / Size of the software (KLOC or function points)
• Customer problem metrics: This metric measures the problems which customers face while
using the product. The problems may be valid defects or usability problems, unclear
documentation, etc. It is usually expressed as problems per user-month (PUM):
  PUM = Total problems reported by customers during a period / Total number of license-months of
  the software during that period
• Customer Satisfaction Metrics: Assesses user satisfaction through surveys using a five-point scale (e.g.,
"Very Satisfied" to "Very Dissatisfied"). Metrics include percentages of satisfied or very satisfied customers.

2. In-process quality metrics


• Defect-density during testing: Higher defect rates found during testing are an indicator that the software
has experienced higher error injection during its development process.
This metric is a good indicator of quality while the software is still being tested.
It is especially useful to monitor subsequent releases of a product in the same development organization.
• Defect-arrival pattern during testing: The pattern of defect arrivals or the time between consecutive
failures gives more information. Different patterns of defect arrivals indicate different quality levels in
the field.
• Defect Removal Efficiency (DRE): Measures the percentage of defects removed compared to those
identified during a specific period, commonly computed as:
  DRE = (Defects removed during development / (Defects removed during development + Defects found
  after release)) × 100
3. Maintenance Quality Metrics
These metrics focus on post-release maintenance activities, ensuring timely fixes and customer satisfaction:
• Fix Backlog Metrics: Counts unresolved issues at the end of a period.
• Backlog Management Index (BMI): Measures efficiency in resolving reported problems:
  BMI = (Number of problems closed during the period / Number of problems arrived during the period) × 100
  A BMI > 100 indicates backlog reduction.


• Fix Response Time: Tracks the average time taken to resolve high-severity issues from detection to closure.
• Percent Delinquent Fixes: Monitors fixes exceeding agreed response times.

• Fix Quality: Evaluates fixes that fail to resolve issues or introduce new defects.
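As a small illustration (all counts are hypothetical), DRE and BMI follow directly from the defect
counts using the formulas above:

# Hypothetical defect counts for one release / one reporting month.
defects_removed_before_release = 180
defects_found_after_release = 20
problems_closed_this_month = 55
problems_arrived_this_month = 50

# Defect Removal Efficiency: share of all known defects removed before release.
dre = defects_removed_before_release / (defects_removed_before_release + defects_found_after_release) * 100

# Backlog Management Index: closure rate relative to arrival rate; > 100 means the backlog shrinks.
bmi = problems_closed_this_month / problems_arrived_this_month * 100

print(f"DRE = {dre:.1f}%")   # 90.0%
print(f"BMI = {bmi:.1f}")    # 110.0 -> backlog is shrinking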
Software Quality Assurance (SQA) models provide structured frameworks to evaluate and enhance
software quality. Organizations must adopt a model for standardizing their SQA activities.
Many SQA models have been developed in recent years. Some of them are:
1. ISO/IEC 9126
ISO/IEC 9126 was an international standard for software product quality evaluation, later superseded
by ISO/IEC 25010. Its structure remains influential for defining quality attributes.
Key Quality Characteristics
The model classifies software quality into six characteristics, each with sub-characteristics:
1.Functionality: Suitability, accuracy, interoperability, security.
2.Reliability: Maturity, fault tolerance, recoverability.
3.Usability: Learnability, operability, accessibility.
4.Efficiency: Time behavior, resource utilization.
5.Maintainability: Analyzability, changeability, testability.
6.Portability: Adaptability, installability, replaceability.
2. Capability Maturity Model (CMM)
Developed by the Software Engineering Institute (SEI), CMM assesses organizational process
maturity across five levels:
Five Maturity Levels
1.Initial (Level 1): Ad-hoc processes, unpredictable outcomes.
2.Managed (Level 2): Basic project management (cost, schedule).
3.Defined (Level 3): Standardized, documented processes.
4.Quantitatively Managed (Level 4): Data-driven process control.
5.Optimizing (Level 5): Continuous process improvement.

Benefits:
•Reduces defects and rework through standardization.
•Enhances predictability and resource efficiency.
•Criticized for rigidity but remains foundational for process improvement
3. Software Total Quality Management (TQM)
TQM applies Deming’s principles to software development, emphasizing organization-wide quality
commitment.
Key Principles
•Customer focus: Align development with user needs.
•Continuous improvement: Iterative refinement of processes.
•Employee involvement: Quality as a shared responsibility.
Implementation in SDLC
•Integrates quality checks at every phase (requirements, design, testing).
•Uses metrics like defect density and customer-reported issues for feedback.
Deming’s 14 Points: Includes "drive out fear," "cease dependence on inspection," and "institute
training," fostering a culture of quality
4. Six Sigma
Six Sigma is a data-driven methodology for defect reduction.
It complements SQA through:
DMAIC Framework
1.Define: Project goals and customer requirements.
2.Measure: Baseline performance metrics.
3.Analyze: Root causes of defects.
4.Improve: Implement solutions.
5.Control: Sustain improvements.
Relevance to SQA:
•Reduces variation in software processes (e.g., coding errors).
•Often integrated with CMM for higher maturity levels.
DEBUGGING
Debugging is not part of the testing domain; therefore, debugging is not testing.
It is a separate process performed as a consequence of testing.
However, the testing effort is wasted if debugging is not performed afterwards. The testing
phase in the SDLC aims to find as many bugs as possible, have the underlying errors removed, and build
confidence in the software quality. In this sense, the testing phase can be divided into two parts:
1. Preparation of test cases, executing them, and observing the output. This is known as testing.
2. If the output of testing is not as expected, a failure has occurred. The goal now is to find
the bugs that caused the failure and remove the errors present in the software. This is called debugging.
Debugging is a two-part process. It begins with some indication of the existence of an error. It is the
activity of:
1. Determining the exact nature of the bug and location of the suspected
error within the program.
2. Fixing or repairing the error.
Debugging Process:
The goal of the debugging process is to determine the exact nature of the failure with the help of
the identified symptoms, locate the bugs and errors, and finally correct them.
The debugging process is explained in the following steps:

❑ Check the result of the output produced by executing test cases prepared in the testing
process. If the actual output matches with the expected output, it means that the results are
successful. Otherwise, there is failure in the output which needs to be analysed.
❑ Debugging is performed for the analysis of failure that has occurred, where we identify the
cause of the problem and correct it. It may be possible that symptoms associated with the
present failure are not sufficient to find the bug. Therefore, some additional testing is
required so that we may get more clues to analyse the causes of failures.
❑ If symptoms are sufficient to provide clues about the bug, then the cause of failure is
identified. The bug is traced to find the actual location of the error.
❑ Once we find the actual location of the error, the bug is removed with corrections.
❑ Regression testing is performed once the bug has been removed and corrections have been made to the software.
Thus, to validate the corrections, regression testing is necessary after every modification.
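As a toy illustration of this cycle (the function, fault, and test below are invented for the
example), a failing test exposes the symptom, the fault is localized and corrected, and the test is
re-run as a regression check:

# Symptom: a discount was being applied even below the qualifying threshold.
def apply_discount(total: float) -> float:
    # Fault was here: the original condition used ">= 0" instead of ">= 100".
    if total >= 100:                 # fix: discount only for orders of 100 or more
        return total * 0.9
    return total

def test_apply_discount():
    assert apply_discount(50) == 50          # re-run after the fix (regression check)
    assert apply_discount(200) == 180.0

test_apply_discount()
print("Regression check passed")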
