STM Chapter 4
Test Organization
❑ Test Organization is a crucial component of the test management process, focusing on setting up
and managing a suitable organizational structure for testing activities.
❑ It involves defining explicit roles and responsibilities within the testing team to ensure a smooth
and efficient testing process.
1. Test Manager:
• Test Planning and Strategy Development: Develops comprehensive test
strategies and detailed test plans aligned with project goals.
• Resource Management and Team Leadership: Oversees resource allocation and
leads the testing team to ensure efficient execution of testing activities.
2. Test Leader:
• Team Supervision and Task Assignment: Supervises the testing team, assigns tasks, and
ensures that all team members are working towards the project's objectives.
• Test Execution Oversight: Monitors the execution of test cases, identifies issues, and
ensures that testing is conducted according to the plan.
3. Test Engineer:
• Test Case Development and Execution: Develops and executes complex test cases, focusing
on both manual and automated testing methods.
• Defect Reporting and Analysis: Documents defects found during testing and analyzes them
to provide insights for improvement.
4. Junior Test Engineers:
• Test Case Execution and Reporting: Executes test cases as assigned, reports defects, and
assists in maintaining test documentation.
• Learning and Skill Development: Participates in training and skill development activities
to enhance testing skills and contribute effectively to the team.
Each role plays a crucial part in ensuring that the testing process is thorough, efficient, and
aligned with project goals.
Test Planning
❑ In the STLC, testing needs planning just as development does in the SDLC. Software projects
become uncontrollable if not planned properly, and likewise the testing process is ineffective if it is not planned early.
❑ Moreover, if testing is not effective in a software project, it also affects the final software product.
❑ Therefore, for a quality software, testing activities must be planned as soon as the project
planning starts.
❑ A test plan is defined as a document that describes the scope, approach, resources, and schedule
of intended testing activities.
❑ The test plan is driven by the business goals of the product.
In order to meet a set of goals, the test plan identifies the following (a minimal sketch of a plan
document follows this list):
1. Test Items:
• These are the specific parts of the software or system that will be tested.
• Identifying test items helps ensure that all critical components are evaluated.
2. Features to be tested:
• These are the functionalities or capabilities of the software that need to be verified.
• Focusing on key features ensures that the most important aspects of the software are
thoroughly tested.
3. Testing Tasks:
• These are the specific activities that need to be performed during testing.
• Breaking down testing into manageable tasks helps in organizing and executing the
testing process efficiently.
4. Tool Selection:
• This involves choosing the appropriate software tools or equipment needed for testing.
• The right tools can automate tasks, improve efficiency, and enhance the accuracy of test
results.
5. Time and Effort Estimate:
• This involves estimating how long each task will take and how much effort is required.
• Accurate time and effort estimates help in scheduling testing activities and allocating
resources effectively.
6. Who Will Do Each Task:
• This involves identifying the personnel responsible for each task.
• Clear assignment of tasks ensures accountability and helps in managing the workload
among team members.
7. Any Risks:
• This involves identifying potential risks or challenges that could impact the testing process.
• Recognizing risks allows for the development of mitigation strategies to minimize their impact.
8. Milestones:
• These are significant events or achievements during the testing process.
• Milestones help track progress, provide a framework for evaluating success, and ensure that
testing stays on schedule.
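As a rough illustration, the elements above can be captured as structured data. This is a minimal,
hypothetical sketch; all names and values are illustrative assumptions, not a prescribed format.

```python
# Hypothetical sketch of a test plan's contents as structured data.
# Every item and value here is illustrative, not a mandated template.
test_plan = {
    "test_items": ["login module", "discount calculator"],
    "features_to_be_tested": ["user login", "volume discount calculation"],
    "testing_tasks": ["design test cases", "execute test cases", "report defects"],
    "tools": ["Selenium", "JUnit"],
    "estimates": {"design": "3 days", "execution": "5 days"},
    "assignments": {"design": "Test Engineer", "execution": "Junior Test Engineer"},
    "risks": ["test environment unavailable", "late requirement changes"],
    "milestones": ["test cases reviewed", "first test cycle complete"],
}
```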
DETAILED TEST DESIGN AND TEST SPECIFICATIONS
The ultimate goal of test management is to get the test cases executed.
Test planning alone, however, does not provide the test cases to be executed.
Detailed test designing for each validation activity maps the requirements or features
to the actual test cases to be executed.
One way to map the features to their test cases is to analyse the following:
1. Requirement traceability
2. Design traceability
3. Code traceability
The analyses can be maintained in the form of a traceability matrix such that every requirement or
feature is mapped to a function in the functional design.
Traceability Matrix
❑ This function is then mapped to a module (internal design and code) in which the function is being
implemented. This in turn is linked to the test case to be executed.
❑ This matrix helps in ensuring that all requirements have been covered and tested.
❑ A priority can also be recorded in this matrix to prioritize the test cases, as illustrated in the sketch below.
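A minimal sketch of such a matrix in code, tracing each requirement through the design function
and implementing module down to its test cases. The identifiers and module paths are invented
for illustration (the SRS/SDD references mirror the examples used later in this chapter).

```python
# Hypothetical traceability matrix: requirement -> design function ->
# module -> test cases, with a priority column.
traceability_matrix = [
    {
        "requirement": "SRS 3.1 User login",
        "design_function": "authenticate_user",
        "module": "auth/login.py",
        "test_cases": ["TC-001", "TC-002"],
        "priority": "high",
    },
    {
        "requirement": "SDD 5.2 Volume discount calculation",
        "design_function": "compute_discount",
        "module": "billing/discount.py",
        "test_cases": ["TC-010"],
        "priority": "medium",
    },
]

# Coverage check: every requirement must map to at least one test case.
untested = [row["requirement"] for row in traceability_matrix if not row["test_cases"]]
assert not untested, f"Requirements without tests: {untested}"
```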
TEST DESIGN SPECIFICATION
• A Test Design Specification outlines the systematic approach to validate specific software
features or requirements.
• According to IEEE 829-1998 and related standards, it bridges the test plan and test cases by
defining what to test, how to test, and success criteria.
A test design specification should have the following components according to IEEE
recommendation:
Identifier: A unique identifier (e.g., TDS_LoginModule_1.0) is assigned to each test design
specification, linking it to its associated test plan and version history.
Features to be tested: The features or requirements to be tested are listed with reference to the
items mentioned in the SRS/SDD.
Example:
• Feature 1: User login functionality (SRS Section 3.1)
• Feature 2: Volume discount calculation (SDD Section 5.2)
Approach refinements: The test plan discussed an approach to overall testing; here, further
details are added to that approach. For example, the special testing techniques to be used for
generating test cases are explained.
• Details the testing techniques (e.g., boundary value analysis, equivalence partitioning) and tools
(e.g., Selenium, JUnit).
• Specifies how test results will be analyzed (e.g., automated comparators, manual inspection).
• Example: Apply boundary value analysis for input validation of age fields (0–120 years), as
sketched below.
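A minimal sketch of boundary value analysis for the age-field example above. The validate_age
function is a hypothetical stand-in for the code under test.

```python
# Hypothetical function under test: accepts ages in the range 0-120.
def validate_age(age: int) -> bool:
    return 0 <= age <= 120

# Boundary value analysis: test at, just inside, and just outside each boundary.
boundary_cases = {
    -1: False,   # just below lower boundary
    0: True,     # lower boundary
    1: True,     # just above lower boundary
    119: True,   # just below upper boundary
    120: True,   # upper boundary
    121: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    assert validate_age(age) == expected, f"validate_age({age}) should be {expected}"
```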
Test case identification:
❑ The test cases to be executed for a particular feature/function are identified here.
Moreover, the test procedures are also identified.
❑ The test cases contain input/output information and the test procedures contain the
necessary steps to execute the tests.
❑ Each test design specification is associated with test cases and test procedures.
❑ A test case may be associated with more than one test design specification.

Feature                 Test Case ID    Test Procedure ID
User login validation   TC-001          TP-001
Feature pass/fail criteria:
The criteria for determining whether the test for a feature has passed or failed are described
(a sketch of an automated check follows the example).
Example:
• Pass: Login succeeds within 2 seconds for valid credentials.
• Fail: System crashes or displays incorrect error messages for invalid inputs.
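A rough sketch of how such pass/fail criteria can be encoded as an automated check; login() here
is a hypothetical stand-in for the feature under test, not a real API.

```python
import time

# Hypothetical stand-in for the login feature under test.
def login(username: str, password: str) -> bool:
    time.sleep(0.1)  # simulate processing time
    return username == "alice" and password == "secret"

# Pass criterion: login succeeds within 2 seconds for valid credentials.
start = time.monotonic()
result = login("alice", "secret")
elapsed = time.monotonic() - start

assert result, "FAIL: valid credentials were rejected"
assert elapsed < 2.0, f"FAIL: login took {elapsed:.2f}s (limit is 2s)"
print("PASS: login succeeded within 2 seconds")
```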
TEST CASE SPECIFICATIONS
❑ Since the test design specifications have identified the test cases to be executed, the test
cases now need to be defined with complete specifications.
❑ A test case specification is a detailed document that defines the inputs, expected outputs,
and execution steps for a specific test case.
❑ It ensures that testing is systematic, repeatable, and aligned with the software's
requirements.
Key Components of Test Case Specification
Test Case Specification Identifier:
1. A unique identifier assigned to the test case for easy reference.
2. Includes version information, author details, and revision history.
Purpose:
1. Describes the objective of the test case.
2. Specifies the functionality or feature being validated.
A minimal sketch of a complete specification follows.
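The sketch below captures the components named above, together with the inputs, expected
outputs, and execution steps that a test case specification defines. The field names are
illustrative assumptions; real projects follow their own template (e.g., per IEEE 829).

```python
from dataclasses import dataclass, field

# Hypothetical structure for a test case specification; field names are
# illustrative, not a mandated schema.
@dataclass
class TestCaseSpecification:
    identifier: str          # unique ID, e.g., "TC-001"
    version: str             # version information
    author: str              # author details
    purpose: str             # objective / feature being validated
    inputs: dict             # input data for the test
    expected_output: str     # expected result
    steps: list = field(default_factory=list)  # execution steps

spec = TestCaseSpecification(
    identifier="TC-001",
    version="1.0",
    author="Test Engineer",
    purpose="Validate user login with correct credentials",
    inputs={"username": "alice", "password": "secret"},
    expected_output="User is logged in within 2 seconds",
    steps=["Open login page", "Enter credentials", "Click Login", "Observe result"],
)
```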
TEST CASE PRIORITIZATION TECHNIQUES

3. History-Based Prioritization
• Description: Leverages past data, such as defect rates and test execution history, to prioritize tests that
have historically detected more defects or taken longer to execute.
• Use Case: Effective for legacy systems or modules with known defect patterns.
• Example: Giving higher priority to tests for modules with a history of frequent bugs (see the
sketch after this list).
4. Version-Based Prioritization
• Description: Focuses on test cases related to new or modified code in the latest software version,
particularly for regression testing.
• Use Case: Ensures that recent changes do not introduce new defects or break existing functionality.
• Example: Prioritizing tests for updated APIs after a system upgrade.
5. Coverage-Based Prioritization
• Description: Prioritizes test cases based on their ability to maximize code coverage (e.g., statement, branch,
or path coverage).
• Use Case: Commonly used in automated testing to ensure critical parts of the code are tested first.
• Example: Executing tests that cover 80% of statements before edge-case scenarios.
6. Time-Based Prioritization
• Description: Allocates time slots for executing test cases based on their estimated duration or
difficulty level, ensuring all tests fit within the available time frame.
• Use Case: Suitable for projects with tight schedules and limited time for testing cycles.
7. Cost-Based Prioritization
• Description: Considers the cost of executing each test case (e.g., time, resources, and setup effort)
and prioritizes those that provide the most value at the lowest cost.
• Use Case: Useful when balancing budget constraints with testing effectiveness.
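A minimal sketch of history-based prioritization, assuming a hypothetical per-test defect
history; a real tool would draw these counts from a defect tracker or past execution logs.

```python
# Hypothetical execution history: (test case ID, defects it found in past runs).
history = [
    ("TC-001", 5),   # login tests have found many defects
    ("TC-010", 1),
    ("TC-007", 3),
    ("TC-004", 0),
]

# History-based prioritization: run the tests with the richest
# defect-finding record first.
prioritized = sorted(history, key=lambda t: t[1], reverse=True)
for test_id, defects_found in prioritized:
    print(f"{test_id}: {defects_found} past defects")
# TC-001 runs first, TC-004 last.
```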
Measuring the Effectiveness of a Prioritized Test Suite
To evaluate the effectiveness of a prioritized test suite, QA teams rely on metrics that quantify
coverage, fault detection rates, and efficiency.
1. Average Percentage of Faults Detected (APFD)
• Formula: APFD = 1 − (F1 + F2 + … + Fm) / (n × m) + 1 / (2n)
• Where:
• n = Total test cases in the suite.
• m = Total faults in the software.
• Fi = Position of the first test case detecting the i-th fault.
• Example: For n = 5 tests and m = 2 faults first detected at positions F1 = 1 and F2 = 3,
APFD = 1 − (1 + 3)/(5 × 2) + 1/(2 × 5) = 0.7 (computed in the sketch after these metrics).
2. Test Coverage Metrics
• Definition: Measures the percentage of code/requirements covered by tests.
• Statement Coverage = (Number of statements executed / Total number of statements) × 100%
• Branch/Condition Coverage = (Number of branches/conditions exercised / Total number of branches/conditions) × 100%
3. Defect Detection Rate (DDR)
• Formula (one common form): DDR = (Number of defects detected by the executed tests / Total number of known defects) × 100%
4. Cost/Time Efficiency
• Formula (one common form): Time saved = (Execution time of full suite − Execution time of prioritized suite) / (Execution time of full suite) × 100%
• Target: Reduce execution time by 30–50% while retaining fault detection capability.
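A small sketch computing APFD for a prioritized suite using the formula above; the fault
positions are the ones from the worked example and are invented for illustration.

```python
# Hypothetical record of which prioritized test (by position, 1-based)
# first detects each fault.
n = 5  # total test cases in the suite
fault_first_detected_at = [1, 3]  # F_i for each fault
m = len(fault_first_detected_at)  # total faults

# APFD = 1 - (F1 + ... + Fm) / (n * m) + 1 / (2n)
apfd = 1 - sum(fault_first_detected_at) / (n * m) + 1 / (2 * n)
print(f"APFD = {apfd:.2f}")  # 0.70; higher means faults are detected earlier
```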
SOFTWARE QUALITY MANAGEMENT
❑ Software Quality Management (SQM) is a comprehensive process aimed at ensuring that software
products meet specified quality standards and fulfill user expectations throughout their development
lifecycle.
❑ This involves a systematic approach that integrates quality assurance, quality planning, and quality
control into the software development process.
• Customer problem metrics: This metric measures the problems that customers face while
using the product. The problems may be valid defects, usability problems, unclear
documentation, etc. The problem metric is usually expressed in terms of problems per user-month
(PUM), as given below:
PUM = Total problems that customers reported for a time period / Total number of license-months of the software during the period
where the number of license-months = number of installed licenses of the software × number of months in the period.
• Customer Satisfaction Metrics: Assesses user satisfaction through surveys using a five-point scale (e.g.,
"Very Satisfied" to "Very Dissatisfied"). Metrics include percentages of satisfied or very satisfied customers.
• Fix Quality: Evaluates fixes that fail to resolve issues or introduce new defects.
Software Quality Assurance (SQA) models provide structured frameworks to evaluate and enhance
software quality. Organizations must adopt a model for standardizing their SQA activities.
Many SQA models have been developed in recent years. Some of them are:
1. ISO/IEC 9126
ISO/IEC 9126 was an international standard for software product quality evaluation, later superseded
by ISO/IEC 25010. Its structure remains influential for defining quality attributes.
Key Quality Characteristics
The model classifies software quality into six characteristics, each with sub-characteristics:
1. Functionality: Suitability, accuracy, interoperability, security.
2. Reliability: Maturity, fault tolerance, recoverability.
3. Usability: Learnability, operability, accessibility.
4. Efficiency: Time behavior, resource utilization.
5. Maintainability: Analyzability, changeability, testability.
6. Portability: Adaptability, installability, replaceability.
2. Capability Maturity Model (CMM)
Developed by the Software Engineering Institute (SEI), CMM assesses organizational process
maturity across five levels:
Five Maturity Levels
1. Initial (Level 1): Ad-hoc processes, unpredictable outcomes.
2. Managed (Level 2): Basic project management (cost, schedule).
3. Defined (Level 3): Standardized, documented processes.
4. Quantitatively Managed (Level 4): Data-driven process control.
5. Optimizing (Level 5): Continuous process improvement.
Benefits:
• Reduces defects and rework through standardization.
• Enhances predictability and resource efficiency.
• Although criticized for rigidity, CMM remains foundational for process improvement.
3. Software Total Quality Management (TQM)
TQM applies Deming’s principles to software development, emphasizing organization-wide quality
commitment.
Key Principles
•Customer focus: Align development with user needs.
•Continuous improvement: Iterative refinement of processes.
•Employee involvement: Quality as a shared responsibility.
Implementation in SDLC
•Integrates quality checks at every phase (requirements, design, testing).
•Uses metrics like defect density and customer-reported issues for feedback.
Deming’s 14 Points: Includes "drive out fear," "cease dependence on inspection," and "institute
training," fostering a culture of quality.
4. Six Sigma
Six Sigma is a data-driven methodology for defect reduction. It complements SQA through:
DMAIC Framework
1. Define: Project goals and customer requirements.
2. Measure: Baseline performance metrics.
3. Analyze: Root causes of defects.
4. Improve: Implement solutions.
5. Control: Sustain improvements.
Relevance to SQA:
• Reduces variation in software processes (e.g., coding errors).
• Often integrated with CMM for higher maturity levels.
DEBUGGING
Debugging is not a part of the testing domain; therefore, debugging is not testing.
It is a separate process performed as a consequence of testing.
However, the testing process is considered a waste if debugging is not performed after it. The testing
phase in the SDLC aims to find as many bugs as possible, remove the errors, and build confidence in the
software quality. In this sense, the testing phase can be divided into two parts:
1. Preparation of test cases, executing them, and observing the output. This is known as testing.
2. If the output of testing is not as expected, then a failure has occurred. The goal now is to find
the bugs that caused the failure and remove the errors present in the software. This is called debugging.
Debugging is a two-part process. It begins with some indication of the existence of an error. It is the
activity of:
1. Determining the exact nature of the bug and location of the suspected
error within the program.
2. Fixing or repairing the error.
Debugging Process:
The goal of debugging process is to determine the exact nature of failure with the help of
symptoms identified, locate the bugs and errors, and finally correct it.
The debugging process is explained in the following steps:
Debugging Process
❑ Check the result of the output produced by executing the test cases prepared in the testing
process. If the actual output matches the expected output, the results are successful.
Otherwise, there is a failure in the output, which needs to be analysed.
❑ Debugging is performed to analyse the failure that has occurred: we identify the
cause of the problem and correct it. The symptoms associated with the present failure
may not be sufficient to find the bug. In that case, some additional testing is
required so that we get more clues to analyse the causes of the failure.
❑ If symptoms are sufficient to provide clues about the bug, then the cause of failure is
identified. The bug is traced to find the actual location of the error.
❑ Once we find the actual location of the error, the bug is removed with corrections.
❑ Regression testing is performed after the bug has been removed through corrections in the software.
Thus, to validate the corrections, regression testing is necessary after every modification (see the sketch below).
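A toy sketch of the test → debug → regression cycle described above; the buggy discount
function and its fix are invented for illustration.

```python
# Hypothetical buggy implementation: a volume discount that should apply
# at quantity >= 10 was written with a strict comparison.
def discounted_total_buggy(unit_price: float, qty: int) -> float:
    rate = 0.1 if qty > 10 else 0.0   # BUG: the boundary qty == 10 gets no discount
    return unit_price * qty * (1 - rate)

# Corrected version produced by debugging: the error was located in the
# comparison operator and repaired.
def discounted_total(unit_price: float, qty: int) -> float:
    rate = 0.1 if qty >= 10 else 0.0  # FIX: include the boundary
    return unit_price * qty * (1 - rate)

# Test cases: (arguments, expected output).
cases = [((10.0, 10), 90.0), ((10.0, 5), 50.0), ((10.0, 11), 99.0)]

def run_checks(fn) -> bool:
    return all(abs(fn(*args) - expected) < 1e-9 for args, expected in cases)

print("before fix:", run_checks(discounted_total_buggy))  # False -> failure observed, debug
print("after fix: ", run_checks(discounted_total))        # True  -> regression checks pass
```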