Module-05
Concept of Quality:
Objective Assessment:
Development Perspective:
SOFTWARE ENGINEERING & PROJECT MANAGEMENT
Quality Concerns:
Key points in the Step Wise framework where quality is particularly emphasized:
o Review the overall quality aspects of the project plan at this stage.
General Expectation:
System Requirements:
Measuring Quality:
Good Measure: Relates the number of units to the maximum possible (e.g., faults
per thousand lines of code).
Clarification Through Measurement: Helps to define and communicate what
quality really means, effectively answering "how do we know when we have been
successful?"
Direct Measurement: Measures the quality itself (e.g., faults per thousand lines
of code).
Indirect Measurement: Measures an indicator of the quality (e.g., number of user
inquiries at a help desk as an indicator of usability).
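A direct measure such as fault density can be computed very simply. The sketch below is illustrative only; the figures are invented, not taken from the text:

```python
def fault_density(faults_found: int, lines_of_code: int) -> float:
    """Direct quality measure: faults per thousand lines of code (KLOC)."""
    return faults_found / (lines_of_code / 1000)

# Illustrative figures: 46 faults found in a 23,000-line system.
print(fault_density(46, 23_000))  # 2.0 faults per KLOC
```

Relating the fault count to system size in this way makes results comparable across products of different sizes, which a raw fault count does not.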
Setting Targets:
Impact on Project Team: Quality measurements set targets for team members.
Meaningful Improvement: Ensure that improvements in measured quality are
meaningful.
o Example: Counting errors found in program inspections may not be
meaningful if errors are allowed to pass to the inspection stage rather than
being eradicated earlier.
1. Definition/Description
o Definition: Clear definition of the quality characteristic.
o Description: Detailed description of what the quality characteristic entails.
2. Scale
o Unit of Measurement: The unit used to measure the quality characteristic
(e.g., faults per thousand lines of code).
3. Test
o Practical Test: The method or process used to test the extent to which the
quality attribute exists.
4. Minimally Acceptable
o Worst Acceptable Value: The lowest acceptable value, below which the
product would be rejected.
5. Target Range
o Planned Range: The range of values within which it is planned that the
quality measurement value should lie.
6. Current Value
o Now: The value that applies currently to the quality characteristic.
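The six elements above can be collected into a simple record. A minimal sketch in Python (the class and field names are my own, not from any standard):

```python
from dataclasses import dataclass

@dataclass
class QualityAttribute:
    """One measurable quality characteristic, following the six elements above."""
    name: str
    definition: str                      # 1. what the characteristic means
    scale: str                           # 2. unit of measurement
    test: str                            # 3. how the attribute is measured
    minimally_acceptable: float          # 4. worst acceptable value
    target_range: tuple[float, float]    # 5. planned range (low, high)
    current_value: float                 # 6. value that applies now

availability = QualityAttribute(
    name="Availability",
    definition="Percentage of a time interval that the system is usable",
    scale="percent",
    test="uptime / (uptime + downtime) over a specified period",
    minimally_acceptable=99.0,
    target_range=(99.9, 100.0),
    current_value=99.5,
)
print(availability.minimally_acceptable <= availability.current_value)  # True
```

Recording each attribute in this structured form makes the "how do we know when we have been successful?" question answerable by comparing the current value against the target range.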
1. Availability:
o Definition: Percentage of a particular time interval that a system is usable.
o Scale: Percentage (%).
o Test: Measure the system's uptime versus downtime over a specified
period.
o Minimally Acceptable: Typically high availability is desirable; specifics
depend on system requirements.
o Target Range: E.g., 99.9% uptime.
2. Mean Time Between Failures (MTBF):
o Definition: Total service time divided by the number of failures.
These measurements help quantify and assess the reliability and maintainability of
software systems, ensuring they meet desired quality standards.
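Both measures follow directly from their definitions. A small sketch with invented figures for a month of operation:

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Percentage of a time interval during which the system was usable."""
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total

def mtbf(total_service_hours: float, failures: int) -> float:
    """Mean Time Between Failures: total service time / number of failures."""
    return total_service_hours / failures

# Illustrative month: 719 hours up, 1 hour down, 2 failures observed.
print(round(availability(719, 1), 2))  # 99.86 (percent)
print(mtbf(720, 2))                    # 360.0 (hours)
```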
ISO 9126 is a significant standard in defining software quality attributes and providing a
framework for assessing them. Here are the key aspects and characteristics defined by
ISO 9126:
1. Functionality:
o Definition: The functions that a software product provides to satisfy user
needs.
o Sub-characteristics: Suitability, accuracy, interoperability, security,
compliance.
2. Reliability:
o Definition: The capability of the software to maintain its level of
performance under stated conditions.
o Sub-characteristics: Maturity, fault tolerance, recoverability.
3. Usability:
o Definition: The effort needed to use the software.
o Sub-characteristics: Understandability, learnability, operability,
attractiveness.
4. Efficiency:
o Definition: The ability to use resources in relation to the amount of work
done.
o Sub-characteristics: Time behavior, resource utilization.
5. Maintainability:
o Definition: The effort needed to make changes to the software.
o Sub-characteristics: Analysability, changeability, stability, testability.
6. Portability:
o Definition: The ability of the software to be transferred from one
environment to another.
Quality in Use
Definition: Focuses on how well the software supports specific user goals in a
specific context of use.
Elements: Effectiveness, productivity, safety, satisfaction.
ISO 14598
Compliance
Interoperability
Definition: Refers to the ability of the software to interact with other systems
effectively.
Clarification: ISO 9126 uses "interoperability" instead of "compatibility" to avoid
confusion with another characteristic called "replaceability".
Importance: Ensures seamless integration and communication between different
software systems or components.
Sub-characteristics: Includes aspects like interface compatibility, data exchange
capabilities, and standards compliance to facilitate effective interaction.
Maturity
Recoverability
Definition: Refers to the capability of the software to restore the system to its
normal operation after a failure or disruption.
Importance of Distinction
Security: Focuses on access control and protecting the system from unauthorized
access, ensuring confidentiality, integrity, and availability.
Recoverability: Focuses on system resilience and the ability to recover from
failures, ensuring continuity of operations.
Learnability
Definition: Refers to the ease with which users can learn to operate the
software and accomplish basic tasks.
Focus: Primarily on the initial phase of user interaction with the software.
Measurement: Assessed by the time it takes for new users to become
proficient with the software, often measured in training hours or tasks
completed.
Operability
Definition: Refers to the ease with which users can operate and navigate the
software efficiently.
Focus: Covers the overall usability of the software during regular use and
over extended periods.
Measurement: Assessed by the user's experience with everyday tasks and
efficiency in completing them.
Importance of Distinction
Learnability: Critical for software that requires quick adoption and minimal
training, ensuring users can start using the software effectively from the
outset.
Operability: Crucial for software used intensively or for extended periods,
focusing on efficiency, ease of navigation, and user comfort.
Analysability
Definition: Refers to the ease with which the cause of a failure in the software can
be determined.
Focus: Helps in diagnosing and understanding software failures or issues quickly
and accurately.
Importance: Facilitates efficient debugging and troubleshooting during software
maintenance and support phases.
Changeability
Definition: Also known as flexibility, changeability refers to the ease with which
software can be modified or adapted to changes in requirements or environment.
Focus: Emphasizes the software's ability to accommodate changes without
introducing errors or unexpected behaviors.
Stability
Clarification of Terms
Portability Compliance
Definition: Refers to the adherence of the software to standards that facilitate its
transferability and usability across different platforms or environments.
Focus: Ensures that the software can run efficiently and effectively on various
hardware and software configurations without needing extensive modifications.
Importance: Facilitates broader deployment and reduces dependency on specific
hardware or software configurations.
Replaceability
Coexistence
Definition: Refers to the ability of the software to peacefully share resources and
operate alongside other software components within the same environment.
Focus: Does not necessarily involve direct data exchange but ensures
compatibility and non-interference with other software components.
Importance: Enables integration of the software into complex IT ecosystems
without conflicts or performance degradation.
Clarification of Terms
ISO 9126 provides structured guidelines for assessing and managing software quality
characteristics based on the specific needs and requirements of the software product. It
emphasizes the variation in importance of these characteristics depending on the type and
context of the software product being developed.
Once the software product requirements are established, ISO 9126 suggests the following
steps:
2. Define Metrics and Measurements: Establish measurable criteria and metrics for
evaluating each quality characteristic, ensuring they align with the defined
objectives and user expectations.
3. Plan Quality Assurance Activities: Develop a comprehensive plan for quality
assurance activities, including testing, verification, and validation processes to
ensure adherence to quality standards.
4. Monitor and Improve Quality: Continuously monitor software quality
throughout the development lifecycle, identifying areas for improvement and
taking corrective actions as necessary.
5. Document and Report: Document all quality-related activities, findings, and
improvements, and provide clear and transparent reports to stakeholders on
software quality status and compliance.
Reliability: Critical for safety-critical systems where failure can have severe
consequences. Measures like mean time between failures (MTBF) are essential.
Efficiency: Important for real-time systems where timely responses are crucial.
Measures such as response time are key indicators.
Internal measurements like code execution times can help predict external
qualities like response time during software design and development.
Predicting external qualities from internal measurements is challenging and often
requires validation in the specific environment where the software will operate.
ISO 9126 acknowledges that correlating internal code metrics to external quality
characteristics like reliability can be difficult.
This challenge is addressed in a technical report rather than a full standard,
indicating ongoing research and development in this area.
A common method for evaluating and comparing software products is to score them
against their quality characteristics. The approach can be summarised as follows:
are still influenced by the criteria set and the relative importance assigned to each
quality characteristic.
4. Community Needs: Understanding the specific needs and expectations of the user
community is essential. The assessment should align closely with the community's
goals and the functionalities they require from the software tools being evaluated.
5. Transparency and Feedback: Providing transparency in the assessment process
and gathering feedback from community members can enhance the credibility and
relevance of the evaluation results. This helps ensure that the assessment
adequately reflects the needs and perspectives of the community.
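A comparison of this kind is often implemented as a weighted-score matrix: each quality characteristic gets an importance weight reflecting the community's needs, each product gets a rating per characteristic, and the weighted sum ranks the candidates. A minimal sketch; the weights, ratings, and product names are invented for illustration:

```python
# Importance weights for the chosen quality criteria (sum to 1.0).
weights = {"usability": 0.4, "reliability": 0.35, "portability": 0.25}

# Ratings for each candidate product on a 0-10 scale (illustrative).
products = {
    "Product A": {"usability": 8, "reliability": 6, "portability": 7},
    "Product B": {"usability": 6, "reliability": 9, "portability": 5},
}

def weighted_score(ratings: dict) -> float:
    """Sum of (weight x rating) over all quality characteristics."""
    return sum(weights[c] * ratings[c] for c in weights)

ranking = sorted(products, key=lambda p: weighted_score(products[p]), reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(products[name]):.2f}")
```

Note how the final ranking depends entirely on the weights chosen, which is exactly the sensitivity to "criteria set and relative importance" discussed above.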
Understanding the differences between product metrics and process metrics is crucial in
software development:
1. Product Metrics:
o Purpose: Measure the characteristics of the software product being
developed.
o Examples:
Size Metrics: Such as Lines of Code (LOC) and Function Points,
which quantify the size or complexity of the software.
Effort Metrics: Like Person-Months (PM), which measure the
effort required to develop the software.
Time Metrics: Such as the duration in months or other time units
needed to complete the development.
2. Process Metrics:
o Purpose: Measure the effectiveness and efficiency of the development
process itself.
o Examples:
Differences:
Focus: Product metrics focus on the characteristics of the software being built
(size, effort, time), while process metrics focus on how well the development
process is performing (effectiveness, efficiency, quality).
Use: Product metrics are used to gauge the attributes of the final software product,
aiding in planning, estimation, and evaluation. Process metrics help in assessing
and improving the development process itself, aiming to enhance quality,
efficiency, and productivity.
Application: Product metrics are typically applied during and after development
phases to assess the product's progress and quality. Process metrics are applied
throughout the development lifecycle to monitor and improve the development
process continuously.
By employing both types of metrics effectively, software development teams can better
manage projects, optimize processes, and deliver high-quality software products that
meet user expectations.
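The size, effort, and time metrics above can be combined into simple derived measures used in planning and estimation. A sketch with invented figures:

```python
# Illustrative product metrics for one project (figures invented).
size_loc = 40_000        # size metric: lines of code
effort_pm = 20.0         # effort metric: person-months
duration_months = 8.0    # time metric: elapsed development time

# Derived measures that the raw metrics make possible:
productivity = size_loc / effort_pm          # LOC per person-month
avg_team_size = effort_pm / duration_months  # people, on average

print(productivity)    # 2000.0
print(avg_team_size)   # 2.5
```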
Product quality management focuses on evaluating and ensuring the quality of the
software product itself. This approach is typically more straightforward to implement and
measure after the software has been developed.
Aspects:
o Metrics may not always capture the full complexity or performance of the
final integrated product.
Process quality management focuses on assessing and improving the quality of the
development processes used to create the software. This approach aims to reduce errors
and improve efficiency throughout the development lifecycle.
Aspects:
While product and process quality management approaches have distinct focuses,
they are complementary.
Effective software development teams often integrate both approaches to achieve
optimal results.
By improving process quality, teams can enhance product quality metrics, leading
to more reliable, efficient, and user-friendly software products.
ISO 9001:2000, now superseded by newer versions but still relevant in principle, outlines
standards for Quality Management Systems (QMS). Here’s a detailed look at its key
aspects and how it applies to software development:
ISO 9001:2000 is part of the ISO 9000 series, which sets forth guidelines and
requirements for implementing a Quality Management System (QMS).
The focus of ISO 9001:2000 is on ensuring that organizations have effective processes in
place to consistently deliver products and services that meet customer and regulatory
requirements.
Key Elements:
1. Fundamental Features:
o Describes the basic principles of a QMS, including customer focus,
leadership, involvement of people, process approach, and continuous
improvement.
o Emphasizes the importance of a systematic approach to managing
processes and resources.
Ensure that subcontractors and external vendors also adhere to quality standards
through effective quality assurance practices.
Perceived Value: Critics argue that ISO 9001 certification does not guarantee the
quality of the end product but rather focuses on the process.
Cost and Complexity: Obtaining and maintaining certification can be costly and
time-consuming, which may pose challenges for smaller organizations.
Focus on Compliance: Some organizations may become overly focused on
meeting certification requirements rather than improving overall product quality.
Despite these criticisms, ISO 9001:2000 provides a structured framework that, when
implemented effectively, can help organizations improve their software development
processes and overall quality management practices.
1. Customer Focus:
o Understanding and meeting customer requirements to enhance satisfaction.
2. Leadership:
o Providing unity of purpose and direction for achieving quality objectives.
3. Involvement of People:
o Engaging employees at all levels to contribute effectively to the QMS.
4. Process Approach:
Detailed Requirements
1. Documentation:
o Maintaining documented objectives, procedures (in a quality manual),
plans, and records that demonstrate adherence to the QMS.
o Implementing a change control system to manage and update
documentation as necessary.
2. Management Responsibility:
o Top management must actively manage the QMS and ensure that processes
conform to quality objectives.
3. Resource Management:
o Ensuring adequate resources, including trained personnel and infrastructure,
are allocated to support QMS processes.
4. Production and Service Delivery:
o Planning, reviewing, and controlling production and service delivery
processes to meet customer requirements.
o Communicating effectively with customers and suppliers to ensure clarity
and alignment on requirements.
5. Measurement, Analysis, and Improvement:
o Implementing measures to monitor product conformity, QMS effectiveness,
and process improvements.
Historical Perspective
1. Definition:
o TQM focuses on continuous improvement of processes through
measurement and redesign.
o It advocates that organizations continuously enhance their processes to
achieve higher levels of quality.
1. Objective:
The SEI Capability Maturity Model (CMM) is a framework developed by the Software
Engineering Institute (SEI) to assess and improve the maturity of software development
processes within organizations.
It categorizes organizations into five maturity levels based on their process capabilities
and practices:
1. Level 1: Initial
o Characteristics:
Chaotic and ad hoc development processes.
Lack of defined processes or management practices.
Relies heavily on individual heroics to complete projects.
o Outcome:
Project success depends largely on the capabilities of individual
team members.
High risk of project failure or delays.
2. Level 2: Repeatable
o Characteristics:
Basic project management practices like planning and tracking
costs/schedules are in place.
Processes are somewhat documented and understood by the team.
o Outcome:
Organizations can repeat successful practices on similar projects.
Improved project consistency and some level of predictability.
3. Level 3: Defined
o Characteristics:
Processes for both management and development activities are
defined and documented.
Roles and responsibilities are clear across the organization.
Training programs are implemented to build employee capabilities.
Systematic reviews are conducted to identify and fix errors early.
o Outcome:
Consistent and standardized processes across the organization.
Better management of project risks and quality.
4. Level 4: Managed
o Characteristics:
Processes are quantitatively managed using metrics.
Quality goals are set and measured against project outcomes.
Process metrics are used to improve project performance.
o Outcome:
Focus on managing and optimizing processes to meet quality and
performance goals.
Continuous monitoring and improvement of project execution.
5. Level 5: Optimizing
o Characteristics:
Continuous process improvement is ingrained in the organization's
culture.
Process metrics are analyzed to identify areas for improvement.
Lessons learned from projects are used to refine and enhance
processes.
Innovation and adoption of new technologies are actively pursued.
o Outcome:
Continuous innovation and improvement in processes.
SEI CMM has been instrumental not only in enhancing the software development
practices within organizations but also in establishing benchmarks for industry standards.
Benefits of CMMI
ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability
dEtermination), is a standard for assessing and improving software development
processes. Here are the key aspects of ISO 15504 process assessment:
Process Attributes
Nine Process Attributes: ISO 15504 assesses processes based on nine attributes,
which are:
1. Process Performance (PP): Measures the achievement of process-specific
objectives.
2. Performance Management (PM): Evaluates how well the process is
managed and controlled.
3. Work Product Management (WM): Assesses the management of work
products like requirements specifications, design documents, etc.
4. Process Definition (PD): Focuses on how well the process is defined and
documented.
5. Process Deployment (PR): Examines how the process is deployed within
the organization.
6. Process Measurement (PME): Evaluates the use of measurements to
manage and control the process.
7. Process Control (PC): Assesses the monitoring and control mechanisms in
place for the process.
Alignment with CMMI: ISO 15504 and CMMI share similar goals of assessing
and improving software development processes. While CMMI is more
comprehensive and applicable to a broader range of domains, ISO 15504 provides
a structured approach to process assessment specifically tailored to software
development.
When assessors are judging the degree to which a process attribute is being fulfilled,
they allocate one of the following scores:
Here’s how evidence might be identified and evaluated for assessing the process
attributes, taking the example of requirement analysis processes:
Importance of Evidence
Here’s a structured approach, drawing from CMMI principles, to address these issues and
improve process maturity:
1. Resource Overcommitment:
o Issue: Lack of proper liaison between the Head of Software Engineering
and Project Engineers leads to resource overcommitment across new
systems and maintenance tasks simultaneously.
o Impact: Delays in software deliveries due to stretched resources.
2. Requirements Volatility:
o Issue: Initial testing of prototypes often reveals major new requirements.
o Impact: Scope creep and changes lead to rework and delays.
3. Change Control Challenges:
o Issue: Lack of proper change control results in increased demands for
software development beyond original plans.
o Impact: Increased workload and project delays.
4. Delayed System Testing:
o Issue: Completion of system testing is delayed due to a high volume of bug
fixes.
o Impact: Delays in product release and customer shipment.
Six Sigma
Here’s how UVW can adopt and benefit from Six Sigma:
1. Define:
o Objective: Clearly define the problem areas and goals for improvement.
o Action: Identify critical processes such as software development, testing,
and deployment where defects and variability are impacting quality and
delivery timelines.
2. Measure:
o Objective: Quantify current process performance and establish baseline
metrics.
o Action: Use statistical methods to measure defects, cycle times, and other
relevant metrics in software development and testing phases.
3. Analyse:
o Objective: Identify root causes of defects and variability in processes.
o Action: Conduct thorough analysis using tools like root cause analysis,
process mapping, and statistical analysis to understand why defects occur
and where process variations occur.
4. Improve:
Focus Areas:
o Addressing late deliveries due to resource overcommitment.
o Managing requirements volatility and change control effectively.
o Enhancing testing processes to reduce defects and delays in system testing
phases.
Tools and Techniques:
o Use of DMAIC (Define, Measure, Analyse, Improve, Control) for existing
process improvements.
o Application of DMADV (Define, Measure, Analyse, Design, Verify) for
new process or product development to ensure high-quality outputs from
the outset.
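The Measure phase above typically quantifies process performance as Defects Per Million Opportunities (DPMO), the core Six Sigma metric. A minimal sketch; the figures are invented for illustration:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities, the core Six Sigma process measure.

    A 'six sigma' process corresponds to roughly 3.4 DPMO."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Illustrative: 30 defects found across 500 builds,
# each build having 20 opportunities for a defect.
print(dpmo(30, 500, 20))  # 3000.0 DPMO
```

Tracking DPMO before and after the Improve phase gives UVW an objective way to confirm that process changes actually reduced defect rates.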
Cost Savings: Reduced rework and operational costs associated with defects.
The discussion highlights several key themes in software quality improvement over time,
emphasizing shifts in practices and methodologies:
1. Increasing Visibility:
o Early practices like Gerald Weinberg's 'egoless programming' promoted
code review among programmers, enhancing visibility into each other's
work.
o Modern practices extend this visibility to include walkthroughs,
inspections, and formal reviews at various stages of development, ensuring
early detection and correction of defects.
2. Procedural Structure:
o Initially, software development lacked structured methodologies, but over
time, methodologies with defined processes for every stage (like Agile,
Waterfall, etc.) have become prevalent.
o Structured programming techniques and 'clean-room' development further
enforce procedural rigor to enhance software quality.
3. Checking Intermediate Stages:
o Traditional approaches involved waiting until a complete, albeit imperfect,
version of software was ready for debugging.
o Contemporary methods emphasize checking and validating software
components early in development, reducing reliance on predicting external
quality from early design documents.
4. Inspections:
o Inspections are critical in ensuring quality at various development stages,
not just in coding but also in documentation and test case creation.
Statistics are maintained so that the effectiveness of the inspection process can be
monitored.
The late 1960s marked a pivotal period in software engineering where the complexity of
software systems began to outstrip the capacity of human understanding and testing
capabilities. Here are the key developments and concepts that emerged during this time:
Overall, these methodologies aimed to address the challenges posed by complex software
systems by promoting structured, systematic development processes that prioritize
correctness from the outset rather than relying on post hoc testing and debugging.
Formal methods
Formal methods apply mathematically based techniques to the specification and
verification of software, while software quality circles bring small groups of staff
together regularly to identify and solve quality problems in their own work. The two
approaches are complementary: one attacks defects analytically, the other
organisationally.
The compilation of most probable error lists is a proactive approach to improving
software development processes. It involves the following steps:
This approach aligns well with quality circles and other continuous improvement
methodologies by fostering a culture of proactive problem-solving and learning from past
experiences.
Lessons Learned reports and Post Implementation Reviews (PIRs) are crucial for
organizational learning and continuous improvement in project management. Here's a
breakdown of these two types of reports:
Purpose: A PIR evaluates the effectiveness of the implemented system rather than
the project process itself.
Timing: Takes place after a significant period of operation of the new system
(typically after it has been in use for some time).
Conduct: Carried out by someone who was not directly involved in the project, to
ensure neutrality and objectivity.
Content: A PIR includes:
o System Performance: How well the system meets its intended objectives
and user needs.
o User Feedback: Feedback from users on system usability and
functionality.
o Improvement Recommendations: Changes or enhancements suggested to
improve system effectiveness.
Audience: The audience typically includes stakeholders who will benefit from
insights into the system’s operational performance and areas for improvement.
Outcome: Recommendations from a PIR often lead to changes aimed at
enhancing the effectiveness and efficiency of the system.
Testing
The text discusses the planning and management of testing in software development,
highlighting the challenges of estimating the amount of testing required due to unknowns,
such as the number of bugs left in the code.
1. Quality Judgement:
o The final judgement of software quality is based on its correct execution.
2. Testing Challenges:
o Estimating the remaining testing work is difficult due to unknown bugs in
the code.
3. V-Process Model:
o Introduced as an extension of the waterfall model.
o Diagrammatic representation provided in Figure 13.5.
o Stresses the necessity for validation activities matching the project creation
activities.
4. Validation Activities:
o Each development step has a matching validation process.
o Defects found can cause a loop back to the corresponding development
stage for rework.
5. Discrepancy Handling:
o Feedback should occur only when there is a discrepancy between specified
requirements and implementation.
The V-process model provides a structure for making early planning decisions
about testing.
Decisions can be made about the types and amounts of testing required from the
beginning of the project.
Off-the-Shelf Software:
If software is acquired off-the-shelf, certain stages like program design and coding
are not relevant.
Consequently, program testing would not be necessary in this scenario.
1. Objectives:
o Both techniques aim to remove errors from software.
2. Definitions:
o Verification: Ensures outputs of one development phase conform to the
previous phase's outputs.
o Validation: Ensures fully developed software meets its requirements
specification.
3. Objectives Clarified:
o Verification Objective: Check if artifacts produced after a phase conform to
those from the previous phase (e.g., design documents conform to requirements
specifications).
o Validation Objective: Check if the fully developed and integrated software
satisfies customer requirements.
4. Techniques:
o Verification Techniques: Review, simulation, and formal verification.
o Validation Techniques: Primarily based on product testing.
5. Process Stages:
Testing activities
The text provides an overview of test case design approaches, levels of testing, and main
testing activities in software development.
It emphasizes the differences between black-box and white-box testing, the stages of
testing (unit, integration, system), and the activities involved in the testing process.
1. Black-Box Testing:
o Test cases are designed using only the functional specification.
o Based on input/output behavior without knowledge of internal structure.
o Also known as functional testing or requirements-driven testing.
2. White-Box Testing:
o Test cases are designed based on the analysis of the source code.
o Requires knowledge of the internal structure.
o Also known as structural testing or structure-driven testing.
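The contrast between the two approaches can be sketched in a few lines. The `absolute` function and all test values below are illustrative, not taken from the text: black-box cases come purely from the specification, while white-box cases are chosen so that every branch of the source is executed.

```python
# Hypothetical unit under test: returns the magnitude of x.
def absolute(x: int) -> int:
    if x < 0:
        return -x
    return x

# Black-box (functional) cases: derived only from the specification
# "return the magnitude of x", with no knowledge of the code.
black_box_cases = [(5, 5), (-5, 5), (0, 0)]

# White-box (structural) cases: derived from the source, chosen so that
# both the "x < 0" branch and the fall-through branch are exercised.
white_box_cases = [(-1, 1), (1, 1)]

for inp, expected in black_box_cases + white_box_cases:
    assert absolute(inp) == expected
```

Note that the white-box suite is smaller here because two cases already cover both branches; the black-box suite instead reflects the input classes visible in the specification.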
Levels of Testing
1. Unit Testing:
o Tests individual components or units of a program.
o Conducted as soon as the coding for each module is complete.
o Allows for parallel activities since modules are tested separately.
o Referred to as testing in the small.
2. Integration Testing:
o Checks for errors in interfacing between modules.
o Units are integrated step by step and tested after each integration.
o Referred to as testing in the large.
3. System Testing:
o Tests the fully integrated system to ensure it meets requirements.
o Conducted after integration testing.
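A minimal sketch of "testing in the small": a single module is exercised in isolation as soon as its coding is complete. The discount calculator and its test values are hypothetical examples, not from the text.

```python
# Hypothetical module under test: a small discount calculator.
def discounted_price(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit test: exercises only this module, independently of the rest
# of the system, so several such tests can run in parallel.
def unit_test_discounted_price() -> None:
    assert discounted_price(200.0, 25) == 150.0   # typical case
    assert discounted_price(99.99, 0) == 99.99    # boundary: no discount
    try:
        discounted_price(100.0, 150)              # invalid input rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

unit_test_discounted_price()
```

Integration testing would then combine this module with, say, an order-total module and test the interface between them; system testing would exercise the fully assembled application against its requirements.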
Testing Activities
1. Test Planning:
o Involves determining relevant test strategies and planning for any required
test bed.
o Test bed setup is crucial, especially for embedded applications.
2. Test Suite Design:
o Planned testing strategies are used to design the set of test cases (test suite).
3. Test Case Execution and Result Checking:
o Each test case is executed, and results are compared with expected
outcomes.
o Failures are noted for test reporting when there is a mismatch between
actual and expected results.
The text describes the detailed process and activities involved in software test reporting,
debugging, error correction, defect retesting, regression testing, and test closure.
It highlights the importance of formal issue recording, the adjudication of issues, and
various testing strategies to ensure software quality.
Test Reporting
1. Issue Raising:
o Report discrepancies between expected and actual results.
2. Issue Recording:
o Formal recording of issues and their history.
3. Review Body Decisions:
o Dismissal: The tester has misunderstood the requirement; no change is
needed.
o Fault Identification: A genuine fault is confirmed; developers must
correct it.
o Incorrect Requirement: The requirement itself is wrong or missing; a new
requirement is raised, which may involve additional work and payment.
o Off-Specification Fault: The fault is accepted because the application
can still operate with the error.
4. Test Failure Notification:
o Failures are also communicated informally to the development team to
shorten the turnaround time for fixes.
1. Debugging:
o Identify error statements by analyzing failure symptoms.
o Various debugging strategies are employed.
2. Error Correction:
o Correct the code after locating the error through debugging.
3. Defect Retesting:
o Retesting corrected code to check if the defect has been successfully
addressed (resolution testing).
4. Regression Testing:
o Ensures unmodified functionalities still work correctly after bug fixes.
o Runs alongside resolution testing to check for new errors introduced by
changes.
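The split between resolution and regression testing can be sketched as follows. The `divide` function, the fix being verified, and the test-case identifiers are all illustrative assumptions: the resolution test reruns the case that exposed the defect, while the regression suite reruns unmodified behaviour.

```python
# Unit under test, after a bug fix: division by zero is now rejected
# explicitly instead of crashing with an unhandled error.
def divide(a: float, b: float) -> float:
    if b == 0:                       # the fix under verification
        raise ZeroDivisionError("b must be non-zero")
    return a / b

# Resolution test: the case that originally exposed the defect.
resolution_tests = [("TC-07", (1, 0), ZeroDivisionError)]

# Regression suite: previously passing cases that must still pass.
regression_suite = [("TC-01", (6, 3), 2.0), ("TC-02", (-4, 2), -2.0)]

for tc_id, args, expected_exc in resolution_tests:
    try:
        divide(*args)
        raise AssertionError(f"{tc_id}: expected {expected_exc.__name__}")
    except expected_exc:
        pass                          # defect successfully addressed

for tc_id, args, expected in regression_suite:
    assert divide(*args) == expected, tc_id   # no new errors introduced
```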
Test Closure
1. Test Completion:
o Archiving documents related to lessons learned, test results, and logs for
future reference.
2. Time-Consuming Activity:
o Debugging is noted as usually the most time-consuming activity in the
testing process.
The text describes who performs testing in organizations, the importance and benefits of
test automation, and various types of automated testing tools.
It emphasizes that while test automation can significantly reduce human effort, improve
thoroughness, and lower costs, different tools have distinct advantages and challenges.
Test Automation
o Reduces monotony, boredom, and errors in running the same test cases
repeatedly.
o Substantial cost and time reduction in testing and maintenance phases.
1. Historical Data:
o Use historical data to estimate errors per 1000 lines of code from past
projects.
o Apply this ratio to new system development to estimate potential errors
based on the code size.
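A worked example of the historical-data approach, with illustrative figures (the defect density and system size below are assumptions, not values from the text):

```python
# Historical data: past projects averaged 12 latent errors per
# thousand lines of code (KLOC).
errors_per_kloc = 12

# Apply that ratio to a new 45,000-line system.
new_system_loc = 45_000
estimated_latent_errors = errors_per_kloc * new_system_loc / 1000
print(estimated_latent_errors)  # 540.0
```

The estimate is only as good as the similarity between the past projects and the new one, which is why the text pairs it with other methods such as independent reviews.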
Independent Reviews
Using these methods helps in obtaining a better estimation of latent errors, providing a
clearer understanding of the remaining testing effort needed to ensure software quality.
Software reliability
o A bug may affect different users differently based on how they use the
software.
5. Reliability Improvement Over Time:
o Reliability usually improves during testing and operational phases as
defects are identified and fixed.
o This improvement can be modeled mathematically using Reliability
Growth Models (RGM).
6. Reliability Growth Models (RGM):
o RGMs describe how reliability improves as failures are reported and bugs
are corrected.
o Various RGMs exist, including the Jelinski–Moranda model, the
Littlewood–Verrall model, and the Goel–Okumoto model.
o RGMs help predict when a certain reliability level will be achieved, guiding
decisions on when testing can be stopped.
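As a concrete sketch, the Goel–Okumoto model assumes the expected number of failures observed by time t is mu(t) = a(1 - e^(-bt)), where a is the total expected number of defects and b the detection rate. The parameter values below are illustrative assumptions, not figures from the text; the point is how such a model supports the stop-testing decision.

```python
import math

# Illustrative Goel-Okumoto parameters (assumed, not from the text).
a = 120.0   # expected total number of latent defects
b = 0.05    # defect detection rate per week

# Expected cumulative failures observed by time t.
def expected_failures(t: float) -> float:
    return a * (1 - math.exp(-b * t))

# When can testing stop? Solving mu(t) = fraction * a for t gives
# t = -ln(1 - fraction) / b.
def weeks_to_find(fraction: float) -> float:
    return -math.log(1 - fraction) / b

print(round(weeks_to_find(0.95), 1))  # ~59.9 weeks to uncover 95% of defects
```

Fitting a and b to the actual failure log (rather than assuming them, as here) is what lets the model predict when a target reliability level will be reached.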
Quality plans
Quality plans detail how standard quality procedures and standards from an
organization's quality manual will be applied to a specific project.
They ensure all quality-related activities and requirements are addressed.
Client Requirements:
For software developed for external clients, the client's quality assurance staff may
require a quality plan to ensure the quality of the delivered products.
This requirement ensures that the client’s quality standards are met.
A quality plan acts as a checklist to confirm that all quality issues have been
addressed during the planning process.
Most of the content in a quality plan references other documents that detail
specific quality procedures and standards.