SQT Mid

The document outlines major software challenges, including time, cost, and quality, and introduces frameworks like the Capability Maturity Model (CMM), Testing Maturity Model (TMM), and Test Process Improvement (TPI) to enhance software development and testing processes. It emphasizes the importance of quality assurance (QA) in preventing, reducing, and containing defects, while also discussing the feasibility of complete testing and the necessity of measuring software quality. Additionally, it details Software Quality Engineering (SQE) activities, which encompass planning, executing, and assessing QA processes to ensure high-quality software delivery.


# Major Software Challenges

1. Time
o Tight deadlines in software projects can result in rushed development and testing. This may lead to
incomplete or poorly implemented features, impacting the overall functionality and user experience.
Ensuring timely delivery while maintaining quality is a critical challenge.
2. Cost
o Software development is resource-intensive, requiring investment in tools, skilled professionals, and
infrastructure. Budget constraints can limit these resources, affecting the scope, quality, or timeline of
the project. Unexpected changes or issues can further escalate costs.
3. Quality
o Delivering high-quality software requires meticulous design, development, and testing, which takes
both time and money. Balancing quality with constraints often results in bugs, poor performance, or
usability issues, leading to dissatisfaction and increased maintenance efforts.

# What is CMM?
The Capability Maturity Model (CMM) is a framework that evaluates and improves the software development
processes of an organization. It helps organizations assess their current process maturity level, enabling them to
enhance their ability to deliver high-quality software efficiently and cost-effectively.
CMM provides a structured pathway for organizations to gradually improve their processes, ensuring consistency,
quality, and continuous optimization in software development.

The Five CMM Maturity Levels:


1. Level 1: Initial
o Description: Processes are disorganized, unpredictable, and chaotic. Success depends heavily on
individual efforts rather than established methods.
o Characteristics:
 Lack of standard practices.
 Outcomes are inconsistent and difficult to replicate.
 High risk of failure due to ad hoc processes.
2. Level 2: Repeatable
o Description: Basic project management practices are in place. The organization starts to document
processes, making it possible to repeat successful outcomes.
o Characteristics:
 Processes are established and repeatable.
 Focus on project-level management practices like tracking schedules and budgets.
 Dependence on skilled individuals decreases, but consistency is limited to specific projects.
3. Level 3: Defined
o Description: The organization establishes its own standardized and documented software
development lifecycle (SDLC) or process framework.
o Characteristics:
 Defined processes tailored to the organization’s needs.
 Strong emphasis on documentation, standardization, and integration.
 Processes are consistent across the organization.
4. Level 4: Managed
o Description: The organization uses quantitative data to monitor and control processes. Decisions are
made based on data analysis, ensuring better predictability and control over outcomes.
o Characteristics:
 Focus on metrics and data collection.
 Processes are measured, controlled, and predictable.
 Continuous monitoring improves process performance.
5. Level 5: Optimizing
o Description: The organization continuously improves its processes using feedback, innovations, and
lessons learned from monitoring.
o Characteristics:
 Focus on process optimization and innovation.
 Regularly integrates improvements into existing processes.
 Actively adopts new practices to enhance efficiency and quality.
Benefits of CMM:
 Provides a clear roadmap for process improvement.
 Enhances software quality and reduces defects.
 Promotes consistency and predictability in project outcomes.
 Reduces risks and inefficiencies in software development processes.
By following the CMM framework, organizations can evolve their processes from chaotic and unpredictable to
systematic, efficient, and continuously improving.

# What is TMM?
The Testing Maturity Model (TMM) is a framework used to assess and improve the testing processes in an
organization. Just like the Capability Maturity Model (CMM) focuses on software development, TMM focuses on
how testing is planned, executed, and improved.
TMM helps organizations evolve their testing processes to ensure better software quality, reduce bugs, and make
testing more efficient and effective.

The Five TMM Maturity Levels:


1. Level 1: Initial
o Description: There is no structured approach to testing. Testing happens randomly, often after the
code is written.
o Characteristics:
 Testing is not seen as important.
 Test cases are created without much planning.
 No tracking of testing progress.
2. Level 2: Phase Definition
o Description: The organization starts defining testing goals and creating basic plans.
o Characteristics:
 Testing goals are set, such as identifying objectives and analyzing risks.
 Basic testing strategies and methods are introduced.
 Resources are allocated for testing activities.
3. Level 3: Integration
o Description: Testing becomes a more formal process integrated into the software development
lifecycle.
o Characteristics:
 A dedicated testing team is created.
 Developers and testers collaborate, considering user needs.
 Testing is included in every phase of development, not just after coding.
 Training programs for testing are introduced.
4. Level 4: Management and Measurement
o Description: Testing is managed systematically, and data is collected to monitor and measure its
effectiveness.
o Characteristics:
 Testing progress and quality are tracked.
 A testing management program is established to organize testing tasks.
 Reviews and evaluations are conducted organization-wide.
5. Level 5: Optimization/Defect Prevention and Quality Control
o Description: The focus shifts to preventing defects and continuously improving the testing process.
o Characteristics:
 Data from testing is used to prevent future defects.
 Advanced quality control techniques are applied.
 The testing process is continuously improved for better efficiency and outcomes.

Summary of TMM:
 TMM Level 1: No formal testing, testing starts after coding.
 TMM Level 2: Basic testing plans and strategies are introduced.
 TMM Level 3: Testing is integrated into the development process, and a testing team is established.
 TMM Level 4: Testing is systematically managed, and its effectiveness is measured.
 TMM Level 5: Focus on defect prevention, quality control, and continuous improvement.
TMM helps organizations move from unorganized, reactive testing to structured, proactive, and optimized testing
practices.

# What is TPI?
Test Process Improvement (TPI) is a method used to enhance how testing activities are planned and carried out
within an organization. It focuses on improving all steps related to finding and fixing software defects to ensure better
software quality, faster delivery, and reduced costs.
A test process includes tasks like setting testing goals, designing test cases, hiring test engineers, running tests, and
reporting bugs. TPI aims to make these activities more efficient, organized, and effective.

How to Improve a Test Process?


Improving a test process involves four simple steps:
1. Determine an Area for Improvement
o What to do: Identify which part of the testing process needs to be better. For example:
 Are the test cases poorly written?
 Are there delays in testing?
 Are too many bugs slipping through to production?
o Why it matters: Focusing on specific problems prevents wasting time and resources.
2. Evaluate the Current State of the Test Process
o What to do: Assess how well the current testing process is working.
 Measure the quality of tests (are they finding the defects?).
 Analyze the time taken for testing (is it too slow?).
 Check the cost of testing (is it too expensive?).
o Why it matters: Understanding where you stand now helps you decide where to improve.
3. Identify the Desired State and Plan to Achieve It
o What to do: Decide what you want the improved process to look like. For example:
 Test cases should catch more bugs.
 Testing should be faster without compromising quality.
 Costs should be reduced without losing efficiency.
 Plan how to get there, like automating tests or training testers.
o Why it matters: Clear goals make it easier to implement changes effectively.
4. Implement the Necessary Changes
o What to do: Make the planned changes in the testing process.
 Introduce new tools for automation.
 Train the team on better testing practices.
 Update the test plans and strategies.
o Why it matters: Action brings improvement. After implementing, track the results to ensure the
changes are working.

Why is TPI Important?


 Improves the efficiency of testing.
 Reduces costs by removing unnecessary steps or errors.
 Ensures better software quality with fewer defects.
 Makes the testing process faster and more organized.
By following these steps, organizations can continuously refine their test processes, leading to reliable and high-
quality software development.

# What is Quality Assurance (QA)?


Quality Assurance (QA) ensures that software meets the desired standards of quality. It focuses on preventing
defects, reducing existing defects, and managing defects to maintain reliability and safety. QA is about ensuring that
the software is correct, reliable, and safe to use.

Defect Prevention
 What it is: Stopping errors from being introduced into the software.
 How it works:
1. Error Source Removal:
 Finding and fixing the root causes of mistakes (like unclear requirements or human
misunderstandings).
 Example: Providing training to developers to avoid common mistakes.
2. Error Blocking:
 Preventing errors through rules or automated checks.
 Example:
 Preventing invalid input (e.g., blocking a 0 in a divisor field).
 Using tools for design validation to ensure correct implementation.
o Why it matters: The earlier you stop errors, the fewer bugs you'll face later.
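
A minimal sketch of error blocking in code (the divide helper is a hypothetical example, not from the source): the invalid input is rejected at the point of entry instead of being allowed to cause a failure later.

```python
def divide(numerator: float, divisor: float) -> float:
    """Hypothetical helper illustrating error blocking."""
    if divisor == 0:
        # Block the error at its source: refuse the invalid input instead of
        # letting a ZeroDivisionError surface somewhere deeper in the system.
        raise ValueError("Divisor must be non-zero")
    return numerator / divisor
```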

Defect Reduction
 What it is: Finding and fixing errors after they’ve been introduced.
 Techniques to reduce defects:
1. Inspection:
 Reviewing software code, designs, or test plans without running the program.
 Types:
 Informal Reviews: Simple discussions or quick checks (e.g., casual peer reviews).
 Formal Inspections: Structured reviews involving multiple people.
2. Testing:
 Running the software to check its behavior.
 If it fails, the issue is analyzed, located, and fixed.
3. Other Techniques:
 Risk Analysis: Identifying potential problem areas early.
 Boundary Value Testing: Checking edge cases in code (e.g., extreme inputs).
 Simulation and Prototyping: Testing how the system works in a safe, experimental
environment (e.g., autopilot simulations).
o Why it matters: Since errors can't always be prevented, finding and fixing them early is crucial.
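
As one concrete illustration of boundary value testing (a sketch with a hypothetical age validator; pytest is assumed as the test framework), the cases cluster around the edges of the valid range, where defects are most likely:

```python
import pytest

def accept_age(age: int) -> bool:
    """Hypothetical validator under test: accepts ages 18-60 inclusive."""
    return 18 <= age <= 60

# Boundary value testing: exercise values at and immediately around each edge.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_accept_age_boundaries(age, expected):
    assert accept_age(age) == expected
```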

Defect Containment
 What it is: Limiting the impact of defects that remain in the system.
 How it works:
1. Software Fault-Tolerance:
 Ensuring the system continues to work despite errors.
 Techniques:
 Recovery (Rollback and Redo): Fixing problems by undoing or redoing actions.
 N-Version Programming (NVP): Running multiple versions of software to handle
faults.
2. Safety Assurance and Failure Containment:
 Preventing severe accidents caused by system failures.
 Example:
 An autopilot system ensures the plane stays safe even if a fault occurs.
 Safety Measures: Identifying hazards, controlling risks, and minimizing damage.
o Why it matters: Defects can’t always be eliminated, so limiting their effects is essential for safety
and reliability.
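
A toy sketch of N-Version Programming (the three absolute-value versions are hypothetical): independently developed versions run on the same input, and a voter masks a faulty result as long as the majority still agree.

```python
from collections import Counter

def nvp_majority(args, versions):
    """Run every version on the same input and return the majority result."""
    results = [version(*args) for version in versions]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(versions) // 2:
        # No majority: fall back to another containment technique (e.g., rollback).
        raise RuntimeError("No majority among versions")
    return value

# Three independently written versions of |x|; the third one is faulty.
v1 = lambda x: abs(x)
v2 = lambda x: x if x >= 0 else -x
v3 = lambda x: x  # defect: forgets to negate negative inputs
print(nvp_majority((-5,), [v1, v2, v3]))  # prints 5 - the faulty version is out-voted
```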

Summary:
1. Defect Prevention: Stop errors before they happen.
2. Defect Reduction: Find and fix errors that slipped through.
3. Defect Containment: Minimize the damage caused by remaining errors.
QA focuses on making the software as error-free, safe, and reliable as possible.

# What is Complete Testing?


Complete testing is the process of thoroughly testing a system to ensure there are no undiscovered bugs or faults at
the end of the testing phase. It aims to detect all possible issues in the system, reduce risks, and provide full
confidence in the software's quality.

Why is Complete Testing Not Feasible?


1. Large Input Domain:
o Most software has an enormous range of possible inputs (both valid and invalid).
o Testing every possible input would take too much time and effort.
o Example: A calculator app handling billions of number combinations.
2. Complex Design:
o Some systems have highly intricate designs that make testing all scenarios practically impossible.
o Example: Testing every possible interaction in a complex autopilot system.
3. Resource Limitations:
o Testing requires time, money, and manpower. Completely testing large systems can be prohibitively
expensive or time-consuming.
4. Real-World Environment:
o It’s not always possible to replicate all the conditions in which the software will run.
o Example: Simulating every possible weather condition for an autopilot system.
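
A back-of-the-envelope calculation (the throughput figure is an assumption, not from the source) shows why even a trivial operation defeats exhaustive testing:

```python
# Adding two 32-bit integers already has 2**32 * 2**32 = 2**64 input combinations.
combinations = 2 ** 64
tests_per_second = 1_000_000_000  # assume one billion test executions per second
years = combinations / tests_per_second / (60 * 60 * 24 * 365)
print(f"{combinations} combinations ~= {years:,.0f} years of non-stop testing")
# -> roughly 585 years for a single two-argument integer operation
```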

Summary:
Complete testing is about finding every single bug in a system, but it is nearly impossible due to:
 The vast number of input combinations.
 The complexity of system design.
 Limited resources (time, money, manpower).
 Difficulty in recreating real-world conditions.
Instead of complete testing, testers aim for effective testing, focusing on the most critical areas of the system.

# Why Measure Software Quality?


Measuring software quality is essential because it provides a quantitative understanding of how well the software
meets desired standards. Here's why it's important:

1. Establish Baselines:
 Measurement helps set quality benchmarks that the software must achieve.
 Example: If a website should allow users to extract information within 20 minutes, this metric becomes a
baseline to evaluate usability.
 Why important? Baselines give teams clear goals for performance and usability.
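
A minimal sketch of turning a baseline into an automated check (the 0.5 defects/KLOC target and the counts are illustrative assumptions): once the metric is defined, every release can be compared against it.

```python
# Defect density = defects found per thousand lines of code (KLOC).
def defect_density(defects_found: int, lines_of_code: int) -> float:
    return defects_found / (lines_of_code / 1000)

BASELINE = 0.5  # hypothetical target: at most 0.5 defects per KLOC
density = defect_density(defects_found=12, lines_of_code=40_000)
status = "meets" if density <= BASELINE else "misses"
print(f"{density:.2f} defects/KLOC -> {status} the baseline")
```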

2. Enable Quality Improvement Based on Cost:


 Quality improvement efforts require investment in tools, training, or processes.
 Measuring quality helps organizations assess if these efforts are cost-effective.
 Why important? It ensures that resources are spent wisely while improving quality.

3. Understand Current Quality for Future Planning:


 Measurements provide insights into the current state of quality, enabling better planning for upgrades or
new versions.
 Why important? Knowing where the software stands helps in prioritizing areas that need improvement.

Other Key Benefits:


1. Track Progress:
o Helps teams monitor whether quality is improving, staying consistent, or declining.
2. Identify Defects Early:
o Metrics highlight areas with poor quality, enabling early detection of potential issues.
3. Ensure Customer Satisfaction:
o Higher quality means fewer bugs, better performance, and improved usability, leading to happier
users.
4. Support Decision-Making:
o Quantitative data helps stakeholders make informed decisions about product releases, resource
allocation, or quality goals.

Summary:
Measuring software quality is crucial to:
 Establish baselines for success.
 Ensure cost-effective improvements.
 Plan future enhancements based on current quality levels.
 Ultimately, it helps deliver reliable, efficient, and user-friendly software.

# Software Quality Engineering (SQE) Activities


Software Quality Engineering (SQE) is the process of planning, preparing, executing, and assessing software testing
and quality assurance activities. It is divided into Pre-QA, In-QA, and Post-QA stages, with specific tasks at each
phase to ensure software quality.

1. Pre-QA Activities: Quality Planning/Test Planning


This is the planning phase where key decisions about testing are made.
Key Steps:
1. Set Specific Quality Goals:
o Define high-level objectives to guide testing efforts.
o Example: Focus on efficiency, reliability, and usability.
2. Identify Quality Perspectives and Expectations:
o Understand what the target customers and users expect from the software.
3. Select Direct Quality Measures:
o Quantify selected quality attributes like efficiency and reliability (e.g., efficiency target: 95%).
4. Assess Quality Expectations vs. Cost:
o Analyze the cost of achieving different quality goals to balance quality and budget.
5. Form a QA Strategy:
o Identify appropriate QA activities to perform.
o Choose quality measurement models to assess performance and guide improvements.
6. Prepare Test Procedures:
o Test Cases (Micro-Level):
 Create detailed documents describing inputs, actions, and expected results.
 Allocate and sequence test cases from simple to complex.
o Test Plan (High-Level):
 Document objectives, scope, approach, resources, and schedule of testing activities.
o Test Suite (Macro-Level):
 Combine test cases into a suite for systematic execution.
 Include regression tests for earlier product versions.
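
To make the micro/macro distinction concrete, here is a small sketch using Python's unittest (the login function is a hypothetical stand-in for the system under test): each test case fixes inputs, actions, and expected results, and the suite groups cases for systematic execution.

```python
import unittest

def login(user: str, password: str) -> bool:
    """Hypothetical stand-in for the system under test."""
    return (user, password) == ("alice", "correct-password")

class LoginTests(unittest.TestCase):
    """Micro-level: each test case documents input, action, and expected result."""

    def test_valid_credentials_log_in(self):
        self.assertTrue(login("alice", "correct-password"))

    def test_wrong_password_is_rejected(self):
        self.assertFalse(login("alice", "wrong-password"))

def build_suite() -> unittest.TestSuite:
    """Macro-level: combine test cases into a suite (regression cases would be added here)."""
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_credentials_log_in"))
    suite.addTest(LoginTests("test_wrong_password_is_rejected"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```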

2. In-QA Activities: Test Execution


This phase involves carrying out the planned QA activities and managing defects discovered during the process.
Key Steps:
1. Execute Planned QA Activities:
o Run the tests according to the test plan and sequence.
2. Handle Discovered Defects:
o Identify and document defects based on what, where, when, and severity.
3. Collect Failure Information:
o Record details of failures for analysis, such as timing, location, and impact.
4. Document Testing Activities:
o Maintain detailed records of testing to ensure future reproducibility and comparison.
5. Measure Execution:
o Use predefined templates to measure and analyze test execution performance.
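
One way to capture the what, where, when, and severity of a discovered defect is a simple structured record; this is an illustrative sketch, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectRecord:
    """A discovered defect: what failed, where, how severe, and when it was found."""
    summary: str                  # what
    component: str                # where
    severity: str                 # e.g., "critical", "major", "minor"
    detected_at: datetime = field(default_factory=datetime.now)  # when
    status: str = "open"

defects = [
    DefectRecord("Login accepts empty password", "auth-service", "critical"),
    DefectRecord("Tooltip text is truncated", "web-ui", "minor"),
]
critical = [d for d in defects if d.severity == "critical"]
print(f"{len(defects)} defects logged, {len(critical)} critical")
```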

3. Post-QA Activities: Quality Measurement, Assessment & Improvement


This phase involves assessing the outcomes of the QA process and identifying areas for improvement.
Key Steps:
1. Quality Measurement and Assessment:
o Analyze the collected data to evaluate the success of the QA process.
2. Provide Feedback:
o Share findings with stakeholders and teams to guide future actions.
3. Identify Improvement Opportunities:
o Pinpoint potential areas for enhancement based on data analysis.
4. Parallel Measurement and Analysis:
o Perform some measurement and analysis activities during the QA process to allow real-time
adjustments.

Summary of SQE Activities:


| Phase | Activity | Purpose |
| --- | --- | --- |
| Pre-QA | Quality planning, test planning, preparing test cases, and test suites | Define objectives, expectations, and strategies for testing. |
| In-QA | Test execution, defect handling, and documentation | Execute and monitor the QA process. |
| Post-QA | Quality measurement, feedback, and improvement | Assess results and identify areas for continuous improvement. |
By following these stages, SQE ensures that software meets user expectations, is cost-efficient, and is delivered with
high reliability and usability.

# QA Team Structure
The structure of a Quality Assurance (QA) team determines how testing responsibilities are organized and managed.
Different organizations use different structures based on their size, project needs, and workflow. The main QA team
structures include Vertical, Horizontal, and Mixed models.

1. Vertical Model
 Structure:
o A QA team is aligned with a specific product or project.
o Dedicated testers focus on one or more testing tasks for that product.
 Key Features:
o Testers work closely with the development and product teams.
o The team has a deep understanding of the product being tested.
 Advantages:
o Improved product knowledge and specialization.
o Faster communication and collaboration with the product team.
 Challenges:
o Resources are tied to specific projects, which can lead to inefficiencies when workloads vary across
products.

2. Horizontal Model
 Structure:
o QA teams are specialized in a particular type of testing (e.g., performance testing, security testing).
o They work on multiple projects across the organization.
 Key Features:
o Testing expertise is centralized.
o Teams provide consistent testing services for different products.
 Advantages:
o Efficient use of specialized skills across projects.
o Encourages standardization of testing practices.
 Challenges:
o Less familiarity with individual products.
o Requires strong coordination to align with project-specific goals.

3. Mixed Model
 Structure:
o Combines elements of both vertical and horizontal models.
o Testers may specialize in a particular type of testing while being assigned to specific products.
 Key Features:
o Large organizations with diverse products and testing needs often use this structure.
o Teams are flexible and can adapt to project-specific requirements.
 Advantages:
o Balances specialization and product familiarity.
o Provides scalability for large organizations with multiple products.
 Challenges:
o Can be complex to manage due to overlapping responsibilities.

Choosing the Right Structure


The choice of QA team structure depends on:
1. Organization Size:
o Small organizations may prefer a vertical model for simplicity.
o Large organizations benefit from the mixed model.
2. Product Complexity:
o Specialized testing (horizontal model) is critical for complex products requiring niche skills.
3. Resource Availability:
o Efficient resource utilization may lead to a preference for the horizontal model.
4. Collaboration Needs:
o Close collaboration with product teams aligns with the vertical model.

Summary of QA Team Structures


| Structure | Focus | Best for | Challenges |
| --- | --- | --- | --- |
| Vertical | Product-specific testing | Dedicated product teams | Inefficient resource utilization. |
| Horizontal | Specialized testing for multiple products | Centralized expertise | Less product familiarity, coordination needs. |
| Mixed | Combines vertical and horizontal models | Large organizations with diverse testing needs | Complexity in management. |
By understanding and implementing the right QA team structure, organizations can optimize their testing process and
ensure high-quality software delivery.

# Automated Testing
Automated testing is the process of using software tools to execute pre-defined test cases automatically, compare
actual outcomes with expected outcomes, and identify defects in the application. It is particularly useful for repetitive
tasks, regression testing, and performance testing, saving time and reducing human error.
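
A minimal sketch of an automated test case (pytest-style; the format_price function is a hypothetical example under test): the tool runs the case on every build and flags any mismatch between actual and expected output.

```python
def format_price(amount: float) -> str:
    """Hypothetical function under test: formats a price for display."""
    return f"${amount:,.2f}"

def test_format_price():
    # Compare actual outcomes against expected outcomes automatically.
    assert format_price(1234.5) == "$1,234.50"
    assert format_price(0) == "$0.00"
```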

# Why Automated Testing CANNOT Replace Manual Testing


1. Usability Testing
o Automation cannot assess how user-friendly or intuitive an application is. Usability testing requires
human insight and judgment.
2. Logical Errors
o Logical errors or conceptual misunderstandings in the application often require critical thinking,
which is beyond the scope of automation tools.
3. Documentation Review
o Specification documents (e.g., Software Requirements Specification) and design reviews cannot be
automated. These require human expertise.
4. Ad hoc or Exploratory Testing
o Exploratory testing relies on testers' creativity, intuition, and domain knowledge, which cannot be
simulated by machines.
5. One-time or Urgent Testing
o For urgent "ASAP" tests or scenarios where automation setup is impractical, manual testing is quicker
and more effective.

# Which Tests/Test Cases to Automate


Automated testing is best suited for:
1. Repetitive Tests
o Test cases that are executed frequently, such as regression tests for every build or release.
2. Data-driven Tests
o Tests that require multiple data inputs for the same set of operations (e.g., validating forms with
different data); a short sketch follows this list.
3. Internal Information Tests
o Tests that interact with application internals like GUI attributes or database checks.
4. Stress or Load Testing
o Tests designed to simulate heavy usage or load conditions, which can be automated to measure
performance under pressure.
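
As referenced in item 2 above, a data-driven test runs the same operations over many data rows; this sketch uses pytest parameterization with a hypothetical form validator.

```python
import pytest

def validate_username(name: str) -> bool:
    """Hypothetical form validator: 3-12 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 12

@pytest.mark.parametrize("name, expected", [
    ("bob", True),
    ("ab", False),        # too short
    ("a" * 13, False),    # too long
    ("user_1", False),    # underscore is not alphanumeric
    ("User123", True),
])
def test_validate_username(name, expected):
    assert validate_username(name) == expected
```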

# Which Tests/Test Cases Should NOT Be Automated


Certain test scenarios are unsuitable for automation:
1. Usability Testing
o Evaluating how easy or intuitive an application is requires human judgment.
2. Logical Errors
o Identifying logical flaws in workflows or calculations requires manual intervention.
3. Documentation Testing
o Testing design documents, specifications, or business requirements cannot be automated.
4. One-time or Ad hoc Testing
o Scenarios that are run only once or rely on spontaneous exploration are better suited for manual
testing.
5. Tests Without Predictable Results
o If expected results are unclear or subjective, automation is not effective.

Benefits of Automation
 Reduces repetitive work for testers.
 Improves accuracy by eliminating human error.
 Saves time for large or complex test suites.
 Allows parallel execution of tests, speeding up processes.
However, it is essential to strike a balance between automated and manual testing to ensure comprehensive software
quality assurance.
