Software Testing and Quality Assurance (STQA)

The document outlines the fundamentals of Software Quality Assurance (SQA) and software testing, detailing definitions, objectives, and methodologies. It distinguishes between Quality Assurance and Quality Control, discusses SQA activities and challenges, and covers various testing principles and techniques. Additionally, it emphasizes the importance of test planning, roles, and monitoring in ensuring software quality and reliability.

1. Software Quality Assurance Fundamentals


1.1 Definition of Quality, Quality Assurance (QA), Quality Control (QC),
Difference between QA and QC, SQA Challenges

 Quality: In software, quality refers to how well a product meets customer needs and
expectations. It includes correctness, reliability, usability, efficiency, and
maintainability.
 Quality Assurance (QA): QA is a process-oriented approach. It focuses on
improving the development and testing processes to prevent defects. It involves
planning, procedures, and standards to ensure software quality.
 Quality Control (QC): QC is product-oriented. It focuses on identifying defects in
the actual software product through testing and reviews.

Difference between QA and QC:

Feature   | QA                                    | QC
Focus     | Process                               | Product
Goal      | Prevent defects                       | Identify defects
Methods   | Process definition, audits, standards | Testing, inspections
When      | During development                    | After development

SQA Challenges:

 Dealing with complex and changing requirements.


 Time and budget constraints.
 Maintaining consistency across teams.
 Ensuring compliance with standards.
 Testing in dynamic environments (e.g., mobile, web).

1.2 Software Quality Assurance, SQA Planning & Standards (ISO 9000)

 Software Quality Assurance (SQA): It's a set of activities ensuring software
processes and products conform to requirements, standards, and procedures.
 SQA Planning involves:
o Defining quality goals
o Selecting relevant standards
o Scheduling SQA tasks
o Assigning responsibilities
 Standards (ISO 9000):
o A family of quality management standards maintained by ISO.
o ISO 9001 focuses on meeting customer requirements and continual
improvement.
o Promotes a process approach and risk-based thinking in quality management.
1.3 SQA Activities

Key activities in SQA include:

 Requirements reviews – Ensuring clear, testable, complete requirements.


 Design reviews – Ensuring design meets requirements.
 Code reviews – Peer reviews to catch early defects.
 Testing – Unit, Integration, System, Acceptance testing.
 Audits – Checking adherence to standards and processes.
 Process monitoring – Tracking software lifecycle metrics.
 Defect tracking and analysis – Identifying and fixing root causes.

1.4 Building Blocks of SQA

Main components that make up an SQA system:

1. Software Engineering Methods – Structured development approaches like Agile,
Waterfall, etc.
2. Formal Technical Reviews (FTR) – Structured peer reviews.
3. Testing Strategies – Unit, integration, system, and acceptance testing.
4. Documentation and Standards – Ensures consistency and compliance.
5. Measurement and Metrics – To evaluate quality performance.
6. SQA Tools – Automation tools for testing, version control, CI/CD, etc.

1.5 Software Quality Factors

These are characteristics that define software quality (based on McCall’s model and ISO
standards):

1. Correctness – Does it do what it’s supposed to?


2. Reliability – Does it work under expected conditions?
3. Efficiency – Optimal use of resources.
4. Integrity – Security and access control.
5. Usability – Easy to learn and use.
6. Maintainability – Easy to fix or update.
7. Testability – Easy to test.
8. Portability – Can it run on different platforms?
9. Reusability – Can it be reused in other systems?
10. Interoperability – Can it work with other systems?

1.6 Software Reliability & Reliability Measurement


 Software Reliability: Probability that software will function correctly over a
specified time under given conditions. It’s a key quality factor.

Reliability Measurement Factors:

1. ROCOF (Rate of Occurrence of Failure):


o Frequency of failures over time.
o Lower ROCOF = higher reliability.
2. MTTF (Mean Time To Failure):
o Average time a system operates before a failure occurs.
o Higher MTTF = better reliability.
3. MTTR (Mean Time To Repair):
o Average time to fix a failure and restore system.
o Lower MTTR = better maintainability.
4. MTBF (Mean Time Between Failures):
o MTBF = MTTF + MTTR
o Measures time between one failure and the next.
o Higher MTBF = better system stability.
5. POFOD (Probability of Failure on Demand):
o Likelihood of software failing when required to operate.
o Used for safety-critical systems.
6. Availability:
o Availability = MTTF / (MTTF + MTTR)
o Indicates proportion of time the system is operational.
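
Worked example (assumed, illustrative numbers): if a system runs on average 98 hours
before failing (MTTF = 98 h) and takes 2 hours to repair (MTTR = 2 h), then
MTBF = MTTF + MTTR = 98 + 2 = 100 hours, and
Availability = 98 / (98 + 2) = 0.98, i.e., the system is operational 98% of the time.
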
2. Software Testing Fundamentals

2.1 Definition & Objectives of Testing

 Definition: Software testing is the process of evaluating a system or its components to
determine whether it meets the specified requirements and to identify any defects.
 Objectives:
o Detect errors and bugs
o Validate functionality against requirements
o Ensure software quality and reliability
o Prevent future defects
o Increase user satisfaction

2.2 Role of Testing and its Effect on Quality

 Testing is a quality gate to ensure that the software behaves as expected before it
reaches users.
 It reduces risks, ensures compliance with requirements, and improves product
confidence.
 Early and continuous testing improves overall product quality and reduces cost and
effort later in the development lifecycle.

2.3 Causes of Software Failure: Definitions

 Error: A human mistake during coding or design.


 Bug/Defect: A flaw in the code caused by an error.
 Fault: The actual cause in the system that may lead to a failure.
 Failure: When the system behaves incorrectly due to a defect.

Example: A developer writes "+" instead of "-" → that's an error. The resulting flaw in
the code → bug/fault. When the user sees the wrong result → failure.
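
A minimal Java sketch of this chain (the Billing class is hypothetical):

class Billing {
    // Error: the developer typed "+" where "-" was intended.
    int discountedPrice(int price, int discount) {
        return price + discount;   // bug/defect: should be price - discount
    }
}
// Failure: a customer buying a 100-unit item with a 10-unit discount
// is charged 110 instead of 90 — the wrong behavior the user actually observes.
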

2.4 Economics of Testing

 Testing early saves cost: Fixing defects in early stages (design) is cheaper than fixing
them after deployment.
 The cost of not testing (bugs in production) is often higher than the cost of testing.
 Balance is needed between test effort and business value.
2.5 Seven Testing Principles

1. Testing shows the presence of defects, not their absence.


2. Exhaustive testing is impossible (you can’t test everything).
3. Early testing saves time and money.
4. Defects cluster together (a few modules contain most bugs).
5. Beware of the pesticide paradox (repeating same tests won’t find new bugs).
6. Testing is context dependent (e.g., safety-critical vs e-commerce).
7. Absence-of-errors is a fallacy (even bug-free software may be unusable if it doesn’t
meet needs).

2.6 Software Testing Life Cycle (STLC)

STLC defines a sequence of activities during testing:

1. Requirement Analysis – Understand what needs to be tested.


2. Test Planning – Strategy, resources, schedule.
3. Test Case Development – Write test cases/scripts.
4. Test Environment Setup – Prepare the infrastructure.
5. Test Execution – Run the tests and record results.
6. Test Closure – Report results, analyze metrics, and wrap up.

2.7 Validation & Verification Concepts - V-Model and W-Model

 Validation: "Are we building the right product?"


 Verification: "Are we building the product right?"

V-Model:

 Extension of Waterfall with testing associated at each stage.


 Verification on the left (requirements/design), Validation on the right (testing stages).

W-Model:

 Emphasizes that testing starts in parallel with development.


 Each development stage has its own test activities.
 Better suited for modern development cycles.

2.8 Agile Testing - Test Driven Development (TDD)

 Testing is integrated throughout the Agile process.


 TDD: Write tests first, then write code to pass the tests.
 Encourages small, rapid cycles, continuous feedback, and code quality.
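
A minimal TDD sketch using Java and JUnit 5 (the Calculator class is hypothetical):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (Red): write the test first; it fails because Calculator.add() doesn't exist yet.
class CalculatorTest {
    @Test
    void addReturnsSum() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// Step 2 (Green): write just enough code to make the test pass.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Step 3 (Refactor): improve the code while keeping the test green.
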

2.9 Levels of Testing


2.9.1 Unit Testing

 Testing individual components (functions/methods).


 Done by developers using frameworks (e.g., JUnit).
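
A small unit test sketch with JUnit 5 (the isLeapYear helper is hypothetical):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class DateUtilsTest {
    // The unit under test: one method, exercised in isolation from the rest of the system.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    @Test
    void recognizesLeapYears() {
        assertTrue(isLeapYear(2024));   // divisible by 4
        assertTrue(isLeapYear(2000));   // divisible by 400
        assertFalse(isLeapYear(1900));  // divisible by 100 but not by 400
        assertFalse(isLeapYear(2023));  // not divisible by 4
    }
}
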

2.9.2 Integration Testing

 Testing interactions between modules.


 Ensures combined units work together (APIs, database, etc.).

2.9.3 System Testing

 Testing the entire application as a whole.


 Checks end-to-end behavior against requirements.

2.9.4 User Acceptance Testing (UAT)

 Performed by end-users.
 Verifies the system meets business needs.
 Final sign-off before release.

2.10 Test Types


2.10.1 Functional Testing (Black-box)

 Focuses on system behavior, not internal code.


 Example: Login form, search feature.

2.10.2 Non-functional Testing

 Tests characteristics like performance, security, usability.

2.10.3 Structural Testing (White-box)

 Testing internal logic and code structure.


 Example: Checking all code paths, branches.

2.10.4 Testing Related to Changes


 Confirmation (Re-testing): Re-running a failed test to verify that a reported defect has been fixed.
 Regression Testing: Ensures new changes haven’t broken existing functionality.

2.11 Non-Functional Testing Types


2.11.1 Performance (Load & Stress)

 Load Testing: Check system behavior under expected load.


 Stress Testing: Check limits by applying extreme load.

2.11.2 Usability

 Checks if the application is easy and pleasant to use.

2.11.3 Maintainability

 How easily the software can be updated or fixed.

2.11.4 Portability

 Ability to run on different platforms/environments.

2.11.5 Security

 Protects system from unauthorized access and vulnerabilities.

2.11.6 Localization & Internationalization

 Localization: Adapting UI/content for a specific region/language.


 Internationalization: Designing software so it can be localized easily.

2.12 Concept of Smoke Testing and Sanity Testing

 Smoke Testing: Basic tests to check if the application is stable enough for further
testing. ("Build verification")
 Sanity Testing: Quick tests to verify specific functionalities after minor changes.
("Are new features working correctly?")
3. Static & Dynamic Testing
Static Testing:

 Done without executing the code.


 Focuses on documents, code structure, design, and standards.
 Helps find errors early in the development process.

Dynamic Testing:

 Involves executing the code and checking runtime behavior.


 Used to validate output correctness, performance, and usability.

3.1 Static Techniques – Review


Reviews help detect issues in documents and code without running the software.

3.1.1 Review Process (Informal & Formal)

 Informal Review: Casual discussions or walkthroughs among team members (e.g.,
over a desk).
 Formal Review: Structured process with defined roles, checklists, and
documentation.

3.1.2 Technical or Peer Review

 Developers or peers examine code/documents to find issues.


 Focuses on technical quality, logic, and design.

3.1.3 Walkthrough

 Author explains the work product to peers.


 Interactive session where team gives feedback.
 Less formal than inspections.

3.1.4 Inspection

 Formal review technique.


 Uses checklists, roles (Moderator, Author, Reviewers).
 Focuses on defect detection and process improvement.
3.2 Static Techniques – Static Analysis
Analyzing the code without execution, using tools to identify issues.

3.2.1 Static Analysis by Tools (Automated Static Analysis)

 Tools automatically analyze code for:
o Syntax errors
o Coding standards violations
o Security vulnerabilities
o Dead code, unused variables
 Example tools: SonarQube, ESLint, FindBugs.
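
A short Java snippet with the kinds of issues such tools report (illustrative; a real tool's exact findings and messages will differ):

class Report {
    int lineCount(String text, int extra) {
        int unused = 42;                 // unused local variable — a typical static-analysis finding
        String header = null;
        return header.length() + extra;  // guaranteed null dereference — flagged without running the code
    }
}
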

3.3 Test Design Techniques: Black Box Testing


Testing without looking at the internal code. Based on inputs and outputs.

3.3.1 Equivalence Partitioning

 Divides input data into valid and invalid classes.


 Test one value from each class.
 Reduces number of test cases.

Example: For age input (1–100), you can partition:

 Valid: 25
 Invalid: -5, 150
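
A sketch of these partitions as JUnit 5 tests (the isValidAge helper is hypothetical):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AgePartitionTest {
    // Hypothetical validator for the 1–100 age range.
    static boolean isValidAge(int age) {
        return age >= 1 && age <= 100;
    }

    @Test
    void oneValuePerPartition() {
        assertTrue(isValidAge(25));    // valid partition
        assertFalse(isValidAge(-5));   // invalid partition (below range)
        assertFalse(isValidAge(150));  // invalid partition (above range)
    }
}
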

3.3.2 Boundary Value Analysis

 Tests values at the edges of input ranges.


 Bugs often occur at boundaries.

Example: Age input (1–100)

 Test: 0, 1, 100, 101
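
The matching boundary tests as a JUnit 5 sketch (same hypothetical validator as in the 3.3.1 sketch):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AgeBoundaryTest {
    // Hypothetical validator for the 1–100 age range.
    static boolean isValidAge(int age) {
        return age >= 1 && age <= 100;
    }

    @Test
    void valuesAtTheBoundaries() {
        assertFalse(isValidAge(0));    // just below lower boundary
        assertTrue(isValidAge(1));     // lower boundary
        assertTrue(isValidAge(100));   // upper boundary
        assertFalse(isValidAge(101));  // just above upper boundary
    }
}
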

3.3.3 Decision Table Testing

 Used when output depends on combinations of inputs.


 Create a decision table to test all combinations.
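
A small illustrative decision table (hypothetical login rule with two conditions):

Condition             Rule 1   Rule 2   Rule 3   Rule 4
Valid username?       Y        Y        N        N
Valid password?       Y        N        Y        N
Action: Grant access  Y        N        N        N

Each rule (column) becomes one test case, so all four input combinations are covered.
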

3.3.4 State Transition Testing


 Used for systems with states and transitions (e.g., login/logout).
 Tests valid and invalid transitions between states.
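
Example (hypothetical login flow): LOGGED_OUT → (valid credentials) → LOGGED_IN →
(logout) → LOGGED_OUT. A valid-transition test logs in and then out; an invalid-transition
test tries to reach a logged-in page directly from LOGGED_OUT and expects to be rejected.
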

3.4 Test Design Techniques - White Box Testing


Focuses on the internal structure of the code.

3.4.1 Statement Coverage

 Ensures each line (statement) is executed at least once.

3.4.2 Branch & Decision Coverage

 Branch Coverage: Every possible if/else or true/false condition is tested.


 Decision Coverage: Ensures each decision in the code takes all possible outcomes.

3.4.3 Path Coverage

 Tests all possible execution paths in the program.


 More comprehensive than statement/branch.
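
A sketch showing how the three coverage levels differ on one small (hypothetical) Java method with two independent decisions:

class Classifier {
    int classify(int x) {
        int result = 0;
        if (x > 0)          // decision 1
            result = 1;
        if (x % 2 == 0)     // decision 2
            result += 2;
        return result;
    }
}
// Statement coverage: x = 2 alone executes every statement (both decisions true).
// Branch coverage:    x = 2 and x = -3 together take each decision both ways.
// Path coverage:      all four paths require x = 2 (T,T), x = 1 (T,F),
//                     x = -2 (F,T), and x = -3 (F,F).
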

3.4.4 McCabe’s Cyclomatic Complexity Metric

 Measures code complexity using control flow graphs.


 Formula:
V(G) = E - N + 2P
(E = edges, N = nodes, P = components)
 Helps determine minimum number of test cases needed for full path coverage.
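
Worked example (a single if/else, with an assumed, illustrative graph): the control flow
graph has 4 nodes (decision, then-branch, else-branch, join) and 4 edges, in one connected
component, so V(G) = E - N + 2P = 4 - 4 + 2(1) = 2 — two independent paths, hence at
least two test cases.
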

3.4.5 Data Flow Based Testing

 Focuses on variables and their definition/use.


 Ensures proper use of data (no use of undefined variables, etc.).

3.4.6 Mutation Testing

 Introduce small changes (mutations) in code to test if existing test cases catch them.
 Helps evaluate effectiveness of test cases.
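
A tiny Java illustration (the add method is hypothetical):

class Adder {
    int add(int a, int b) { return a + b; }
}
class AdderMutant {
    // Mutation: "+" flipped to "-" by the mutation tool.
    int add(int a, int b) { return a - b; }
}
// A strong test kills the mutant:
//   assertEquals(5, adder.add(2, 3));  → mutant returns -1, test fails, mutant is killed.
// A weak test lets it survive:
//   assertEquals(0, adder.add(0, 0));  → mutant also returns 0, mutant survives,
//   exposing a weakness in the test suite.
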

3.5 Test Design Techniques – Experience-Based Techniques

Relies on tester’s intuition, experience, and domain knowledge.
3.5.1 Error Guessing

 Testers guess possible areas of bugs based on experience.


 Example: Leaving a field blank, entering invalid data.

3.5.2 Exploratory Testing

 Simultaneous learning, test design, and execution.


 No predefined test cases.
 Good for uncovering unexpected issues.
4. Test Management

4.1 Test Organization – Roles & Skills

Tester

 Role: Executes test cases, logs defects, validates fixes.


 Skills:
o Knowledge of test techniques (black/white box)
o Tool usage (e.g., Selenium, JIRA)
o Attention to detail
o Analytical thinking

Test Lead

 Role: Plans and coordinates testing activities, leads a team of testers.


 Skills:
o Team management
o Test strategy planning
o Risk identification
o Good communication

Test Manager

 Role: Manages overall test activities for projects, defines strategy, budget, and
ensures quality delivery.
 Skills:
o Project management
o Reporting to stakeholders
o Resource & time estimation
o Metrics & reporting

4.2 Test Planning – IEEE 829 STANDARD Test Plan Template

A Test Plan defines the scope, approach, resources, and schedule of intended testing.
IEEE 829 Test Plan includes:

1. Test Plan Identifier


2. Introduction
3. Test Items (What to test)
4. Features to be Tested
5. Features not to be Tested
6. Approach (techniques, levels)
7. Item Pass/Fail Criteria
8. Suspension/Resumption Criteria
9. Test Deliverables
10. Staffing and Training Needs
11. Responsibilities
12. Schedule
13. Risks and Contingencies
14. Approvals

4.3 Test Process Monitoring & Control

4.3.1 Test Monitoring – Test Log & Defect Density

 Test Log (IEEE 829): Records test execution details – date, time, tester name,
results.
 Defect Density = Total number of defects / size of software (e.g., per 1000 lines of
code)
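
Worked example (assumed numbers): 30 defects found in a 15,000-line system gives
Defect Density = 30 / 15 KLOC = 2 defects per 1000 lines of code.
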

4.3.2 Test Status Reporting (IEEE 829 Test Summary Report)

 Test Summary Report Includes:


o Summary of test activities
o Test results (pass/fail rate)
o Defect status
o Lessons learned
o Recommendations

4.3.3 Test Control

 Actions taken to deal with deviations from the test plan.


 Includes:
o Re-planning
o Additional testing
o Adjusting schedules/resources

4.4 Test Scenario, Test Suite, Test Cases


 Test Scenario: High-level idea of what to test (e.g., “Login functionality”)
 Test Suite: Collection of related test cases
 Test Case: Step-by-step instructions with expected outcomes

Types of Test Cases:

 Positive: Checks system works with valid data


 Negative: Checks system fails gracefully with invalid data

IEEE 829 Test Case Specification Template:

1. Test Case ID
2. Test Item
3. Input Specifications
4. Output Specifications
5. Environmental Needs
6. Special Procedures
7. Expected Results
8. Actual Results
9. Pass/Fail Criteria

4.5 Configuration Management Support for Testing

 Manages versions and changes to test artifacts (test cases, data, scripts).
 Helps ensure:
o Correct version of test scripts are used
o Reproducibility of defects
o Controlled updates and tracking

Tools: Git, SVN, CVS

4.6 Risk and Testing

Project Risk

 Related to project management issues.


o Examples: schedule delays, resource shortages

Product Risk

 Related to software quality.


o Examples: security vulnerabilities, data loss, performance failure

Risk-Based Testing: Prioritize testing based on risk impact and likelihood.


4.7 Incident / Defect Management

4.7.1 Defect Life Cycle

1. New
2. Assigned
3. Open
4. Fixed
5. Retest
6. Closed or Reopened

Each stage has a responsible team, and tools like JIRA or Bugzilla help manage the cycle.

4.7.2 Defect/Incident Report – IEEE 829 Template

Includes:

1. Defect ID
2. Summary
3. Severity
4. Priority
5. Steps to Reproduce
6. Expected & Actual Results
7. Screenshots/Logs
8. Status (Open, Fixed, etc.)
9. Assigned to

Case Studies

Test Plan Case Study

 Application: e.g., Online Food Delivery System


 Create a test plan detailing:
o Functionalities to test (login, order, payment)
o Schedule and team
o Tools to be used (e.g., Selenium for automation)
o Entry/Exit Criteria

Test Case Case Study

 Feature: “Login”
o Positive: Correct email/password → login success
o Negative: Wrong email/password → error message
5. Tool Support for Testing
Software testing tools help automate and manage the testing process to improve efficiency,
accuracy, and coverage.

5.1 Types of Test Tools – CAST (Computer-Aided Software Testing)

Types, Purpose, Benefits & Risks:

 Test Management Tools – Purpose: manage test cases, planning, and execution (e.g., JIRA, TestRail). Benefit: better organization and reporting. Risk: learning curve; may not fit all processes.
 Functional Testing Tools – Purpose: automate UI and functional tests (e.g., Selenium). Benefit: reduces manual testing effort. Risk: UI changes can break scripts.
 Performance Testing Tools – Purpose: check speed, scalability, and load (e.g., JMeter). Benefit: simulates multiple users; finds bottlenecks. Risk: needs expertise to configure properly.
 Security Testing Tools – Purpose: detect security flaws (e.g., OWASP ZAP). Benefit: identifies vulnerabilities early. Risk: may miss complex logical vulnerabilities.
 Static Analysis Tools – Purpose: analyze code without executing it (e.g., SonarQube). Benefit: detects issues early, enforces coding standards. Risk: false positives; not always accurate.
 Defect Management Tools – Purpose: track and manage bugs (e.g., Bugzilla, JIRA). Benefit: improves collaboration. Risk: requires discipline from the team.
 Unit Testing Tools – Purpose: automate unit-level tests (e.g., JUnit, TestNG). Benefit: fast feedback for developers. Risk: limited to small parts of the code.
 Coverage Tools – Purpose: measure test coverage (e.g., JaCoCo). Benefit: ensures parts of the code are tested. Risk: high coverage doesn’t always mean quality.
 Build Tools – Purpose: automate builds/tests (e.g., Maven, Jenkins). Benefit: faster integration and delivery. Risk: misconfigurations can break builds.
 Test Data Generation Tools – Purpose: create valid/invalid data (e.g., Mockaroo). Benefit: helps test edge cases. Risk: data may not represent real-world scenarios.
 Database Testing Tools – Purpose: test database operations (e.g., SQLUnit). Benefit: validates backend operations. Risk: needs deep DB knowledge.
 GUI Testing Tools – Purpose: automate graphical interface testing (e.g., QTP/UFT). Benefit: quick UI regression testing. Risk: not flexible to design changes.
 Mobile Testing Tools – Purpose: test mobile apps (e.g., Appium). Benefit: cross-platform support. Risk: setup complexity.
 ETL Testing Tools – Purpose: verify data extraction, transformation & loading. Benefit: ensures data accuracy across systems. Risk: high dependency on data mapping.
 API Testing Tools – Purpose: test APIs (e.g., Postman, RestAssured). Benefit: easy to test backend logic. Risk: needs understanding of API requests/responses.

5.2 Introduction of a Tool into an Organization

Steps to introduce a test tool:

1. Needs Analysis – Identify what problems the tool should solve.


2. Tool Evaluation – Compare available tools (features, cost, support).
3. Pilot Project – Test tool in a small environment.
4. Training – Train testers and developers on usage.
5. Integration – Integrate with current systems (CI/CD, test plans).
6. Evaluation – Measure ROI (effort saved, bugs found).
7. Rollout – Implement across the team/project.

5.3 Testing Tools

5.3.1 Selenium (WebDriver + TestNG)

 Selenium WebDriver: Automates browser actions (clicks, forms, etc.)


 TestNG: Framework to organize test execution, assertions, reports

Benefits:

 Open-source
 Supports multiple browsers/languages (Java, Python, etc.)
 Great for regression testing
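
A minimal WebDriver + TestNG sketch (the URL and element locators are hypothetical; assumes a ChromeDriver binary available on the PATH):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginTest {
    @Test
    public void validLoginOpensDashboard() {
        WebDriver driver = new ChromeDriver();       // launch a browser session
        try {
            driver.get("https://example.com/login"); // hypothetical application URL
            driver.findElement(By.id("email")).sendKeys("user@example.com");   // hypothetical locators
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();
            Assert.assertTrue(driver.getTitle().contains("Dashboard"));        // TestNG assertion
        } finally {
            driver.quit();                           // always close the browser
        }
    }
}
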

5.3.2 JMeter

 Purpose: Performance and load testing tool by Apache.


 Use Cases:
o Simulate users on a web application
o Test API response time
 Supports: HTTP, FTP, Web Services, JDBC, etc.

5.3.3 Postman

 Purpose: API testing tool


 Features:
o Send requests (GET, POST, PUT, DELETE)
o Validate response codes, response time
o Create test suites with JavaScript
 Benefits:
o Easy to use UI
o Good for both manual and automated API testing

5.3.4 ETL Testing Tool

 Purpose: Verify data is correctly extracted, transformed, and loaded into target
systems (e.g., Data Warehouse)
 Popular Tools: QuerySurge, Informatica, Talend
 Use Cases:
o Data consistency check
o Null value verification
o Source-to-target data validation

5.4 JIRA (Project Management Tool)

 Purpose: Track bugs, manage agile projects (Scrum/Kanban), and test activities
 Features:
o Issue tracking
o Sprint planning
o Backlogs, boards, user stories
o Integration with other tools (Bitbucket, Confluence, Selenium)
 Benefits:
o Transparency
o Collaboration
o Real-time status updates
