SQA Chapter Two
Basics of Software
Testing
By Haimnaot D.
Introduction
Software testing is the process of evaluating a software application to
identify any defects, errors, or gaps in the expected functionality.
It ensures that the software meets specified requirements and
performs as intended.
Testing can be performed manually or with automated tools to verify
the following attributes of a software product:
Correctness
Completeness
Reliability
Objectives of Software Testing
Identify defects before deployment to avoid failures in production.
Ensure software reliability, efficiency, and performance.
Validate that software meets business and technical requirements.
Improve the quality of the software by finding and fixing bugs early.
Enhance security by identifying vulnerabilities that could be
exploited.
Reduce maintenance costs by detecting defects early in the
development process.
Ensure customer satisfaction by delivering a high-quality product.
Verify the software's compatibility with different environments and
platforms.
Validate data integrity and security measures.
Improve user experience by identifying usability issues.
Types of Software Testing
Manual Testing:
Testers manually execute test cases without using automation tools.
It is useful for exploratory, usability, and ad-hoc testing.
Automated Testing:
Uses automation tools to execute test scripts and compare actual outcomes with
expected results, improving efficiency and accuracy.
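
As a minimal sketch of this idea, assuming Python's built-in unittest framework: a scripted test runs the code and compares the actual outcome with the expected result automatically. The discount_price function is a hypothetical example, not part of any particular system.

    import unittest

    def discount_price(price, percent):
        # Hypothetical function under test: apply a percentage discount.
        return round(price * (1 - percent / 100), 2)

    class DiscountTests(unittest.TestCase):
        def test_ten_percent_discount(self):
            # The script compares the actual outcome with the expected result.
            self.assertEqual(discount_price(200.0, 10), 180.0)

    if __name__ == "__main__":
        unittest.main()
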
Functional Testing:
Validates that the software performs according to the defined functional
requirements.
Non-Functional Testing:
Focuses on aspects such as performance, usability, compatibility, and security.
Unit Testing:
Tests individual components or modules to verify that each works correctly in
isolation.
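
A small sketch of a unit test, assuming Python with pytest-style plain assert statements; is_leap_year is a hypothetical function verified in isolation, with no other modules involved.

    def is_leap_year(year):
        # Hypothetical unit under test: Gregorian leap-year rule.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def test_century_not_leap():
        assert not is_leap_year(1900)   # divisible by 100 but not by 400

    def test_four_hundred_is_leap():
        assert is_leap_year(2000)       # divisible by 400
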
Integration Testing:
Ensures that different modules work together correctly by testing their interactions.
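
By contrast, an integration test exercises modules together. The sketch below assumes two hypothetical Python classes, an in-memory repository and a service that depends on it, and checks their interaction rather than either piece alone.

    class InMemoryUserRepository:
        # Hypothetical data-access module.
        def __init__(self):
            self._users = {}

        def save(self, user_id, name):
            self._users[user_id] = name

        def find(self, user_id):
            return self._users.get(user_id)

    class GreetingService:
        # Hypothetical business module that depends on the repository.
        def __init__(self, repository):
            self._repository = repository

        def greet(self, user_id):
            name = self._repository.find(user_id)
            return f"Hello, {name}!" if name else "Hello, guest!"

    def test_service_and_repository_work_together():
        repo = InMemoryUserRepository()
        repo.save(1, "Abebe")
        assert GreetingService(repo).greet(1) == "Hello, Abebe!"
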
System Testing:
Tests the entire system as a whole to verify that all integrated components function
correctly.
Acceptance Testing:
Confirms that the software meets business requirements and is ready for
deployment.
Regression Testing:
Ensures that new changes or updates do not negatively impact existing
functionality.
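
A common way to support regression testing is to keep automated tests for previously fixed defects and re-run them after every change. The sketch below assumes pytest; normalize_phone and the listed defects are purely illustrative.

    import pytest

    def normalize_phone(number):
        # Hypothetical function that once mishandled spaces and dashes.
        return "".join(ch for ch in number if ch.isdigit())

    @pytest.mark.parametrize("raw, expected", [
        ("0911-123-456", "0911123456"),   # earlier defect: dashes were kept
        ("0911 123 456", "0911123456"),   # earlier defect: spaces were kept
    ])
    def test_previously_fixed_defects_stay_fixed(raw, expected):
        assert normalize_phone(raw) == expected
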
Performance Testing:
Evaluates how the software performs under different loads and stress conditions.
Security Testing:
Identifies vulnerabilities and ensures the application is protected against attacks.
Usability Testing:
Assesses how user-friendly and intuitive the software is.
Compatibility Testing:
Ensures that software functions properly on different devices, browsers, and
operating systems.
Load Testing:
Assesses how well the system handles a specified volume of transactions.
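
A rough sketch of the load-testing idea in plain Python: submit a specified volume of concurrent calls to a stand-in process_transaction function and time the batch. Real load tests are normally driven by dedicated tools such as JMeter or Locust; this is only to make the concept concrete.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def process_transaction(txn_id):
        # Hypothetical stand-in for a real system call.
        time.sleep(0.01)
        return txn_id

    def run_load(volume=500, workers=50):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(process_transaction, range(volume)))
        elapsed = time.perf_counter() - start
        print(f"Processed {len(results)} transactions in {elapsed:.2f}s")

    if __name__ == "__main__":
        run_load()
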
Stress Testing:
Determines the system's ability to remain stable under extreme conditions,
such as peak load or exhausted resources.
Exploratory Testing:
Performed without predefined test cases to uncover unexpected issues.
Principles of Software Testing
Testing Shows Presence of Defects:
Testing can demonstrate that defects exist, but it cannot prove the absence of
defects.
Exhaustive Testing is Impossible:
It is impossible to test every possible input and scenario, so risk-based and
prioritized testing is necessary. For example, a single 10-character
alphanumeric field already has 62^10 (about 8 × 10^17) possible values.
Early Testing:
Defects should be identified as early as possible in the development lifecycle to
reduce costs and effort.
Defect Clustering:
A small number of modules usually contain the majority of defects, following
the Pareto Principle (80/20 rule).
Pesticide Paradox:
Repeating the same test cases will eventually stop finding new defects, so test
cases should be updated regularly.
Testing is Context Dependent:
Different software applications require different testing approaches based on
their industry, risks, and usage.
Absence of Errors is a Fallacy:
Even if no defects are found, the software might not meet user expectations or
business needs.
Quality is Subjective:
What is considered "high quality" varies depending on user expectations and
industry standards.
Software Testing Process
Requirement Analysis
Understand the project requirements, both functional and non-
functional.
Identify testable features and clarify ambiguities with
stakeholders.
Define acceptance criteria for testing success.
Gather information about system architecture and
dependencies.
Identify potential risks and create a risk mitigation plan.
Test Planning
Develop a test strategy, scope, and approach.
Identify required resources, tools, and responsibilities.
Define test schedules, test deliverables, and success criteria.
Establish risk analysis and mitigation strategies.
Allocate test cases to team members based on expertise.
Plan for both manual and automated testing approaches.
Test Case Development
Design test cases based on business and technical requirements.
Prepare test data and define expected results.
Review and refine test cases to ensure coverage and effectiveness.
Develop test scripts for automated testing if applicable.
Implement boundary value analysis and equivalence partitioning (sketched below).
Create positive and negative test cases to ensure robustness.
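
A sketch of the boundary value analysis and equivalence partitioning mentioned above, assuming pytest and a hypothetical rule that valid ages run from 18 to 60 inclusive: one representative value is taken from each partition, plus values on either side of each boundary.

    import pytest

    def validate_age(age):
        # Hypothetical rule: applicants must be between 18 and 60 inclusive.
        return 18 <= age <= 60

    # Equivalence partitions: below range, inside range, above range.
    # Boundary values: 17/18 and 60/61 sit on either side of each edge.
    @pytest.mark.parametrize("age, expected", [
        (17, False), (18, True),    # lower boundary
        (35, True),                 # representative of the valid partition
        (60, True), (61, False),    # upper boundary
    ])
    def test_validate_age(age, expected):
        assert validate_age(age) == expected
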
Test Environment Setup
Configure the necessary hardware, software, and network settings.
Ensure test data availability and database readiness.
Verify that the test environment mirrors the production
environment as closely as possible.
Install and configure automation tools, if required.
Establish a continuous integration/continuous deployment (CI/CD)
pipeline for testing.
Test Execution
Execute test cases as per the test plan.
Document actual results and compare them with expected
outcomes.
Log defects in a tracking system and categorize them based on
severity and priority.
Conduct retesting and regression testing when necessary.
Perform exploratory testing to identify additional issues.
Validate performance and security requirements through
specialized testing.
Defect Reporting & Tracking
Identify, log, and document defects with detailed information (e.g.,
steps to reproduce, screenshots, severity, priority); a sketch of such a
record appears after this list.
Assign defects to developers for resolution.
Retest fixed defects and conduct regression testing to ensure no
new issues arise.
Track defect lifecycle from discovery to closure.
Generate defect trend reports and analyze recurring issues.
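
One way to picture the detailed defect record described in this list is as a small data structure. The fields below mirror the slide (steps to reproduce, severity, priority, status) and are illustrative only; real trackers such as Jira or Bugzilla define their own schemas.

    from dataclasses import dataclass, field
    from enum import Enum

    class Severity(Enum):
        CRITICAL = 1
        MAJOR = 2
        MINOR = 3

    @dataclass
    class Defect:
        # Illustrative defect record, not a specific tool's schema.
        defect_id: str
        summary: str
        steps_to_reproduce: list = field(default_factory=list)
        severity: Severity = Severity.MINOR
        priority: str = "Low"
        status: str = "New"   # lifecycle: New -> Assigned -> Fixed -> Retested -> Closed
        assigned_to: str = ""

    bug = Defect("BUG-101", "Login fails for valid users",
                 steps_to_reproduce=["Open login page", "Enter valid credentials", "Press Login"],
                 severity=Severity.CRITICAL, priority="High")
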
Test Closure
Evaluate test completion criteria (e.g., test case execution, defect
resolution, coverage achieved).
Document test results, lessons learned, and best practices for future
projects.
Conduct a retrospective meeting to discuss testing process
improvements.
Provide a final test summary report to stakeholders.
Archive test artifacts for future reference.
Assess overall testing effectiveness and recommend improvements.
Advanced Testing Techniques
White-Box Testing
Examines the internal structure, design, and implementation of the
software.
Used for unit testing and structural validation.
Examples include statement coverage, branch coverage, and path
coverage.
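
A small illustration of statement and branch coverage, assuming Python; classify_temperature is hypothetical. Each test exercises one branch of the if, and both together are needed for full statement and branch coverage. A tool such as coverage.py can report which lines and branches the tests actually reached.

    def classify_temperature(celsius):
        # Hypothetical unit with a single branch.
        if celsius >= 30:
            return "hot"
        return "normal"

    def test_hot_branch():
        assert classify_temperature(35) == "hot"      # covers the if-true branch

    def test_normal_branch():
        assert classify_temperature(20) == "normal"   # covers the if-false branch
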
Key Features of White-Box Testing
Requires knowledge of programming and internal system logic.
Focuses on improving the efficiency and security of the code.
Ensures that all statements, branches, and paths in the code are
tested.
Detects hidden errors in logical structures.
Black-Box Testing
Focuses on input and output without considering internal code
structure.
Includes functional, usability, and security testing.
Used for validating business logic and user experience.
Key Characteristics of Black-Box Testing:
Testers do not need programming knowledge.
It is based on requirements and specifications.
Test cases are designed to check expected outcomes based on
inputs.
It is primarily used for functional, usability, and security testing.
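
A black-box sketch in Python: the tests are derived only from a stated specification ("orders of 1000 or more ship free, otherwise the fee is 50") and check outputs for given inputs. The shipping_fee function and its figures are hypothetical and stand in for the system under test; a black-box tester would not read its code.

    def shipping_fee(order_total):
        # Stand-in for the system under test; its internals are irrelevant to the tester.
        return 0 if order_total >= 1000 else 50

    def test_free_shipping_at_threshold():
        # Derived purely from the specification, not from the implementation.
        assert shipping_fee(1000) == 0

    def test_fee_below_threshold():
        assert shipping_fee(999) == 50
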
Grey-Box Testing
A combination of white-box and black-box testing techniques.
Useful when partial knowledge of the internal system is available.
Often used for security and penetration testing.
Model-Based Testing
Uses models to represent the expected behavior of the system.
Generates test cases from these models for automated validation.
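
A toy sketch of model-based testing in Python: the expected behaviour of a hypothetical order workflow is written down as a transition table (the model), test cases are generated from that table, and the implementation is checked against them.

    # Model: allowed state transitions of a hypothetical order workflow.
    MODEL = {
        ("created", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("shipped", "deliver"): "delivered",
    }

    def apply_event(state, event):
        # Stand-in for the real implementation being validated against the model.
        if state == "created" and event == "pay":
            return "paid"
        if state == "paid" and event == "ship":
            return "shipped"
        if state == "shipped" and event == "deliver":
            return "delivered"
        return state  # any other event leaves the state unchanged

    def generate_test_cases(model):
        # Each modelled transition becomes one generated test case.
        return [(state, event, target) for (state, event), target in model.items()]

    def test_implementation_matches_model():
        for state, event, expected in generate_test_cases(MODEL):
            assert apply_event(state, event) == expected
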