
Testing Fundamentals

The document outlines the fundamentals of software testing and the Software Development Life Cycle (SDLC), detailing various methodologies such as Waterfall, Agile, Scrum, and DevOps. It emphasizes the importance of structured testing processes, including the Product Testing Life Cycle (PTLC) and Software Testing Life Cycle (STLC), to ensure software quality and reliability. Additionally, it discusses market demand for software testing, emerging trends, and the significance of testing levels in identifying defects throughout the development process.


TESTING FUNDAMENTALS

UNIT 1 - SYLLABUS
Overview of SDLC
SDLC Methodologies
SDLC Methodologies (Continued) --- (5 HOURS)
Introduction to Product Engineering
Product Testing Life Cycle --- (2 HOURS)
Introduction to Software Testing and its Importance
Market Demand for Software Testing
Levels of Testing --- (2 HOURS)
What is SDLC?
A structured process followed by software development teams to design, develop, test, and deploy software efficiently.

Phases of SDLC:
1. Requirement Gathering & Analysis: understand user needs and document requirements.
2. Planning: define project scope, timeline, resources, and cost estimates.
3. Design: create architecture and design documents (UI, databases, APIs, etc.).
4. Development: actual coding of the application according to design specs.
5. Testing: verify software functionality, performance, and security.
6. Deployment: release the software to the production environment.
7. Maintenance: ongoing updates, bug fixes, and improvements.

Key Benefits:
• Improved project management
• Enhanced quality assurance
• Predictable timelines and costs
SDLC Methodologies

1. Waterfall Model
• Sequential & linear approach.
• Each phase must be completed before the next begins.
• Best for well-defined requirements.

2. Agile Model
• Iterative and incremental approach.
• Emphasizes collaboration, flexibility, and customer feedback.
• Delivers working software in short cycles (sprints).

3. Scrum
• A type of Agile framework.
• Roles: Product Owner, Scrum Master, Development Team.
• Work divided into sprints (2–4 weeks).

4. Spiral Model
• Combines the iterative nature of Agile with risk analysis.
• Repeats each phase in “spirals” for progressive refinement.

5. V-Model (Validation & Verification)
• Extension of Waterfall with a focus on testing.
• Every development stage has a corresponding testing phase.

6. DevOps
• Focuses on integration between development and operations.
• Continuous Integration (CI) & Continuous Deployment (CD).
• Automation, monitoring, and collaboration emphasized.
Waterfall Model (Basic Model)

Key Characteristics:
• Simple and easy to manage.
• Best for small projects with clear, fixed requirements.
• Progress is easy to measure by completed phases.

The Waterfall Model is a linear and sequential SDLC methodology where each phase must be completed before the next begins. It follows a top-down approach, much like a waterfall flowing down through steps.
Agile Model

An iterative and incremental approach.
• Emphasizes collaboration, flexibility, and customer feedback.
• Delivers working software in short cycles (sprints).

 A series of shorter development cycles (requirements, designing, building, and testing).
 Incremental development involves establishing requirements, designing, building, and testing a system in pieces, so the software’s features grow incrementally. The size of these feature increments varies.
 Involves overlapping test levels throughout development.
 Teams use continuous delivery and continuous deployment (delivery pipelines).
 Regression testing is a key player.

 Examples:
 RUP: Rational Unified Process (long iterations, two to three months)
 Scrum: shorter iterations (hours, days, a few weeks)
 Kanban

 Iterative and incremental models may deliver usable software in weeks or even days, but may only deliver the complete set of requirements over a period of months or even years.
Kanban Model

Definition:
The Kanban Model is a visual workflow management method used to manage and improve work across human systems. It originated from Toyota's lean manufacturing system and is widely used in Agile software development.

Core Principles of Kanban:

1. Visualize the Workflow
• Use a Kanban Board with columns like To Do, In Progress, and Done.
• Tasks (cards) move through the columns.
2. Limit Work in Progress (WIP)
• Restrict the number of tasks in each stage to avoid overload and increase focus.
3. Manage Flow
• Ensure tasks move smoothly through the process.
• Identify and eliminate bottlenecks.
4. Make Process Policies Explicit
• Everyone understands the rules and expectations.
5. Improve Collaboratively
• Encourage continuous improvement through feedback and retrospectives.
Advantages:
• Simple, flexible, and visual.
• Great for ongoing maintenance or support work.
• Easy to adopt with minimal changes to existing processes.

Disadvantages:
• Less structured than Scrum.
• May not suit large, complex projects without adaptation.
• Success depends on team discipline and communication.

Use Case:
• Ideal for support teams, maintenance projects, or any scenario where continuous delivery and flexibility are essential.
Software Development Models in Context
 Software development models must be selected and adapted to the context of project and product characteristics:
 Goal of the project
 Type of product
 Business priorities (e.g., time-to-market)
 Product and project risks

Examples:
 In some cases, organizational and cultural issues may inhibit communication between team members, which can impede iterative development.
 An Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.
Scrum Model in SDLC
Scrum is a way of managing projects, especially in software development.
Spiral Model

V-Model (Validation & Verification)
V-Model (Sequential Model)

 Common V-models use 4 test levels corresponding to 4 development levels.
 There can be more, fewer, or different levels depending on the project.
 Testing can start early.
 Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.
Validation and Verification
 In the V-model, validation and verification are carried out during the development of the software work products.

 Validation: Are we building the right product? When the software is in its environment, will it fulfill the goals and needs of the end user?

 Verification: Are we building the product right? While the software is in development, are the standards, specifications, and guidelines applied correctly?
DevOps Model
Introduction to Product Engineering

•What is Product Engineering?

•Lifecycle of Product Engineering

•Key Disciplines & Roles

•Product vs. Project Mindset

•Tools & Technologies

•Real-World Examples
What is Product Engineering?
•Definition: End-to-end process of designing, developing, testing, and maintaining software products.
•Focus: User-centric design + scalability + quality.

Product Engineering Lifecycle
•Phases:
 • Ideation & Market Research
 • Product Design (UX/UI)
 • Development
 • Testing & QA
 • Deployment
 • Maintenance & Feedback Loop

Key Roles in Product Engineering
•Product Manager
•UX/UI Designer
•Software Engineers
•QA Engineers
•DevOps
•Customer Success & Support

Tools & Technologies
•Design: Figma, Adobe XD
•Dev: Git, VS Code, CI/CD
•Testing: Selenium, Postman, JUnit
•Collaboration: Jira, Slack, Notion

Product vs. Project Mindset

Feature          Product              Project
Focus            Long-term outcome    Short-term output
Lifecycle        Continuous           Fixed duration
Success Metric   User adoption, ROI   Timely delivery, budget

Challenges in Product Engineering
•Shifting requirements
•Technical debt
•Balancing innovation with stability
•Cross-functional collaboration

Real-World Examples
•Spotify: User-centric music experience.
•Tesla: Software-first approach to automotive.
•Airbnb: Agile and data-driven product cycles.
1. Market & User Research
→ Understand target users, pain points, and competition.
2. Product Ideation & Strategy
→ Define product vision, goals, and core value proposition.
3. Requirements Engineering
→ Gather and document detailed functional & non-functional requirements.
4. Architecture & System Design
→ Design technical architecture, APIs, and data flow.
5. UI/UX Design
→ Create wireframes, prototypes, and visual design aligned with user needs.
6. Product Development
→ Code frontend/backend, integrate systems, implement features.
7. Quality Assurance & Testing
→ Perform unit, integration, performance, and UAT testing.
8. Deployment & Release Engineering
→ Package, release, and deploy software using CI/CD tools.
9. Monitoring & Feedback Collection
→ Track user behaviour, app performance, and gather feedback.
10. Maintenance & Iterative Improvement
→ Fix bugs, enhance features, and evolve product roadmap.
Product Testing Life Cycle

Agenda
1. What is Product Testing Life Cycle?
2. Why Testing is Critical in Product Engineering
3. PTLC Phases (Detailed)
4. Tools & Techniques
5. Challenges in Product Testing

What is Product Testing Life Cycle (PTLC)?
•A structured approach to plan, design, execute, and evaluate product testing.
•Ensures product quality, reliability, and user satisfaction.
•Covers all test-related activities from requirement analysis to post-release validation.

PTLC – Phases Overview
Requirement Analysis → Test Planning → Test Case Design → Test Environment Setup → Test Execution → Defect Reporting → Test Closure

Phase 1 – Requirement Analysis
•Understand business & system requirements.
•Identify testable and non-testable features.
•Participate in reviews with stakeholders.
Output: RTM (Requirement Traceability Matrix)

Phase 2 – Test Planning
•Define scope, objectives, deliverables, and risk mitigation.
•Estimate resources, schedule, and tools.
Output: Test Plan Document

Phase 3 – Test Case Design
•Create detailed test cases based on requirements.
•Include positive, negative, boundary, and edge test cases.
•Review and approve test cases.
Output: Test Case Document
Phase 4 – Test Environment Setup
•Prepare hardware, software, network, and tools.
•Mimic the production environment as closely as possible.
Output: Ready test environment

Phase 5 – Test Execution
•Run test cases manually or via automation.
•Record pass/fail results and attach evidence.
Output: Test Execution Report

Tools & Techniques
•Test Management: TestRail, Zephyr
•Bug Tracking: Jira, Bugzilla
•Automation: Selenium, Cypress, JUnit
•CI/CD: Jenkins, GitHub Actions

Phase 7 – Test Closure
•Ensure all tests are executed or deferred.
•Summarize testing outcomes and lessons learned.
•Archive test assets and hold a retrospective.
Output: Test Summary Report

Challenges in Product Testing
•Changing requirements
•Limited test data or environments
•Time constraints for regression
•Defect leakage to production
Introduction to Software Testing

Agenda
1. What is Software Testing?
2. Objectives of Software Testing
3. Types of Testing
4. Testing Life Cycle (STLC)
5. Manual vs. Automation Testing
6. Importance of Software Testing
7. Tools Used in Industry
8. Real-world Examples & Case Studies
What is Software Testing?
•Definition: Process of evaluating software to find defects and ensure it meets requirements.
•Includes validation (are we building the right product?) and verification (are we building it right?).
•Performed at all stages of development.

Why is Testing Required?
•Identify defects early.
•Ensure reliability, security, and performance.
•Meet customer requirements.
•Prevent costly post-release failures.

Objectives of Software Testing
•Ensure the product meets business and technical requirements.
•Identify bugs before the product reaches users.
•Improve product quality and confidence.
•Reduce development and maintenance costs.

Types of Software Testing
•Functional: Unit, Integration, System, Acceptance.
•Non-functional: Performance, Security, Usability, Compatibility.
•Others: Regression, Smoke, Sanity, Exploratory.

Software Testing Life Cycle (STLC)
1. Requirement Analysis
2. Test Planning
3. Test Case Design
4. Environment Setup
5. Test Execution
6. Defect Logging & Tracking
7. Test Closure

Importance of Software Testing
•User satisfaction: Stable and smooth user experience.
•Cost efficiency: Early bug detection saves resources.
•Security: Prevents vulnerabilities and breaches.
•Business reputation: Protects brand credibility.

Tools Used in Testing
•Test Management: TestRail, Zephyr
•Bug Tracking: Jira, Bugzilla
•Automation: Selenium, Cypress, Playwright
•Performance: JMeter, LoadRunner
•CI/CD: Jenkins, GitHub Actions
Market Demand for Software Testing

Why Software Testing Is in Demand
•Rising complexity of applications (web, mobile, IoT, AI-based).
•Shift-left trend: testing earlier in the SDLC.
•Need for continuous delivery and integration.
•Prevents high-cost production bugs.
•Compliance and data privacy needs (GDPR, HIPAA).

Industry Trends & Statistics (as of 2024/2025)
•The global software testing market is expected to exceed $70 billion by 2030.
•Automation testing is growing at a CAGR of ~18%.
•DevOps + QA engineers are in high demand, with salaries comparable to developers.
•Gartner: 60% of digital transformation projects are delayed due to poor QA.

Emerging Areas of Testing
•Test Automation (Selenium, Playwright, Cypress)
•Performance Testing (JMeter, k6, LoadRunner)
•Security Testing (OWASP, penetration testing)
•AI & ML Testing (bias detection, model drift)
•Cloud Testing (AWS, Azure test tools)
•IoT and Embedded Systems Testing

Demand by Sector
•Fintech: Security and compliance are critical.
•E-commerce: Performance testing is essential (Black Friday, flash sales).
•Healthcare: Regulatory testing (HIPAA).
•Gaming: UX, load, and latency testing.

Demand by Region
•USA: High demand for SDETs (Software Development Engineers in Test).
•Europe: GDPR compliance testing growing.
•India: Global QA hub with growing startup and outsourcing opportunities.

Roles in Demand
•QA Analyst / QA Engineer
•Test Automation Engineer
•Performance Tester
•SDET (Software Development Engineer in Test)
•QA Lead / Test Manager
Introduction to Testing Levels
•Definition: Testing levels refer to the scope and stage of software testing applied at various points of the development lifecycle.
•Analogy: Think of testing as layers of armor protecting the software from failure.
•Purpose: Identify different categories of bugs as early as possible.
Unit Testing
🔹 Definition: Testing of individual functions, methods, or components in isolation.
🔹 Performed By: Usually developers, using test frameworks.
🔹 Tools: JUnit (Java), NUnit (.NET), PyTest (Python), Jasmine (JS)
🔹 Key Points:
•Fast and frequent
•Focuses on internal logic correctness
•Test-driven development (TDD) relies on strong unit testing
Example: A function that calculates the total cost of items with tax, tested with different inputs.

Integration Testing
🔹 Definition: Testing interfaces and interactions between integrated modules.
🔹 Purpose: To verify that units/modules work together as expected.
🔹 Types:
•Big Bang
•Top-down
•Bottom-up
•Incremental
🔹 Tools: Postman (for APIs), REST Assured, JUnit (with mocks), TestNG
Example: A user login module interacting with database authentication.
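The unit-testing example above (a function that calculates the total cost of items with tax) can be sketched in Python; the function name, rates, and rounding rule are assumptions for illustration, not from the course.

```python
def total_with_tax(prices, tax_rate):
    """Return the total cost of items with tax applied (hypothetical spec)."""
    if tax_rate < 0:
        raise ValueError("tax rate cannot be negative")
    return round(sum(prices) * (1 + tax_rate), 2)

# Unit tests exercise the function in isolation with different inputs.
def test_typical_order():
    assert total_with_tax([10.00, 5.50], 0.20) == 18.60

def test_empty_order():
    assert total_with_tax([], 0.20) == 0.00

def test_negative_rate_rejected():
    try:
        total_with_tax([10.00], -0.05)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each test is fast, deterministic, and names the behavior it checks, which is what makes unit suites cheap to run on every change.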
System Testing
🔹 Definition: Testing the entire system as a whole to validate end-to-end functionality.
🔹 Performed By: QA team in a controlled environment.
🔹 Types:
•Functional Testing
•Non-functional Testing (Performance, Security)
🔹 Tools: Selenium, QTP, LoadRunner
Example: Booking a flight ticket through a web application, verifying all steps from search to payment.

Acceptance Testing
🔹 Definition: Validating that the system meets business requirements and is ready for release.
🔹 Performed By: Clients, business users, or QA (User Acceptance Testing, UAT).
🔹 Types:
•Alpha Testing (by internal staff)
•Beta Testing (by external users)
Example: A bank wants to ensure that new loan application software meets all policy criteria.

🔹 Regression Testing:
•Re-running test cases after changes to ensure nothing breaks.
🔹 Smoke & Sanity Testing:
•Smoke: Basic checks to ensure system stability.
•Sanity: Quick evaluation of bug fixes or small changes.
🔹 End-to-End Testing:
•Testing user flows across multiple systems/components.
Can We Test Everything? - A Numerical Example

 Assume we have a system with:
 20 screens
 4 menus/screen
 3 options/menu
 10 fields/screen
 2 types of input/field
 100 possible values/input

 Thus, to test it fully, we need 20 × 4 × 3 × 10 × 2 × 100 = 480,000 tests.
 At 1 sec per test, that is about 133 hours, or roughly 17.7 working days!
 And we are not counting finger troubles. 
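The slide's arithmetic can be checked in a few lines. Note the 17.7-day figure only works out if a "day" means roughly 7.5 working hours; that conversion is an assumption the slide leaves implicit.

```python
# Reproduce the slide's exhaustive-testing arithmetic.
screens, menus, options, fields, input_types, values = 20, 4, 3, 10, 2, 100
total_tests = screens * menus * options * fields * input_types * values
assert total_tests == 480_000

seconds = total_tests * 1          # at 1 second per test
hours = seconds / 3600             # about 133.3 hours
working_days = hours / 7.5         # about 17.8 days at 7.5 h/day (assumed)
```

The point survives any choice of day length: even this tiny system makes exhaustive testing impractical, which is why techniques like equivalence partitioning exist.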
Common Test Levels
 Component (unit) testing
 Integration testing
 System testing
 Acceptance testing

Test Level
 A point in the development model where testing is done.
 A development model can have more than one test level, depending on the model and the project.
 Each test level is characterized by: generic objectives, test basis, test objects, typical defects, approaches, and responsibilities.
Component Testing
 AKA unit or module testing
 Done in isolation through the use of stubs, drivers, harnesses, and simulators
 Functional, specific non-functional, and structural tests

Test basis:
 Detailed design
 Code
 Data model
 Component specifications

Test objects:
 Components, units, or modules
 Code and data structures
 Classes
 Database modules
Component Testing cont’d
Objectives:
 Reducing risk
 Finding defects in the component
 Preventing defects from escaping to higher test levels
 Building confidence in the component’s quality
 Verifying the functionality of the module based on its design

Typical defects:
 Incorrect functionality
 Data flow problems
 Incorrect code and logic
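Isolation via stubs, as mentioned above, can be sketched as follows; the price-service component and its names are hypothetical, chosen only to illustrate the technique.

```python
class PriceService:
    """Interface the component depends on; the real one hits the network."""
    def price_of(self, item_id):
        raise NotImplementedError

def order_total(item_ids, service):
    """Component under test: sums prices looked up via the service."""
    return sum(service.price_of(i) for i in item_ids)

class StubPriceService(PriceService):
    """Stub: returns canned answers so the component runs in isolation."""
    def __init__(self, prices):
        self.prices = prices
    def price_of(self, item_id):
        return self.prices[item_id]

# The component's logic is verified without any real dependency.
stub = StubPriceService({"book": 12, "pen": 3})
assert order_total(["book", "pen"], stub) == 15
```

Drivers work the other way around: they call the component under test when its usual caller does not exist yet.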
Test-First Approach
 Usually involves the programmer.
 Example: TDD (test-driven development).
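A minimal sketch of the TDD cycle, with an illustrative leap-year function that is not from the slides: the test is written first, then just enough code to make it pass.

```python
# Step 1 - red: write a failing test before the implementation exists.
def test_is_leap():
    assert is_leap(2024) is True
    assert is_leap(2023) is False
    assert is_leap(1900) is False   # century year, not divisible by 400
    assert is_leap(2000) is True

# Step 2 - green: write just enough code to make the test pass.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3 - refactor: clean up while the test keeps passing.
test_is_leap()
```

The failing test fixes the specification before any code exists; the suite then acts as a safety net for every later refactor.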
Integration Testing
Two kinds: system integration testing and component integration testing.

 Tests interfaces and interactions between different components.
 Testing non-functional characteristics may be included.
 Focuses on the integration itself, not on the functionality of the individual components.
 The greater the integration scope, the harder it is to isolate a defect to a certain component.
 Testers should understand the software architecture and influence the integration planning.
 This may impact the order of development as well.
Integration Testing cont’d
Test basis:
 Software and system design
 Sequence diagrams
 Interface and communication protocol specs
 Use cases
 Architecture at component or system level
 Workflows
 External interface definitions

Test objects:
 Subsystems
 Databases
 Infrastructure
 Interfaces
 APIs
Integration Testing cont’d
Objectives:
 Reducing risk
 Finding defects in the interfaces or in subsystems
 Preventing defects from escaping to higher test levels
 Building confidence in the quality of the interfaces
 Verifying the behavior of the interfaces based on their design

Typical defects:
 Incorrect data, missing data, or incorrect data encoding
 Incorrect sequencing or timing of interface calls
 Interface mismatch
 Failures in communication between components/systems
 Unhandled or improperly handled communication failures between components/systems
 Inconsistent message structures between systems
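Several of these interface defects (wrong call data, missing calls, bad sequencing) can be caught with interaction-based tests. A sketch using Python's standard `unittest.mock`, with a hypothetical login flow standing in for the integrated components:

```python
from unittest.mock import Mock

def login(username, password, auth_backend):
    """Component whose interaction with the backend interface we verify."""
    if not auth_backend.verify(username, password):
        return "denied"
    auth_backend.record_login(username)
    return "ok"

# Integration-style check: the assertions target the interface contract
# (which calls happen, with what data), not either component's internals.
backend = Mock()
backend.verify.return_value = True
assert login("alice", "s3cret", backend) == "ok"
backend.verify.assert_called_once_with("alice", "s3cret")
backend.record_login.assert_called_once_with("alice")
```

A wrong argument, a skipped `record_login`, or a call made in the denied branch would all fail these assertions, which maps directly onto the defect list above.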
Integration Testing Strategies
 Integration should be incremental rather than big-bang.
 Can be based on:
 Structure (top-down or bottom-up)
 System functionality
 Or even a mix

Structural Strategies Examples
Top-down approach vs. bottom-up approach

Functional Strategies Examples
Minimum capability vs. thread capability

Big-Bang Strategy Example
System Testing
 Testing the system as a whole
 End-to-end behavior is the focus.

Test basis:
 System and software requirement specs
 Risk analysis reports
 Use cases/user stories
 Models of system behavior
 State diagrams
 System and user manuals

Test objects:
 Applications
 Hardware/software systems
 Operating systems
 System under test (SUT)
 System configuration and configuration data
System Testing
Objectives:
 Reducing risk
 Verifying/validating behaviors of the system
 Building confidence in the quality of the whole system
 Finding defects
 Preventing defects from escaping to higher test levels or production

Typical defects:
 Incorrect system behavior
 Incorrect control and/or data flows within the system
 Failure to properly carry out end-to-end functions
 Failure of the system in the production environment
System Testing cont’d
 Different approaches are used, based on:
 Risk
 Requirements
 Business processes
 Use cases

 Tests functional and non-functional aspects of the software product.
 Mixes black-box and white-box techniques.
 Usually carried out by an independent test team.
Acceptance Testing
Test basis:
 Business processes
 User or business requirements
 Regulations, legal contracts, and standards
 Use cases
 Installation procedures
 Risk analysis reports

Test objects:
 System under test
 System configuration and configuration data
 Business processes for a fully integrated system
 Operational and maintenance processes
 Forms, reports
 Existing and converted production data
Acceptance Testing
Objectives:
 Establish confidence in the system; not primarily to find bugs.
 Usually done by customers, system users, or other stakeholders.
 Assess the readiness of the system for deployment or use.

Typical defects:
 System workflows do not meet business or user requirements
 Business rules are not implemented correctly
 Contractual or regulatory requirement violations
 Non-functional failures, such as security or efficiency under high loads
When is Acceptance Testing Done?
 It may not be the last testing step.
 What about system integration?
 It may occur at various times in the life cycle.
 What about new functionality?
 What about the usability of a component?
Variations in Acceptance Testing

User Acceptance Test (UAT)
• Verifies fitness for use by business users.

Operational Acceptance Test
• Verifies fitness for use by system administrators.
• Testing of backup and restore
• Installing, uninstalling, and upgrading
• Disaster recovery
• Maintenance tasks

Alpha (factory) and Beta (field or site) Acceptance Test
• Alpha: at the development site, but not by the developers.
• Beta: in the field, by potential users.

Contract and Regulation Acceptance Test
• For custom-developed systems.
• Against contractual requirements agreed upon earlier, or against regulations.
TESTING FUNDAMENTALS
UNIT 2 - SYLLABUS

Types of Testing --- (10 HOURS)
Introduction to Software Testing Life Cycle
Test Planning - Testing Strategy, Test Plan (Detailed Test Cases) --- (7 HOURS)
Test Types
 Tests can be grouped based on specific targets, reasons, or objectives:
 Functional testing
 Non-functional testing
 Structural testing
 Testing related to change
 Software models can be developed and used in each of these test types.
Testing of Function (Functional Testing)
 Testing “what the system does”
 Considers the external system behavior
 May be performed at all test levels
 Drawn from requirements and specifications
 Includes accuracy, suitability, interoperability, and security
Testing of Non-Functional Software Characteristics (Non-Functional Testing)
 Testing “how well” the system behaves
 Covers tests that measure characteristics of software that can be quantified on a varying scale
 Considers the external system behavior
 May be performed at all test levels
 Drawn from requirements and specifications
Types of Non-Functional Testing
 Performance testing
 Load testing
 Stress testing
 Usability testing
 Maintainability testing
 Reliability testing
 Portability testing
Testing of Software Structure/Architecture (Structural Testing)
 Testing “how the system does it”
 Looks at internal structures
 May be performed at all test levels (especially at component and integration testing)
 Best used after specification-based testing, to measure the thoroughness of the tests
Testing Related to Changes: Re-testing and Regression Testing
 Re-testing (confirmation testing) confirms that a defect has been removed after a fix.
 Regression testing is the repeated testing of an already-tested program after modifications, to discover any side effects introduced by the changes.
 May be performed at all test levels
 A good candidate for automation
 Tests should be repeatable to be used in this type of testing.
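Because regression tests must be repeatable, they automate well. A toy sketch, where the `slugify` function under maintenance is illustrative and not from the course:

```python
def slugify(title):
    """Function under maintenance (illustrative)."""
    return "-".join(title.lower().split())

# Deterministic cases: the same suite is re-run unchanged after
# every modification, so any regression surfaces immediately.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Spaces  Everywhere ", "spaces-everywhere"),
    ("already-slugged", "already-slugged"),
]

def run_regression():
    """Return the cases that no longer pass (side effects of a change)."""
    return [(inp, want, slugify(inp))
            for inp, want in REGRESSION_CASES
            if slugify(inp) != want]

assert run_regression() == []   # empty list means no regressions
```

In practice the case list grows with every fixed bug: each confirmed defect contributes a case that pins the corrected behavior down.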
Released Software and Maintenance Testing
 Released software may work for years or decades.
 The software or its environment is often corrected, changed, or extended.
 Planning releases in advance is crucial for successful maintenance testing:
 Planned releases
 Hot fixes
 Maintenance testing is done on operational systems and is triggered by:
 Modifications
 Migration
 Retirement
More about Maintenance Testing
 Includes regression testing of unchanged parts
 Its scope depends on the risk of the change, the size of the change, and the size of the existing system.
 May be done at all test levels
 Impact analysis determines how much regression testing is needed.
 Difficult if specifications are outdated or missing, or if experienced testers are not available.
Modifications Examples
 Planned enhancements/releases
 Corrective and emergency changes
 Environment changes/planned upgrades
 Patches to solve security issues

Migrations Examples
 From one platform to another
 Operational tests of the new environment, and corresponding changes in the software
 Migrating data from one platform to another
 Migration testing is also called conversion testing.

Retirement Examples
 Testing data migration to the new system
 Data archiving for long retention periods
Question 1
 Which of the following is most correct regarding the test levels at which functional tests may be executed?
a. Unit and integration
b. Integration and system
c. System and acceptance
d. All levels
Question 2
 Operational acceptance testing is best described as:
a. Testing done against contractual requirements or regulations
b. Testing done from the business users’ perspective
c. Testing done at the development organization, but not by the developers
d. Testing done to check administration functions
Question 3
 Which of the following is a true statement regarding the V-model lifecycle?
a. Testing involvement starts when the code is complete
b. The test process is integrated with the development process
c. The software is built in increments, and each increment has activities for requirements, design, build, and test
d. All activities for development and test are completed sequentially
Question 4
 Which sentence is false?
a. Structural tests are done ONLY based on code structures.
b. Functional testing is done more than structural testing.
c. Independence increases as the level of testing increases.
d. End users may be involved in acceptance testing.
Question 5
 Usability testing is an example of which type of testing?
a. Functional
b. Non-functional
c. Structural
d. Change-related
Question 6
 Which of the following is more likely to use an incremental testing approach?
a. Component testing
b. System testing
c. Integration testing
d. Acceptance testing
Question 7
 What type of testing is normally conducted to verify that a product meets a particular regulatory requirement?
a. Unit testing
b. Integration testing
c. System testing
d. Acceptance testing
Introduction to Software Testing Life Cycle

What is STLC?
•Definition: STLC is a systematic process that defines a series of activities conducted during software testing.
•Purpose: To ensure software quality through structured and repeatable testing processes.
•Testing is not a single phase, but a life cycle with multiple steps.

Why STLC is Important
•Reduces the chance of missing bugs
•Ensures a structured approach
•Improves efficiency, traceability, and accountability
•Helps teams align with the SDLC (Software Development Life Cycle)

STLC Phases

🔹 1. Requirement Analysis
•Understand “what needs to be tested”
•Analyze functional & non-functional requirements
•Identify testable requirements
•Deliverables: Requirement Traceability Matrix (RTM)

🔹 2. Test Planning
•Define strategy, effort estimation, resource planning
•Determine scope, risks, entry & exit criteria
•Select testing tools
•Deliverables: Test Plan, Risk Mitigation Plan

🔹 3. Test Case Design
•Write test scenarios, test cases, and prepare test data
•Use boundary value analysis, equivalence partitioning, decision tables
•Deliverables: Test Cases, Test Data

🔹 4. Test Environment Setup
•Configure hardware, software, and network settings
•Prepare databases, servers, staging areas
•Tools: Docker, Jenkins, AWS, Azure, BrowserStack

🔹 5. Test Execution
•Execute test cases and log pass/fail
•Raise defects/bugs for failed cases
•Re-test and regression test fixed issues
•Deliverables: Test Execution Report, Bug Reports

🔹 6. Defect Reporting & Tracking
•Log defects in tools like JIRA, Bugzilla
•Assign severity & priority
•Monitor status: Open → In Progress → Resolved → Closed

🔹 7. Test Closure
•Assess test completion criteria
•Final test metrics, documentation, retrospective
•Lessons learned, best practices
•Deliverables: Test Summary Report, Closure Memo
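The defect status flow from phase 6 can be sketched as a tiny state machine. The reopen edge (Resolved back to In Progress) is an assumption covering a failed re-test; the slide lists only the forward path.

```python
# Allowed defect status transitions (Open -> In Progress -> Resolved
# -> Closed, plus an assumed reopen edge for failed re-tests).
ALLOWED = {
    "Open": {"In Progress"},
    "In Progress": {"Resolved"},
    "Resolved": {"Closed", "In Progress"},   # reopened if re-test fails
    "Closed": set(),
}

def advance(status, new_status):
    """Move a defect to new_status, rejecting illegal transitions."""
    if new_status not in ALLOWED[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# Walk the happy path from the slide.
status = "Open"
for nxt in ("In Progress", "Resolved", "Closed"):
    status = advance(status, nxt)
assert status == "Closed"
```

Bug trackers such as Jira enforce exactly this kind of transition table through their configurable workflows.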
Test Design Techniques
 Categories of Test Design Techniques
 Specification-Based or Black-Box Techniques
 Structure-Based or White-Box Techniques
 Experience-Based Techniques
 Choosing Test Techniques
Learning Objectives
 4.1 Categories of Test Techniques
 (K2) Explain the characteristics, commonalities, and differences between black-box test techniques, white-box test techniques, and experience-based test techniques
 4.2 Black-box Test Techniques
 (K3) Apply equivalence partitioning to derive test cases from given requirements
 (K3) Apply boundary value analysis to derive test cases from given requirements
 (K3) Apply decision table testing to derive test cases from given requirements
 (K3) Apply state transition testing to derive test cases from given requirements
 (K2) Explain how to derive test cases from a use case
 4.3 White-box Test Techniques
 (K2) Explain statement coverage
 (K2) Explain decision coverage
 (K2) Explain the value of statement and decision coverage
 4.4 Experience-based Test Techniques
 (K2) Explain error guessing
Test Design Techniques
 Categories of Test Design Techniques
 Specification-Based or Black-Box Techniques
 Structure-Based or White-Box Techniques
 Experience-Based Techniques
What are Test Techniques?
 Best practices for reaching an optimal set of tests
 Systematic, and a good base for automation
 The purpose of a test technique is to help identify test conditions, test cases, and test data.
 Can be used in any test activity, for any test type, at any test level
Factors for Choosing Test Techniques
 System type
 System complexity
 Regulatory standards
 Customer or contractual requirements
 Risk levels
 Risk types
 Test objectives
 Available documentation
 Tester knowledge and skills
 Available tools
 Time and budget
 Software development lifecycle model
 Expected use of the software
 Previous experience with using the test techniques on the component or system to be tested
 The types of defects expected in the component or system
Categorization of Test Techniques
 Classical classification of techniques is black-box or
white-box.

 Some techniques may not fall under a single


category.
Black-Box vs. White-Box

Black-Box Testing
 Also called behavior-based techniques
 Specification-based (requirements, use cases, etc.)
 Based on analysis of test basis documentation and the experience
of testers and users, whether functional or non-functional, for
components and systems

White-Box Testing
 Structure-based tests
 Based on analysis of the structures of components and systems
 Normally follows black-box tests to assess and increase coverage
Test Design Techniques
 The Test Development Process

 Categories of Test Design Techniques

 Specification-Based or Black-Box Techniques

 Structure-Based or White-Box Techniques

 Experience-Based Techniques
We will Cover
 Equivalence partitioning (EP)

 Boundary value analysis (BVA)

 Decision table (DT) testing

 State transition testing

 Use case testing
Equivalence Partitioning (EP)
 Inputs are divided into groups that exhibit the same
behavior.
 Processed in same way

 EP’s can be found for valid and invalid data.


 Valid: values should be accepted
 Invalid: value should be rejected

 EP’s can be also identified for outputs, internal values,


time-related values and interface parameters.
Equivalence Partitioning (EP)
cont’d
 Testing is done for a single value in every
partition.
 Reduction in # of test cases needed

 Used to achieve input and output coverage


goals

 Can be applied to all testing levels

 Coverage = (Partitions tested/Total partitions) x 100

(Figure: number line with a valid partition between 1 and 100 and invalid partitions outside it)
EP Example
 Assume that a PoS processes payment if the card used is
one of 2 types (VISA, Master Card), the PoS is connected
to the network, and the payment is approved by the
banking system.

(Figure: EP diagram for the PoS example)
 Card type EPs: valid = {VISA (VI), MasterCard (MC)}, invalid = {other (?)}
 Payment approved EPs: {Y}, {N}
 PoS connected EPs: {Y}, {N}
EP Example cont’d
 We can have 1 of 2 test sets depending on our knowledge of how
it is coded.

• Coded in the form if, else if, else if, else, or unknown:
valid independent inputs are combined first; one invalid input is
tested at a time.

TC  Card      Connected  Approved
1   VI        Y          Y
2   MC        Y          Y
3   ?         Y          Y
4   VI or MC  N          Y
5   VI or MC  Y          N

• Coded in the form if, else:
valid independent inputs are combined first.

TC  Card  Connected  Approved
1   VI    Y          Y
2   MC    Y          Y
3   ?     N          N
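The second test set above can be sketched as a small automated check. This is a hedged illustration only: the function name `payment_accepted` and the way the rule is encoded are assumptions, not part of any real PoS system.

```python
# Hypothetical sketch of the PoS rule from the EP example; names are
# assumptions for illustration, not a real PoS API.
def payment_accepted(card_type, connected, approved):
    """Payment goes through only if the card type is valid,
    the PoS is connected, and the bank approves."""
    valid_card = card_type in ("VI", "MC")   # valid EP for card type
    return valid_card and connected and approved

# Test set 2 from the slide (code assumed to be a single if/else):
# valid independent inputs combined first, then one combined-invalid case.
test_set = [
    ("VI", True,  True,  True),    # TC1: all valid, VISA partition
    ("MC", True,  True,  True),    # TC2: all valid, MasterCard partition
    ("?",  False, False, False),   # TC3: invalid partitions
]

for card, conn, appr, expected in test_set:
    assert payment_accepted(card, conn, appr) == expected
```

Each test case picks one representative value per partition, which is exactly the reduction in test-case count that EP promises.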
Exercise: EP Testing
 Identify EP’s and test cases for the width and
height fields.
 The software will handle inputs between 10 cm and
60 cm.
Boundary Value Analysis (BVA)
 Considered as an extension of equivalence partitioning

 Faults tend to lurk near boundaries of a partition.


 Good place to look for defects

 Boundaries are the maximum and minimum values of a


partition.

 Extends EP testing iff EP’s can be ordered.


Boundary Value Analysis (BVA) cont’d
 Non-functional boundaries (capacity, volume, etc.) can be
used for
non-functional testing too.

 Can be applied to inputs, outputs, internal values, time-


related
values and interface parameters

 Can be done for valid and invalid partitions

 Can be applied to all testing levels

 Coverage = (Boundaries tested/Total boundaries) x 100


BVA Testing
 To test a boundary, we need to test:
 The boundary
 One increment above the boundary
 One increment below the boundary

 Example: The user can order a quantity greater than 0 and less
than 100.
 Mathematical model: 0 < X < 100
  Boundary 0: test -1, 0, 1
  Boundary 100: test 99, 100, 101
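The three-value rule above (boundary, one increment below, one increment above) can be sketched as a small helper. `bva_values` is a hypothetical name used only for illustration.

```python
def bva_values(boundaries, step=1):
    """3-value BVA: for each boundary, test the boundary itself
    plus one increment above and one below."""
    values = []
    for b in boundaries:
        values.extend([b - step, b, b + step])
    return values

# Mathematical model from the slide: 0 < X < 100
print(bva_values([0, 100]))   # → [-1, 0, 1, 99, 100, 101]
```

The `step` parameter matters in practice: the increment should match the input's precision (1 for integers, 0.01 for currency, and so on).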
Why not BVA only?
 If you do boundaries only, you have covered all the
partitions as
well.
 Technically correct and may be OK if everything works
correctly!
 If the test fails, is the whole partition wrong, or is a
boundary in the wrong place - have to test mid-partition
anyway
 Testing only extremes may not give confidence for typical use
scenarios (especially for users).
 Boundaries may be harder (more costly) to set up.
BVA Example
 There is a standard fare to each destination. Our travel
service offers discounts to travelers based on their age.
For example, children under 5 travel free and those over
65 get a 25% discount.

Boundary Value  Boundary - 1  Boundary  Boundary + 1
0               -1            0         1
5               4             5         6
65              64            65        66
Exercise: BVA Testing
 Identify BVA and test cases for the width and
height fields.
 The software will handle inputs between 10 cm and
60 cm.
Decision Tables (DT’s)
 Capture requirements containing logical conditions and
document
internal system design

 Record complex business rules

              Rule 1  Rule 2  ...  Rule O
Condition 1
Condition 2
...
Condition M
Action 1
Action 2
...
Action N
Creating DT’s
 Specification analysis identifies conditions and actions.

 In most cases, conditions and actions are binary.

 A DT contains all combinations of conditions and


resulting actions
for each combination.

 Each DT column is a complex business rule that defines a unique
combination of conditions that result in associated actions.

 Note that some rules may be combined, which further reduces the
number of columns in the table.
Creating a DT Example
 Save Supermarket has a policy for cashing customers' cheques. If
the cheque is a personal cheque for $750 or less, the cheque can
be cashed. If the cheque is the customer's payroll cheque, it can
be cashed provided it is from a company accredited by the
supermarket.

 Conditions
 Type of cheque: Personal (P) or Payroll (PR)
 Amount: Less than or equal $750 or more than $750
 Accredited company: Yes or No

 Actions:
 Cashing: Yes or No
Creating a DT Example cont’d

            Rule 1  Rule 2  Rule 3  Rule 4  Rule 5  Rule 6  Rule 7  Rule 8
Type        P       P       P       P       PR      PR      PR      PR
Amount      ≤ $750  ≤ $750  > $750  > $750  ≤ $750  ≤ $750  > $750  > $750
Accredited  Yes     No      Yes     No      Yes     No      Yes     No
Cashing     Yes     Yes     No      No      Yes     No      Yes     No
Reducing a DT Example
            Rule 1  Rule 2  Rule 3  Rule 4  Rule 5  Rule 6  Rule 7  Rule 8
Type        P       P       P       P       PR      PR      PR      PR
Amount      ≤ $750  ≤ $750  > $750  > $750  ≤ $750  ≤ $750  > $750  > $750
Accredited  Yes     No      Yes     No      Yes     No      Yes     No
Cashing     Yes     Yes     No      No      Yes     No      Yes     No

            Rule 1  Rule 2  Rule 3  Rule 4
Type        P       P       PR      PR
Amount      ≤ $750  > $750  -       -
Accredited  -       -       Yes     No
Cashing     Yes     No      Yes     No
DT’s Testing
 Decision tables can be used to identify test
conditions/cases.
 Column  Test condition/Test case
 Conditions  Test inputs
 Actions  Expected outputs

 Its strength is that it creates combinations of conditions


that might not be exercised in testing.

 Minimal coverage is to have at least 1 test case per column.

 Coverage = (Tested columns/Total columns) x 100


DT Testing Example
 Rules are transformed into test conditions; EP and BVA can be
further applied to Amount, though it looks as if it is binary.

 If EP only is to be used, test cases can be defined using the
following 4-tuples (Type, Amount, Accredited, Cashing):
  (P, 600, Yes, Yes)
  (P, 12000, No, No)
  (PR, 800, Yes, Yes)
  (PR, 350, No, No)

            Rule 1  Rule 2  Rule 3  Rule 4
Type        P       P       PR      PR
Amount      ≤ $750  > $750  -       -
Accredited  -       -       Yes     No
Cashing     Yes     No      Yes     No
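The reduced decision table maps directly to code. The sketch below is an illustration (the function name is an assumption): it implements the cheque-cashing rule and checks the four EP tuples above, one per column of the reduced table.

```python
def can_cash(cheque_type, amount, accredited):
    """Reduced decision table from the Save Supermarket example:
    personal (P) cheques: cash only if amount <= $750;
    payroll (PR) cheques: cash only if the company is accredited."""
    if cheque_type == "P":
        return amount <= 750
    if cheque_type == "PR":
        return accredited
    return False

# One test case per column of the reduced table (minimal coverage).
cases = [
    ("P",  600,   True,  True),    # Rule 1
    ("P",  12000, False, False),   # Rule 2
    ("PR", 800,   True,  True),    # Rule 3
    ("PR", 350,   False, False),   # Rule 4
]
for ctype, amount, accredited, expected in cases:
    assert can_cash(ctype, amount, accredited) == expected
```

Applying BVA to Amount would add cases around $750 (749, 750, 751) on top of this minimal set.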
Exercise: DT Testing
 For the following requirement:
 Draw the decision table
 Find the needed test cases to 100% cover the decision table
using EP and BVA
 Age precision is 1 and age can’t be –ve.
 Assume max age exists as max.

 A marketing company wishes to construct a decision table to


decide how to treat clients according to three characteristics:
Gender, City Dweller, and age group: A (30 or under), B (between
30 and 60), C (60 or over). The company has four products (W, X,
Y and Z) to test market. Product W will appeal to female city
dwellers. Product X will appeal to young females. Product Y will
appeal to Male middle aged shoppers who do not live in cities.
Product Z will appeal to all except older females.”
State Transition Diagrams
 A system is best shown as a state transition diagram when
it exhibits a different response depending on current
conditions or previous history.

 A state transition diagram consists of:
  States
  Transitions
  Inputs/Events
  Actions

(Figure: a transition from State n to State n+1, labeled with its triggering event and resulting action)

 States are:
  Separate
  Identifiable
  Finite
State Transition Tables
 State transition diagrams track only valid
transitions.

 State transitions can be tracked in a state table to highlight:
  Valid and invalid transitions in a tabular format
  Missing transitions

 Table depth = number of states X number of events

State N  Event    Action    State N+1
1        Event 1            1
1        Event 2  Action 3  3
...      ...      ...       ...
State Transition Diagram Example
State Transition Table Example

State N      Event     State N+1
Admin        Begin     Select User
Admin        Next
Admin        Previous
Admin        Cancel
Select User  Begin
Select User  Next      View User
Select User  Previous  Admin
Select User  Cancel
View User    Begin
View User    Next      Finish
View User    Previous  Select User
View User    Cancel    Admin
Finish       Begin
Finish       Next
Finish       Previous  View User
Finish       Cancel    Admin

(Blank State N+1 cells indicate missing transitions.)
State Transition Testing
 Tests can be designed to cover:
  States
  Events
  Actions
  Transitions (valid/invalid)
  Paths
 Valid tests are designed first.
 Invalid tests are added to valid tests to test invalid transitions.
Invalid tests are added for a single state per test case.

 Widely used in embedded software and technical automation in
general

 Also used in business modeling and screen-dialog flows


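A state table like the one above can be represented as a lookup of valid transitions. The sketch below is a partial reconstruction of the wizard example, listing only the transitions that are clearly valid in the slide; anything not listed is treated as an invalid transition.

```python
# Valid transitions from the Admin/Select User/View User/Finish example
# (partial reconstruction; unlisted (state, event) pairs are invalid).
TRANSITIONS = {
    ("Admin", "Begin"): "Select User",
    ("Select User", "Next"): "View User",
    ("Select User", "Previous"): "Admin",
    ("View User", "Next"): "Finish",
    ("View User", "Previous"): "Select User",
    ("View User", "Cancel"): "Admin",
    ("Finish", "Previous"): "View User",
    ("Finish", "Cancel"): "Admin",
}

def next_state(state, event):
    """Return the next state, or None for an invalid transition."""
    return TRANSITIONS.get((state, event))

# Valid test: a path exercising a sequence of single transitions.
state = "Admin"
for event in ["Begin", "Next", "Next"]:
    state = next_state(state, event)
assert state == "Finish"

# Invalid test: one invalid transition per test case, here rejected.
assert next_state("Admin", "Next") is None
```

0-switch coverage on this model would be the fraction of the eight `TRANSITIONS` entries exercised by the test paths.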
State Transition Coverage
 0-switch coverage
  For single transitions, the coverage metric is the % of all
valid transitions exercised during testing.

 (N-1)-switch coverage
  For sequences of N transitions, the coverage metric is the % of
all valid sequences of N transitions exercised during testing.
0-Switch Transition Coverage Example

 Valid tests only

 Invalid tests are added using tables.

(Figure: a state transition diagram and the transition sequences that together exercise all valid single transitions)
Exercise: 0-Switch Transition Coverage

 For the following state transition diagram, find the test


cases that
achieves 100% 0-switch coverage.

 Draw state transition table and identify any missing


transitions then
add invalid transition testing.
Use Case
 A list of steps, typically defining interactions between an
actor and a system, to achieve a goal.
  The actor can be a human or an external system.
 Can be abstract or at system level

 Use case elements are:
  Preconditions
  Scenarios
   Basic
   Alternative
Use Case Testing
 Tests can be derived from use cases.

 Tests can be useful in finding defects during the real-


world use of the system.

 Useful in designing acceptance/system tests

 May be useful in finding integration defects

 May be combined with any other specification-based


techniques
Test Design Techniques
 The Test Development Process

 Categories of Test Design Techniques

 Specification-Based or Black-Box
Techniques

 Structure-Based or White-Box Techniques

 Experience-Based Techniques
Learning Objectives
 LO-4.4.1 Describe the concept and value of code coverage
(K2)

 LO-4.4.2 Explain the concepts of statement and decision


coverage, and give reasons why these concepts can be also
used at test levels other than component testing (e.g., on
business procedures at system level) (K2)

 LO-4.4.3 Write test cases from given control flows using


statement
and decision test design techniques (K3)

 LO-4.4.4 Assess statement and decision coverage for


completeness
with respect to defined exit criteria (K4)
Structure-Based or White-Box Techniques

 Based on an identified structure of the software or the system
  Component level: code structure
   Statements
   Decisions/Branches
   Conditions
   Paths
  Integration level
   Call tree
  System level
   Menu structure
   Business process
   Web page structure
Code Coverage as a Test Design
Tool
 By themselves, black-box techniques can leave as much as
75% or
more of the statements uncovered.
 Is this a problem?
 Depends on what is uncovered!

 Code coverage tools can instrument a program to monitor


code
coverage during testing.

 Gaps in code coverage can lead to more test cases to


achieve
higher coverage levels.
Statement Testing and Coverage
 Statement testing derives test cases to execute specific
statements in order to increase statement coverage.

 Statement coverage = (# of executable statements tested/total #
of executable statements) x 100

 Example:
  Program has 100 statements.
  Tests exercise 87 statements.
  Statement coverage = 87%
How to Do Statement Coverage?
1. Transform the code into a control flow graph.

2. Find the minimum number of test cases to


achieve 100% statement coverage.
 Identify a test case that covers most of the
statements following the
graph from top to down.
 Measure the statement coverage.
 Add any needed test cases to achieve 100%
statement coverage.
Statement Coverage Example
 1  #include <stdio.h>
 2  main()
 3  {
 4    int i, n, f;
 5    printf("n = ");
 6    scanf("%d", &n);
 7    if (n < 0) {
 8      printf("Invalid: %d\n", n);
 9      n = -1;
10    } else {
11      f = 1;
12      for (i = 1; i <= n; i++) {
13        f *= i;
14      }
15      printf("%d! = %d\n", n, f);
16    }
17    return n;
18  }

(Figure: control flow graph — Start → node 1 to 6 → decision at 7:
n<0 goes to 8 to 9, n≥0 goes to 10 to 11 → loop decision at 12:
i≤n goes to 13 to 14 and back via i++, i>n goes to 15 to 16 → both
branches rejoin at 17 to 18 → End)
Statement Coverage Example
cont’d
 Test case 1
 Input n = 8
 Expected outputs (n = 8, f = 40320)
 16 statements are executed.
 Statements 8 and 9 are not executed yet.

 Test case 2
 Input n = -5
 Expected output (n = -1)
 11 statements are executed.
 Statements from 10 to 16 are not executed but were executed in
test case 1.

 Statement coverage is 100%.


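The example above can be mirrored as a minimal sketch of what a coverage tool does, assuming a Python analogue of the factorial program: each numbered statement group records itself in a set, and statement coverage is the fraction recorded. The `mark` helper and statement numbering are illustrative assumptions, not a real coverage tool's API.

```python
# Minimal sketch of coverage instrumentation (illustrative only).
executed = set()

def mark(stmt_id):
    executed.add(stmt_id)

def factorial_program(n):
    mark(1)                       # input handling
    if n < 0:
        mark(2)                   # invalid branch (statements 8-9)
        result = -1
    else:
        mark(3)                   # f = 1
        for i in range(1, n + 1):
            mark(4)               # loop body (f *= i)
            f_unused = i          # placeholder for f *= i bookkeeping
        mark(5)                   # print result
        result = n
    mark(6)                       # return
    return result

TOTAL_STATEMENTS = 6

factorial_program(8)              # covers statement groups 1, 3, 4, 5, 6
after_tc1 = len(executed) / TOTAL_STATEMENTS   # 5/6, ~83%
factorial_program(-5)             # adds the invalid branch (group 2)
coverage = len(executed) / TOTAL_STATEMENTS    # 6/6, 100%
assert coverage == 1.0
```

Real tools (e.g. coverage.py for Python, gcov for C) do this instrumentation automatically rather than via hand-placed markers.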
Decision Testing and Coverage
 Decision testing (a form of control flow testing) derives test
cases to execute specific decision outcomes in order to increase
decision coverage.

 Branches originate from decision points in the code and show the
control transfer to different locations in the code.

 Decision coverage = (# of decision outcomes tested/total # of
decision outcomes) x 100

(Figure: a decision node D with two outgoing branches, Outcome 1 and Outcome 2)

 Example:
  Program has 120 decision outcomes.
  Tests exercise 60 decision outcomes.
  Decision coverage = 50%
How to Do Decision Coverage?
1. Transform the code into a control flow graph.

2. Find the minimum number of test cases to achieve


100% decision coverage.
 Identify a test case that covers most of the decision
outcomes following
the graph from top to down.
 Measure the decision coverage.
 Add any needed test cases to achieve 100% decision
coverage.
Decision Coverage Example
 Test case 1
  Input n = 8
  Expected outputs (n = 8, f = 40320)
  3 decision outcomes executed.
  n<0 is not executed.

 Test case 2
  Input n = -5
  Expected output (n = -1)
  1 decision outcome is executed.

 Test case 3
  Input n = 0
  Expected output (n = 0, f = 1)
  1 decision outcome is executed.
  Loop is skipped.

 100% decision coverage is achieved.
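The same instrumentation idea works for decision outcomes. Below is a hedged Python sketch of the factorial logic that records each (decision, outcome) pair it takes; the names and outcome labels are illustrative assumptions.

```python
# Sketch: recording decision outcomes for the factorial example.
outcomes = set()

def factorial_decisions(n):
    outcomes.add(("n < 0", n < 0))        # decision 1
    if n < 0:
        return -1
    f = 1
    i = 1
    while True:
        outcomes.add(("i <= n", i <= n))  # decision 2 (loop condition)
        if i > n:
            break
        f *= i
        i += 1
    return f

ALL_OUTCOMES = 4   # 2 decisions x 2 outcomes each

factorial_decisions(8)    # n<0 False, loop True, loop False → 3 outcomes
factorial_decisions(-5)   # n<0 True → 4th outcome
coverage = len(outcomes) / ALL_OUTCOMES
assert coverage == 1.0
```

Note that in this sketch n = 8 already exercises the loop-exit outcome, so n = 8 and n = -5 alone reach 100%; a case like n = 0 is still valuable for checking the empty-loop behavior.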
Selecting a Structure-Based Technique

 100% decision coverage ensures 100% statement coverage.

 100% statement coverage does not ensure 100% decision


coverage.

 Decision testing is used as a coverage criterion in systems that
carry more risk, compared with statement testing for systems that
have few or no risks.

 There are other stronger levels.

 Can be applied to other levels of testing


 For IT, it can be % of modules, components, or classes that
have been exercised.

 Tool support is useful in the structural testing of code.


Test Design Techniques
 Categories of Test Design Techniques

 Specification-Based or Black-Box
Techniques

 Structure-Based or White-Box
Techniques

 Experience-Based Techniques
Experience-Based Tests
 Tests are derived from the tester’s skill, intuition, and
experience with similar applications or technologies.

 Augment systematic testing when applied after formal


techniques by
identifying special tests not easily captured by formal
techniques

 Yield varying degree of effectiveness depending on


tester’s experience
Experience and Dynamic and Heuristic
Strategies
 Testers normally use experience-based tests.

 Testing is more reactive to events than pre-planned


testing approaches.

 Execution and evaluation are concurrent.

 Some structured approaches to experience-based


tests are not
entirely dynamic (chartered, time-boxed and fault-
attacks).

 These techniques can be used with other


strategies.
Error Guessing
 Testers anticipate defects based on experience of:
 How the application has worked in the past
 What types of mistakes the developers tend to make
 Failures that have occurred in other applications

 A structured error guessing approach is called fault attacks.
 Enumerating a list of possible defects and designing tests
to attack
them
 Lists are built based on:
 Experience
 Available defects data
 Common knowledge why the SW fails
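A fault-attack list can be expressed as data plus a loop that throws each anticipated bad input at the software under test. The parser below is entirely hypothetical, used only to show the attack-list idea; a real list would be built from the team's defect data and experience.

```python
# Hypothetical component under test: a quantity parser with 0 < q < 100.
def parse_quantity(text):
    """Return the quantity as int, or raise ValueError for bad input."""
    q = int(text)                 # raises ValueError for non-numeric text
    if not 0 < q < 100:
        raise ValueError(f"quantity out of range: {q}")
    return q

# Fault-attack list: inputs testers commonly guess will expose defects.
FAULT_ATTACKS = ["", "  ", "abc", "-1", "0", "100", "1e3", "999999999999"]

for attack in FAULT_ATTACKS:
    try:
        parse_quantity(attack)
    except ValueError:
        pass                      # anticipated defect handled correctly
    else:
        print(f"possible defect: accepted {attack!r}")
```

The loop stays silent when every attack is rejected; any printed line flags an input the component accepted but probably should not have.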
Exploratory Testing
 Concurrent test design, test execution, and test logging
  Learn more about the component or system

 Conducted within a defined time-box (session-based testing)

 Tester uses a test charter containing test objectives to guide
the testing

 Most useful when:
  Specifications are few or inadequate.
  There is severe time pressure.
  It augments formal testing.

 Strongly associated with reactive test strategies
Checklist-based Testing
 Testers design, implement, and execute tests to cover test
conditions found in a checklist.

 In the absence of detailed test cases, as these are high-level
lists, some variability in the actual testing is likely to occur,
resulting in potentially greater coverage but less repeatability.
