
TESTING

Testing lifecycle 
Types of testing 
Testing Artifacts 
Test plan 
Test cases 
Test scenario
Test data 
Defect lifecycle management 
Role of BA in testing
Entry exit criteria 
Issue log
UAT
Role of BA in UAT 
Smoke, Regression, sanity in detail 
Manual testing v/s Automation testing 
TDD approach

What is software testing?

Software testing is the process of evaluating a software application to find defects and to verify that it behaves as per the agreed requirements.

Phases of STLC (Software Testing Life Cycle)- RPD EEC (Requirement analysis, Planning, case Development, Environment setup, Execution, Closure)

Each STLC stage has its own entry criteria, activities, exit criteria and deliverables.

1. Requirement Analysis
Entry Criteria:
- Requirements document available (functional and non-functional)
- Acceptance criteria defined
- Application architectural document available
Activities:
- Identify the types of tests to be performed
- Prepare RTM
- Identify test environment details
- Automation feasibility analysis (if required)
Exit Criteria:
- Signed-off RTM
- Test automation feasibility report signed off by the client
Deliverables:
- RTM
- Automation feasibility report (if applicable)

2. Test Planning
Entry Criteria:
- Requirements document
- RTM
- Test automation feasibility document
Activities:
- Prepare the test plan/strategy document
- Test tool selection
- Test effort estimation
- Resource planning and determining roles/responsibilities
Exit Criteria:
- Approved test plan/strategy document
- Effort estimation document signed off
Deliverables:
- Test plan/strategy document
- Effort estimation document

3. Test Case Development
Entry Criteria:
- Requirements documents
- RTM and test plan
- Automation analysis report
Activities:
- Create test cases, automation scripts and test data
- Review and baseline test cases
Exit Criteria:
- Reviewed and signed test cases/scripts
- Reviewed and signed test data
Deliverables:
- Test cases/scripts
- Test data

4. Test Environment Setup
Entry Criteria:
- System design and architecture documents are available
- Environment set-up plan is available
Activities:
- Understand the required architecture and environment setup
- Prepare the hardware/software requirements list
- Set up the test environment
- Perform a smoke test on the build
Exit Criteria:
- Environment setup is working as per the plan/checklist
- Test data setup is complete
- Smoke test successful
Deliverables:
- Environment ready with test data set up
- Smoke test results

5. Test Execution
Entry Criteria:
- Baselined RTM, test plan and test cases/scripts are available
- Test environment is ready
- Test data setup is done
- Unit/integration test report for the build to be tested is available
Activities:
- Execute tests as per plan
- Document test results and log defects for failed cases
- Map defects to test cases in the RTM
- Retest fixed defects
Exit Criteria:
- All planned tests are executed
- Defects logged and tracked to closure
Deliverables:
- Completed RTM with execution status
- Test cases updated with results
- Defect reports

6. Test Cycle Closure
Entry Criteria:
- Testing has been completed
- Test results are available
- Defect logs are available
Activities:
- Evaluate cycle completion criteria based on time, test coverage, cost, critical business objectives and quality
- Prepare test metrics based on the above parameters
- Document learnings from the cycle
- Prepare the test closure report
- Qualitative/quantitative reporting of the work to the customer
- Test result analysis
Exit Criteria:
- Test closure report signed off by the client
Deliverables:
- Test closure report
- Test metrics

UNIT TESTING

Definition: a type of software testing where individual units/components of the software are tested.
Done during the development (coding) of an application. The objective is to isolate a
section of code and verify its correctness. Performed by the developer.
WHITE BOX TESTING

Why Unit Testing?


1. Fixes bugs early in the development cycle and saves costs.
2. Helps developers understand the code base and enables them to make changes quickly.
3. Unit tests help with code re-use: migrate both the code and its tests to a new project.
How to do Unit Testing?

Unit testing is commonly automated but may still be performed manually.

Two types:
1. Manual
2. Automated:
a. A developer writes a section of code in the application just to test the function.
The test code is removed when the application is deployed.
b. A developer can also isolate the function to test it more rigorously. Isolating
the code helps reveal unnecessary dependencies between the code being
tested and other units or data spaces.
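
A minimal sketch of an automated unit test, assuming Python's built-in unittest module (the function under test is illustrative):

import unittest

def calculate_discount(price, percent):
    """Unit under test: returns the price after applying a discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestCalculateDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # The unit is tested in isolation: no database, network or UI involved.
        self.assertAlmostEqual(calculate_discount(200, 10), 180)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(200, 150)

if __name__ == "__main__":
    unittest.main()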

TDD & Unit Testing

In TDD (Test-Driven Development), the unit test is written before the code: write a failing
test, write just enough code to make it pass, then refactor (red-green-refactor). Unit tests
therefore drive the design of the code rather than being added afterwards.

INTEGRATION TESTING

Definition: type of software testing where software modules are integrated logically and
tested as a group.
Integration testing is a level of software testing where individual units are combined and
tested as a group. The purpose of this testing is to expose faults in the interaction between
integrated units. Performed after unit testing.

STUB (FOR TOP DOWN) AND DRIVER (FOR BOTTOM UP)
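
A stub stands in for a lower-level module that is not ready yet (top-down integration); a driver is throwaway code that calls the module under test from above (bottom-up integration). A minimal sketch with hypothetical module names:

# The top-level order module is ready, but the real payment module is
# not yet integrated, so a stub takes its place.
class PaymentServiceStub:
    def charge(self, amount):
        # The stub only satisfies the interface; it always approves.
        return {"status": "approved", "amount": amount}

def place_order(cart_total, payment_service):
    # Top-level module under test; talks to payment through an interface.
    result = payment_service.charge(cart_total)
    return result["status"] == "approved"

# Driver-style check: exercise the top module against the stub.
if __name__ == "__main__":
    assert place_order(100, PaymentServiceStub())
    print("Top-down integration check passed against the stub.")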


In agile projects it is done along with CI as the sprints progress; otherwise it is done at the
end, after all sprints are completed. Tests are automated where possible.

Also termed String testing or Thread testing.

How to do Integration Testing?


1. Prepare the integration test plan.
2. Design the test scenarios, cases and scripts.
3. Execute the test cases and report defects.
4. Track and re-test the defects.
5. Repeat steps 3 and 4 until the integration completes successfully.

Focuses mainly on the interfaces and the flow of data/information between the modules. Priority is
given to the integrating links rather than the unit functions, which have already been tested.

INTEGRATION TESTING IS GENERALLY TREATED AS BLACK BOX TESTING (IT CAN ALSO BE GREY BOX)

Integration of anything- interfaces, modules, units of code

SYSTEM TESTING

Is the testing of a complete and fully integrated software product. Software is interfaced with
other software/hardware systems. System testing is a series of different tests whose sole
purpose is to exercise the full computer-based system.
Two Categories of Software Testing
- Black Box Testing: the tester does not have any information about the internal working
of the software. It is a high level of testing that focuses on the behavior of the
software and involves testing from an external or end-user perspective. It can be applied
to every level of testing: unit, integration, system or acceptance.

SYSTEM TEST FALLS UNDER BLACK BOX TESTING CATEGORY.

- White Box Testing: checks the internal functioning (internal workings of the code) of
the system. It is considered low-level testing.

What do you verify in System Testing?


1. Testing fully integrated applications, including external peripherals, to check how
components interact with one another and with the system as a whole. Also
called END-TO-END TESTING.
2. Testing of the user’s experience with the application.
3. Verify thorough testing of every input in the application to check for desired outputs.

UNIT- white box testing; INTEGRATION- white box and black box (grey box)

SYSTEM TESTING, ACCEPTANCE TESTING- black box testing

SMOKE TESTING
Software testing performed on a new software build to ascertain that the critical functionalities
of the program are working fine. It is executed before any detailed functional or regression
tests are run on the build.
The purpose is to reject a badly broken application so that the QA team does not waste
time installing and testing it.
The objective is not to perform exhaustive testing, but to verify that the critical functionalities
of the system are working fine.
E.g. verify that the app launches successfully, check that the GUI is responsive, etc.
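
A minimal smoke-check sketch (the base URL and paths are assumptions; a real suite would point at the deployed build):

import urllib.request

def smoke_test(base_url="http://localhost:8000"):
    # Only the critical entry points are checked, not full functionality.
    for path in ["/", "/login", "/health"]:
        with urllib.request.urlopen(base_url + path, timeout=5) as resp:
            assert resp.status == 200, f"{path} returned {resp.status}"
    print("Smoke test passed: build accepted for detailed testing.")

if __name__ == "__main__":
    smoke_test()   # assumes the build is running locally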

SANITY TESTING
Software testing performed after receiving a software build with minor changes in code or
functionality, to ascertain that the bugs have been fixed and that no further issues have been
introduced by these changes. If the sanity test fails, the build is rejected to save the time
and cost involved in more rigorous testing.

The objective is not to verify the new functionality thoroughly but to determine that the
developer has applied some rationality (sanity) while producing the software.

It covers only the few functionalities impacted by the recent change.

DIFFERENCE BETWEEN SMOKE AND SANITY TESTING


- Smoke testing is carried out on initial builds, when the software is relatively unstable.
It verifies critical functionality (e.g. the app starts successfully).
- Sanity testing is carried out on relatively stable builds after multiple rounds of
regression tests. It verifies new functionality and bug fixes in the build.

REGRESSION TESTING
Type of software testing performed to confirm that a recent program or code change has not
adversely affected existing features.
It is a full or partial re-execution of already executed test cases to ensure that existing
functionality still works: new code changes should not have side effects on existing
functionality, and old code should still work after the new changes are made.

Need of Regression Testing?


- When requirements change and code is modified
- When a new feature is added to the software
- After defect fixes
- After performance issue fixes

How to do regression testing?


Three techniques- retest all (re-run the entire suite), regression test selection (re-run only
the cases affected by the change) and prioritization of test cases (run the highest-impact
cases first). A selection sketch follows below.
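
A minimal sketch of regression test selection, assuming pytest and its marker mechanism (the marker name and functions are illustrative; register the marker in pytest.ini to silence warnings). Running pytest -m regression re-executes only the marked cases:

import pytest

def add(a, b):
    # Existing functionality that recent code changes must not break.
    return a + b

@pytest.mark.regression            # selected by: pytest -m regression
def test_add_still_works():
    assert add(2, 3) == 5

def test_new_feature():
    # New-feature test; not part of the regression subset.
    assert add(-2, 2) == 0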

THE MAIN FEATURE DISTINGUISHING SANITY FROM REGRESSION TESTING IS TEST VOLUME. Sanity testing's
scope is narrower and focuses only on a particular functional group.
Sanity: surface-level testing; part of regression testing, applied when time is short.
Regression: detailed testing of the product, applied when testers have enough time.

USER ACCEPTANCE TESTING (UAT)


Type of testing performed by the client to certify the system with respect to the requirements
that were agreed upon. It happens in the final phase of testing, before moving the application
to the market or production environment.
Its main purpose is to validate the end-to-end business flow. It is carried out in a separate
testing environment with a production-like data setup. BLACK BOX TESTING.

Need of User Acceptance Testing


UAT is needed because developers may have built features based on their own understanding of
the requirements, and requirement changes may not have been communicated to them effectively.

How to do UAT?
- Analysis of requirements
- UAT plan creation
- Identify test scenarios
- Creation of UAT test cases
- Test data preparation
- Test run
- Confirm business objectives

PERFORMANCE TESTING

Type of software testing performed to ensure that an application will perform well under its
expected load. Attributes such as response time, reliability, resource usage and scalability
matter. The goal of performance testing is to eliminate performance bottlenecks.

The focus is on:


- Speed
- Scalability
- Stability

It is a subset of performance engineering.

Types of performance testing:


1. Load Testing: checks the application's ability to perform under anticipated user
loads (see the sketch after this list).
2. Stress Testing: tests an application under extreme workloads to see how it handles
high traffic or data processing. The objective is to identify the breaking point of the
application.
3. Endurance Testing: makes sure the software can handle the expected load over a
long period of time.
4. Spike Testing: tests the software's reaction to sudden large spikes in the load
generated by users.
5. Volume Testing: a large volume of data is populated in a database and the overall
software system's behavior is monitored. The objective is to check the application's
performance under varying database volumes.
6. Scalability Testing: determines the application's effectiveness in "scaling
up" to support an increase in user load. It helps plan capacity additions to the
software system.
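
A minimal load-test sketch (assumptions: the target is a local function handle_request; a real test would hit an actual endpoint with a tool such as JMeter or Locust):

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # Placeholder for the operation under load.
    time.sleep(0.01)
    return n * 2

def run_load(users, requests_per_user):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(handle_request, range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests by {users} users in {elapsed:.2f}s "
          f"({len(results) / elapsed:.0f} req/s)")

run_load(users=10, requests_per_user=20)    # anticipated load
run_load(users=100, requests_per_user=20)   # stress-level load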

MANUAL TESTING
Testers manually execute test cases without using any automation tools. It is the most
primitive of all testing types and helps find bugs in the software system.
Any new application must be tested manually before its testing can be automated.
Manual testing requires more effort but is necessary to check automation feasibility.
It does not require knowledge of any testing tool.

Types of Manual Testing


- Black box testing
- White box testing
- Unit testing
- System testing
- Integration testing
- Acceptance testing

How to perform Manual Testing?


- Read and understand the software project documentation. Study the AUT (Application Under Test) if available.
- Draft test cases that cover all the requirements mentioned in the documentation.
- Review and baseline test cases with Team Lead, Client.
- Execute the test cases on the AUT
- Report bugs
- Once bugs are fixed, execute the failing test cases again to verify that they pass.
AUTOMATION TESTING
Using an automation tool to execute your test case suite.
Automation software can also enter test data into the System Under Test, compare expected
and actual results and generate detailed test reports.

Which test cases to automate?


- High risk- Business critical test cases
- Test cases that are repeatedly executed (see the data-driven sketch after this list)
- Test cases that are tedious or difficult to perform manually
- Test cases which are time consuming
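
A minimal sketch of automating a repeatedly executed, data-driven case, assuming pytest (the validation rule is illustrative):

import pytest

def is_valid_password(pw):
    # Example rule under test: at least 8 characters and one digit.
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

@pytest.mark.parametrize("pw,expected", [
    ("secret12", True),       # meets both rules
    ("short1", False),        # too short
    ("longpassword", False),  # no digit
])
def test_is_valid_password(pw, expected):
    assert is_valid_password(pw) == expected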

Not suitable for:


- Test cases that are newly designed and have not been executed manually at least once
- Test cases for which the requirements change frequently
- Test cases that are executed on an ad-hoc basis

TEST SCENARIO
Any functionality that can be tested. It is also called a Test Condition or Test Possibility. As
a tester, you put yourself in the end user's shoes and figure out the real-world scenarios
and use cases of the AUT.

What is scenario testing?


It is a variant of software testing where scenarios are used for testing. Scenarios make it
easier to test more complicated systems.

Why create Test Scenarios?


- Ensures complete test coverage
- Can be approved by various stakeholders- BA, developers, customers
- Help determine the most important end-to-end transactions, i.e. the real-world use of the
software application.

Steps to create Test Scenarios:


1. Read the requirement documents like BRS, SRS, FRS.
2. For each requirement, figure out possible user actions and objectives. Determine the
technical aspects of the requirement.
3. List out different test scenarios that verify each feature of the software.
4. Once all possible test scenarios have been listed, an RTM is created to verify that each
and every requirement has a corresponding test scenario.

Difference between Test Scenario and Test Case

Test cases are a subset of test scenarios. A scenario is high level and deliberately vague,
e.g. "check the login functionality".

A test case is specific, e.g. "check the result when entering a valid user ID and an invalid
password".

TEST PLAN
Detailed document that outlines the test strategy, testing objectives, the resources
(manpower, software, hardware) required for testing, the test schedule, test estimation and
test deliverables.
It serves as a blueprint for conducting software testing activities as a defined process,
monitored and controlled by the test manager.
The test plan helps determine the effort needed to validate the quality of the application.
It helps the test team understand the details of testing.
It guides thinking; it is a rule book.

Writing a Test Plan-


1. Analyze the product
2. Design the test strategy
3. Define the test objectives
4. Define test criteria
5. Resource planning
6. Plan test environment
7. Schedule and estimation
8. Determine test deliverables

TEST DATA

Test data is the input given to a software program; it represents data that affects or is
affected by the execution of the specific module.
Some data may be used for positive testing (verify that a given set of inputs to a given
function produces an expected result).
Other data may be used for negative testing (test the ability of the program to handle
unusual, extreme, exceptional or unexpected input).

Test data needs to be generated before execution begins; a small sketch follows.
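
A minimal sketch of generated positive and negative test data for a hypothetical age-validation rule (accept whole numbers from 18 to 120):

import random

def is_valid_age(age):
    return isinstance(age, int) and 18 <= age <= 120

positive_data = [random.randint(18, 120) for _ in range(5)]  # expected to be accepted
negative_data = [17, 121, -1, "forty", None]                 # extreme/unexpected input

for value in positive_data:
    assert is_valid_age(value)
for value in negative_data:
    assert not is_valid_age(value)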

DEFECT/BUG LIFECYCLE MANAGEMENT

Bug: the outcome of a coding fault.

Defect: a deviation from the original business requirements.

A variation between the expected and actual test results is referred to as a software defect.
Defects are also referred to as issues, problems, bugs or incidents.

DEFECT/BUG LIFECYCLE IN SOFTWARE TESTING

A defect typically moves through the states New → Assigned → Open → Fixed → Retest →
Verified → Closed, with side states such as Rejected, Duplicate, Deferred and Reopened.


ERROR
Mistake, misconception or misunderstanding on the part of a software developer.

FAILURE
Software fails to perform its required function.

BUG
Bug or defect: the expected and actual behavior do not match.

ISSUE
Defined as a unit of work to accomplish an improvement in a system.
It could be a bug, change request, task, missing documentation etc.

Issue Management: the process of making others aware of a problem and then resolving it as fast
as possible.
The tool used to record project issues is the issue log. Every issue is assigned a priority level.

Three issue priorities:


1. Critical
2. Major
3. Minor
RTM (REQUIREMENTS TRACEABILITY MATRIX)

It captures all requirements proposed by the client or software development team, and their
traceability, in a single document delivered at the conclusion of the lifecycle.

It maps and traces user requirements to test cases, so that no functionality is missed
while doing software testing.
Requirements are split into test scenarios and then into test cases.

A typical RTM contains: Req ID, Req description, TC ID, Test description, test designer, UAT
test required?, test execution status (Test Env., UAT Env., Prod Env.), Defect ID, Defect
status, and requirement coverage status.
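
An illustrative row (all values hypothetical):

Req ID: BR-001 | Req description: user can log in with valid credentials | TC ID: TC-101,
TC-102 | UAT test required?: Yes | Test execution: Passed (Test Env.), Pending (UAT Env.) |
Defect ID: D-017 (Closed) | Req coverage status: Covered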

ONE STOP SHOP for all the testing activities.

Advantages:
1. It confirms 100% test coverage.
2. It highlights any missing requirements or document inconsistencies.
3. It shows the overall defect and execution status with a focus on business requirements.
4. It helps in analyzing or estimating the impact on the QA team's work of revisiting or
re-working test cases.
