UAT Test Plan

Overview
This document is the detailed test plan for the project. It describes the scope, approach, resources,
environments, and schedule for the multi-level testing effort: integration, system/functional, and user
acceptance testing. It identifies the items to be tested, the testing tasks to be performed, the personnel
needed to execute the plan, and the risks associated with this plan. The document also describes the
preparation, control, and evaluation effort that will be completed within this project.

Scope
What is being tested?
Corporate Extranet Users - internal corporate users, other internal corporate users, non-corporate users.

System Testing will cover the following scenarios:

Extranet to Intranet Corporate handshake.


Regression Testing will cover all other existing user functionality.

What is not being tested?


 All other functionality in the John Corporate Custom Performance and Software Delivery System,
except the items mentioned above, will remain unaffected.

Preconditions
Preconditions are items imposed on the test process from an external source.

 The requirements will be delivered to the test team by:
 The development will be completed (with unit testing reviewed) by:
 Test environments will be available on:
 A deadline has been established for implementation on:
 Resources for budget, personnel, and time are available
 Additional preconditions:
1.
2.
3.

Assumptions
Assumptions describe conditions that are imposed upon the test process.

 The Test Team must be kept informed of changes to the test basis
 If testing time increases, then resources will need to increase or the implementation date will need to
change accordingly
 If tasks prior to the Testing Phase go beyond project dates, testing will not be cut
 Business SMEs will be available to provide assistance when needed
 Technical support will be available for fixing defects and migrating the test object to the correct
environment
 Infrastructure support will be available for maintaining environments and data
Starting points
 All Preconditions will be met
 Before delivering the test object, the unit tests must be completed and reviewed with the Test Team
 Test cases written, reviewed and approved
 Test management tools are set up and resources have access to test project information

Test Basis
Project Documentation
 Requirements/Use Cases
 Mock-ups
 System and Integration test cases

A testability review will be completed on the project documentation listed above to determine its quality,
or testability. The purpose is to provide feedback to project team members in order to improve current
and future documentation, and to find deficiencies in project documentation at an early stage so that
rework is prevented later in the development life cycle.

Test Strategy
The <project team> met on <meeting time> to create the risk-based test strategy. The Test Lead
facilitated the meeting, and the following quality characteristics, listed in priority order, were determined
to have the highest risk for the project. A definition of each quality characteristic has been provided.

Quality Characteristic Priority Definition

Details regarding the test levels, subsystems, and test specification techniques are available in the Risk
Based Test Strategy document located in the project folder.

Test Organization
The test process will be the responsibility of the Test Lead, who will report weekly to the project
manager and the business area, and will provide ad-hoc reporting if requested.

Organizational Structure
Test team organization

Role                 Name                 System or Feature to be Tested
UAT tester
System tester
Unit tester
Communication
 A communication email will be sent to the users.

Training
The project test team will not require any training.

Test Deliverables

 Project documentation
 Test Plan
 Defect tracking and reporting
 Weekly status reports
 Test Progress Report
 Test End Report
 Ad-hoc reporting

Each version up to approval will be saved and archived as needed.

Storage
All testing documentation will be located on the network at the following locations.
 Users – Corporate Custom Performance Shared Drive

All test cases, test scenarios, and associated defects are housed in a centralized SharePoint site under the
project folders.

Control
Control of infrastructure
The control of the infrastructure is the responsibility of the Test Lead, with maintenance and setup
provided by the technical team.
 Changes in the infrastructure may be implemented only with the consent of the Test Lead.

Control of test deliverables


The following deliverables will be distinguished for the test project:
External deliverables
 Test basis
 Test environment
Internal deliverables
 Test Plans
 Physical and logical test specifications
 Test Scenarios, test cases with expected results
 Execution and defect reports

The control of the external deliverables is an external responsibility, held by the project. Any changes to
the test basis must follow the change request process determined for the project.

Control of the internal test deliverables is the responsibility of the test lead. The test lead will be
responsible for creating, delivering, archiving, and maintaining the internal deliverables.

Test cases will be executed in Quality Center and results are stored in Quality Center.

Defects/Incidents

Defect Management
When a tester finds a defect, the tester enters it in Quality Center and assigns it a preliminary severity.
Defects will be discussed at the defect management meeting to establish a firm severity and priority. In
addition, defects are assigned to specific individuals for fixing.

When a UAT tester finds a defect, they will fill out the UAT Incident Report form, in either the Word
version or the Excel version, and submit the defect to the Test Lead. The Test Lead will verify that the
defect exists, enter it into Quality Center, and assign it to the Technical Lead. From that point, the
standard defect management process for assigning severities and priorities applies. This UAT process is
only used for those who do not have access to Quality Center.

The definitions of Severity and Priority levels are:

Severity    Meaning
4-Critical  A system malfunction exists which prevents testing from carrying out one or more critical
            business operations. No work-around exists.
3-High      A system malfunction exists which restricts or prevents testing from carrying out one or more
            important but not critical business operations. A work-around may exist, but it is either
            incomplete or cumbersome to use.
2-Medium    A system malfunction exists which restricts or prevents testing from carrying out one or more
            non-critical business operations. A work-around exists and provides an adequate alternative
            solution.
1-Low       A small or cosmetic fault; no work-around is necessary.
Defects will be prioritized within each severity level.

If there are more defects than development can fix within the scheduled timeframe for that iteration, the
Test Lead will hold a meeting with the project team to review the current defects and discuss and assign a
priority to each:

 High = Defect must be fixed and deployed prior to the next iteration.
 Medium = Defect can wait to be fixed until the following iteration.
 Low = Defect can wait to be fixed when possible and if time permits (may be held for a future release
or project).
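
For illustration only, the sketch below models these severity and priority levels as simple Python enums
attached to a defect record. The names (Severity, Priority, Defect) and fields are hypothetical and do not
reflect the Quality Center schema; the values mirror the definitions above.

    # Illustrative sketch of the severity/priority scheme described above.
    # Names and fields are hypothetical; they do not reflect the Quality Center schema.
    from dataclasses import dataclass
    from enum import IntEnum
    from typing import Optional

    class Severity(IntEnum):
        LOW = 1       # small or cosmetic fault; no work-around necessary
        MEDIUM = 2    # blocks non-critical operations; adequate work-around exists
        HIGH = 3      # blocks important operations; work-around incomplete or cumbersome
        CRITICAL = 4  # blocks critical operations; no work-around exists

    class Priority(IntEnum):
        LOW = 1     # fix when possible; may be held for a future release or project
        MEDIUM = 2  # fix in the following iteration
        HIGH = 3    # fix and deploy prior to the next iteration

    @dataclass
    class Defect:
        summary: str
        severity: Severity                   # preliminary severity set by the tester
        priority: Optional[Priority] = None  # agreed later at the defect management meeting

    # Example: a tester logs a defect; priority is assigned at the defect meeting.
    d = Defect("Extranet to Intranet handshake fails", Severity.CRITICAL)
    d.priority = Priority.HIGH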

Defect Status Assignment


Defects are assigned a status at each phase in the defect lifecycle. The following describes each status and
when and by whom it may be assigned.

New (Testers, Data Analyst, End Users): Initial creation; used when the person opening the defect does
not also assign it.
Open (Testers, Tech Lead): After initial creation; used when the defect is being reviewed but has not yet
been assigned to a responsible party for action.
Assigned (Test Lead, Developer, Tech Lead): Used when the defect is assigned to the responsible party to
analyze and fix it. This includes the case where a developer determines that it is not a software issue and
needs to assign it back to the tester, but there is no fix to perform.
Fixed (Developer): Used by the developer to indicate that a fix is complete and ready for testing.
Testing (Test Lead, Testers): Set by the Test Lead or tester to indicate that a fix has been accepted from
development into testing.
Closed (Test Lead): Used to indicate test approval and closure of the fix.
Deferred (Test Lead): Used to indicate that the defect will not be fixed in the current release but will be
moved to a later release.
Rejected (Test Lead, Developer): Used to indicate that, after further analysis, the issue is NOT a defect.
Re-opened (Test Lead, Testers, Developers, End Users): If a previously closed issue presents itself again,
the defect will be set to Re-opened to indicate that it was previously fixed, tested, and closed.
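
As a rough illustration of this lifecycle, the sketch below encodes the statuses as a simple transition map
in Python. The allowed transitions are inferred from the descriptions above and are assumptions, not the
configured Quality Center workflow.

    # Sketch of the defect status lifecycle described above. The allowed transitions
    # are inferred from the table and are assumptions, not the configured Quality
    # Center workflow.
    ALLOWED_TRANSITIONS = {
        "New":       {"Open", "Assigned"},
        "Open":      {"Assigned", "Rejected", "Deferred"},
        "Assigned":  {"Fixed", "Rejected", "Deferred"},
        "Fixed":     {"Testing"},
        "Testing":   {"Closed", "Assigned"},  # retest passes, or fails and is reassigned
        "Closed":    {"Re-opened"},
        "Deferred":  {"Assigned"},
        "Rejected":  set(),
        "Re-opened": {"Assigned"},
    }

    def change_status(current: str, new: str) -> str:
        """Return the new status if the transition is allowed; otherwise raise."""
        if new not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new

    # Example: a defect moves New -> Assigned -> Fixed -> Testing -> Closed.
    status = "New"
    for nxt in ("Assigned", "Fixed", "Testing", "Closed"):
        status = change_status(status, nxt)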

Defect Exit Criteria


Define the criteria that will be used to determine when testing has been completed and the application is
ready for implementation.
Tracking and Monitoring
The metrics for tracking in this testing project are:
 Total # of test cases to be executed
 # of test cases executed (as a % of total # of test cases)
 # of test cases passed (as a % of # of test cases executed)
 # of test cases failed (as a % of # of test cases executed)
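
As a small worked example, the sketch below (Python, with hypothetical counts) shows how these
percentages are derived: test cases executed are reported against the total, while passed and failed are
reported against the number executed.

    # Sketch of the tracking metrics above; the input counts are hypothetical.
    def execution_metrics(total: int, executed: int, passed: int, failed: int) -> dict:
        def pct(part: int, base: int) -> float:
            return round(100.0 * part / base, 1) if base else 0.0
        return {
            "total_test_cases": total,
            "executed_pct_of_total": pct(executed, total),
            "passed_pct_of_executed": pct(passed, executed),
            "failed_pct_of_executed": pct(failed, executed),
        }

    # Example: 200 planned, 150 executed, 135 passed, 15 failed
    # -> 75.0% executed, 90.0% passed, 10.0% failed
    print(execution_metrics(200, 150, 135, 15))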

Testing Metrics
This testing project will also collect weekly metrics for defect management as shown below:
 # of defects opened this period
 # of defects closed this period
 Total # of defects that are open
 Total # of severity 4 defects that are open
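
A matching sketch for the weekly defect metrics, again with hypothetical records; in practice these counts
would come from Quality Center.

    # Sketch of the weekly defect metrics above. The defect records are hypothetical
    # tuples of (opened date, closed date or None, severity); in practice these
    # counts would come from Quality Center.
    from datetime import date

    defects = [
        (date(2024, 1, 8), None, 4),
        (date(2024, 1, 9), date(2024, 1, 11), 2),
        (date(2024, 1, 2), None, 3),
    ]

    week_start, week_end = date(2024, 1, 8), date(2024, 1, 14)
    opened_this_period = sum(week_start <= o <= week_end for o, c, s in defects)
    closed_this_period = sum(c is not None and week_start <= c <= week_end for o, c, s in defects)
    total_open = sum(c is None for o, c, s in defects)
    open_severity_4 = sum(c is None and s == 4 for o, c, s in defects)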

Threats, Risks and Mitigations

Risk                            Consequence                                           Mitigation Items

Number of Resources
Resources brought in too late
Methodology
Environments                    Other projects conflict and/or have higher priority
New technology
New resources
Fixed project end date

Integration Test

Integration testing is a test level where logical sets of code, programs or modules are tested together in a
controlled test environment.
The integration test will be performed by the developer with the assistance of the test analyst as needed.
The developer is responsible for creating test cases and executing them on the completed code.
Integration testing can begin when coding is complete and unit testing has been reviewed on the logical
classes, modules, or programs that are ready to be integrated and executed together.
Integration testing will be complete when all integration tests have been executed and the application is
ready for system testing.
System Test
System testing is a test level that will be executed in a controlled test environment to ensure that the test
object meets the stated requirements.
The system test will be performed by the test organization with assistance from the business SME as
needed. The test organization is responsible for creating test cases and executing these tests in a
controlled environment.
System testing can begin when:
 Coding is complete and unit testing has been reviewed by the test organization.
 The test object has been moved to the test environment designated for system testing
 The test cases have been written and reviewed for coverage and completeness

System testing will be complete when:


 All tests have been executed
 Application has been sufficiently tested to meet requirements
 No outstanding critical defects remain open
 Business acceptance criteria are met

User Acceptance Testing


User Acceptance Testing (UAT) is a test level that is executed in an environment that simulates
production, to ensure that the completed application is functional and meets the requirements and business
processes.
The UAT is performed by an assigned resource from the business or business team. The business is
responsible for providing test cases and test data as needed to execute the UAT tests. The test
organization will coordinate this effort as needed and is available to help in execution or review.

User Acceptance Testing can begin when:


 System testing is complete and has been reviewed by the business
 The test object has been moved to the test environment designated for UAT
 The test cases and test data have been identified by the business

User Acceptance Testing will be complete when:
 All tests have been executed
 Application has been sufficiently tested to meet requirements and business processes
 No outstanding critical defects remain open
