
Test Strategy

Program Test Strategy Revision History

Version | Date | Prepared by | Comments

Master Test Plan Approval Signatures

Deliverable Name: Test Strategy


Version Number: 0.8
Clarity ID: TBD
Clarity Project Name: TBD

Title | Approver Name | Approval (email back with “Approved”) | Date

Table of Contents
1. INTRODUCTION
1.1. PURPOSE
2. SCOPE
2.1. SCOPE ITEMS
2.2. TYPES OF TESTING
3. TEST APPROACH
3.1. TESTING PHASES AND APPROACH
3.1.1. UNIT TESTING
3.1.2. FUNCTIONAL TESTING
3.1.3. SYSTEM INTEGRATION TESTING
3.1.4. REGRESSION TESTING
3.1.5. END TO END TESTING
3.1.6. USABILITY TESTING
3.1.7. OS/BROWSER COMPATIBILITY TESTING
3.1.8. ACCESSIBILITY TESTING
3.1.9. DATA MIGRATION TESTING
3.1.10. PERFORMANCE TESTING
3.1.11. SECURITY TESTING
3.1.12. DISASTER RECOVERY / MDHA TESTING
3.1.13. CONTENT VALIDATION TESTING
3.1.14. USER ACCEPTANCE TESTING
3.1.15. CUTOVER TESTING
3.1.16. USER ASSESSMENT
3.2. TESTING TEAM MODEL IN ITERATIVE DEVELOPMENT
4. ENTRY AND EXIT CRITERIA
4.1. ENTRY AND EXIT CRITERIA FOR STORY TESTING DURING ITERATION
4.2. ENTRY AND EXIT CRITERIA FOR UAT
4.3. TEST SUSPENSION CRITERIA
4.4. TEST RESUMPTION REQUIREMENTS
5. TESTING ORGANIZATION
5.1. PROGRAM TEST MANAGEMENT
5.2. ROLES AND RESPONSIBILITIES
5.3. PROGRAM TEAMS ROLES AND RESPONSIBILITIES
6. TRAINING
6.1. ON-BOARDING SAPIENT INDIA
6.2. KNOWLEDGE TRANSFER TO PROJECT INDIA
6.3. KNOWLEDGE TRANSFER PROCESS TO PROJECT INDIA
7. TEST ENVIRONMENTS
8. TEST TOOLS
8.1. HP QUALITY CENTER
8.2. HP QUICK TEST PROFESSIONAL
8.3. HP LOADRUNNER
8.4. OTHER TOOLS
9. TESTING PROCESSES
9.1. DEFECT MANAGEMENT
9.2. TEST CASE WRITING APPROACH
9.3. TEST CASE MANAGEMENT
9.4. TRACEABILITY
10. COMMUNICATION AND ESCALATION
10.1. MEETINGS SCHEDULE
10.2. PURPOSE OF THE MEETINGS
11. TEST DELIVERABLES AND REPORTING
11.1. DELIVERABLE LIST
11.2. REPORTING METRICS
12. TEST DATA MANAGEMENT
13. TEST AUTOMATION APPROACH
14. RISKS, ASSUMPTIONS AND DEPENDENCIES
14.1. RISKS
14.2. ASSUMPTIONS
14.3. DEPENDENCIES
1. Introduction

1.1. Purpose

The following points are outlined in this document:

 Scope of the test
 Definition of the testing strategy at a high level
 Test Control Processes
 High-level Timeline
 Test Management Reporting
 Testing Roles and Responsibilities
 Testing Tools
 Test Environments
 Test Data
2. Scope
This section covers the scope of testing and the types of tests that will be performed throughout the life cycle of the Program.

2.1. Scope Items

2.2. Types of Testing


The following is the list of tests that will be performed across the lifecycle of the program. The overall approach for these tests, and where they fit in, is covered in the following section of the document.
 Unit Testing
 Functional Testing
 System Integration Testing
 Regression Testing
 End to End Testing
 Usability Testing
 OS/Browser Compatibility Testing
 Accessibility Testing
 Data Migration Testing
 Performance Testing
 Security Testing
 Disaster Recovery Testing
 Content Validation Testing
 User Acceptance Testing
 Cutover Testing
 User Assessment
3. Test Approach
This section covers the various phases and describes the testing approach for different types of testing
across the program.

3.1. Testing Phases and Approach

The approach we are following is an early, phased approach to testing, where the testing activity is embedded within the development story and starts in parallel with the development effort. No story is deemed complete until the exit criteria for the story, as defined in the Entry and Exit Criteria section below, have been met. Completed stories will then be integrated as part of End to End Test stories, and testing will be focused on complete business processes.

3.1.1. Unit Testing
The purpose of Unit Testing is to validate that each distinct module of the application is implemented as per
the technical and functional specification of the story at the unit level.
Unit tests will be automated where possible using frameworks like JUnit and will be executed throughout the
development process across all iterations.

Development teams in the functional tracks will conduct unit tests as part of the development process and will publish the unit test results as part of the build release to QA.
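
As an illustration only, the following is a minimal sketch of the kind of automated unit test intended here, written with JUnit 4. The PriceCalculator class and its discount behaviour are hypothetical examples, not part of the program's codebase.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Hypothetical unit under test, included inline so the example compiles.
        static class PriceCalculator {
            double applyDiscount(double total, double rate) {
                return total * (1.0 - rate);
            }
        }

        @Test
        public void discountIsAppliedToOrderTotal() {
            // Validates the module against its specification at the unit level:
            // a 10% discount on a 100.00 order should yield 90.00.
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(90.00, calculator.applyDiscount(100.00, 0.10), 0.001);
        }
    }

A test like this can run on every build, so the unit test results can be published to QA with each release as described above.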

3.1.2. Functional Testing
The purpose of Functional Testing is to validate that the functionality implemented as part of the story meets
the requirements.

3.1.3. System Integration Testing

The key purpose of System Integration testing is to verify that the information flow between different components and systems is in the right format and contains correct data. The testing will include both positive and negative scenarios.
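To make the format-and-data checks concrete, here is a hedged JUnit 4 sketch of one such integration verification; the pipe-delimited order record and its field layout are assumptions for illustration, not the program's actual interface format.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    public class OrderFeedIntegrationTest {

        @Test
        public void orderRecordHasExpectedFormatAndData() {
            // In a real run this record would be read from the receiving system;
            // it is hard-coded here so the sketch is self-contained.
            String record = "ORD-1023|2011-04-05|USD|199.99";
            String[] fields = record.split("\\|");

            assertEquals("Record should contain four fields", 4, fields.length);
            assertTrue("Order id format", fields[0].matches("ORD-\\d+"));
            assertTrue("ISO date format", fields[1].matches("\\d{4}-\\d{2}-\\d{2}"));
            assertTrue("Positive amount", Double.parseDouble(fields[3]) > 0);
        }
    }

A negative scenario would feed a deliberately malformed record through the interface and assert that the receiving system rejects it.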

3.1.4. Regression Testing
The purpose of regression testing is to validate that code/configuration changes have not broken functionality that was working earlier.
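
HP QTP is the planned tool for automated regression on this program (see the Test Tools section). Purely as an illustration of how automated checks can be grouped into a repeatable regression run, here is a JUnit 4 suite sketch that reuses the hypothetical tests from the sections above.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Re-running this suite after each code/configuration change flags
    // functionality that was working earlier but has since broken.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        PriceCalculatorTest.class,
        OrderFeedIntegrationTest.class
    })
    public class RegressionSuite { }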

3.1.5. End to End Testing

The purpose of E2E testing is to validate that the entire application, including interfaces, works correctly as a whole when linked together. This will include testing of complete business processes.

3.1.6. Usability Testing
Usability Testing is a qualitative methodology used to assess a solution with Project end users. The purpose is to assess the usefulness and usability of the solution, that is, how well the solution enables users to accomplish their goals and meet their needs, and how easy it is to use.

The Information Architect supports the overall track by serving as the dedicated note-taker for the sessions, generating ideas for future-state recommendations and, depending on the project, producing wireframes depicting the recommendations. The IA will be allocated for execution, to shape the protocol, and to provide input into analysis and recommendations.

3.1.7. OS/Browser Compatibility Testing

Separate stories will be created for OS/Browser testing, and the test team within the Functional tracks will perform these tests for the web pages that are developed over various iterations.

3.1.8. Accessibility Testing
The purpose is to validate that the web pages conform to the W3C AAA Web Content Accessibility Guidelines. The test team within the Functional tracks will perform these tests for the web pages that are developed over various iterations.

3.1.9. Data Migration Testing

The purpose of Data Migration testing is to validate that all the entities (e.g., customer records, products, etc.) have been migrated from the source data into Project data and that all the attributes for each of the entities are loaded correctly.

3.1.10. Performance Testing

Performance tests will be carried out to validate that the application meets the expected performance requirements.

3.1.11. Security Testing

The purpose of Security testing is to validate the application and infrastructure for vulnerabilities.

3.1.12. Disaster Recovery / MDHA Testing

Disaster Recovery testing involves testing how the system recovers from disasters, crashes or other catastrophic failures. As part of the Disaster Recovery Test, we will bring down one of the environments (TBD), set up an environment that is a replica of it, and perform sanity tests to confirm that everything works correctly, both functionally and operationally, in the new environment.
The detailed approach for DR Testing and its ownership are TBD at this stage.

3.1.13. Content Validation Testing

3.1.14. User Acceptance Testing

The purpose of UAT is to allow the business users to validate that the application performs end-to-end business transactions and meets the criteria outlined within the FSDs.

3.1.15. Cutover Testing

Cutover Testing involves conducting a dry run of the cutover process with the intent of validating that the cutover plan can be executed as expected.

3.1.16. User Assessment

This test will be performed as part of the training that will be provided to business users in order to familiarize them with the website and the associated functions. Details will be added as these are finalized.

The Organization Change vertical will be responsible for the User Assessment tests.

3.2. Testing Team Model in Iterative Development


4. Entry and Exit Criteria
This section provides the entry and exit criteria for each of the Iterations and for the overall testing effort.

4.1. Entry and Exit Criteria for Story Testing during Iteration

4.2. Entry and Exit Criteria for UAT


4.3. Test Suspension Criteria
• A high number of Show Stopper or severe defects
• Show Stopper or severe defects that affect the ability to effectively run other test cases
• The number of unresolved Sev 0, Sev 1 and Sev 2 defects is significantly impacting testing progress
• Environmental instability

4.4. Test Resumption Requirements

• Show Stopper and severe defects are resolved, with fixes verified in development before being pushed to the test environment
• A mitigation plan is in place to address the large number of unresolved defects
• Environmental issues are resolved
5. Testing Organization

5.1. Program Test Management


5.2. Roles and Responsibilities
The anticipated roles for the QA team have been identified in the illustration and summarized in the roles and responsibilities table.
5.3. Program Teams Roles and Responsibilities

Test Phase | Team Accountable for Test Phase | Team Responsible for Test Script Creation | Team Responsible for Test Script Execution | Team Responsible for Test Environment Set-up
Unit Testing | Development Teams | Development Teams | Development Teams | Infrastructure Team
Functional Testing | PMO Test Mgmt Team | Functional tracks | Functional tracks | Infrastructure Team
System Integration Testing | PMO Test Mgmt Team | Functional tracks | Functional tracks | Infrastructure Team
6. Training

6.1. On-Boarding Sapient India

6.2. Knowledge Transfer to Project India

6.3. Knowledge Transfer Process to Project India

7. Test Environments
This section covers the list of environments and their usage. The following diagrams reflect the list of environments and the code propagation flow:
8. Test Tools
This section covers the list of tools we will use during the testing process.

8.1. HP Quality Center

HP Quality Center (QC) will be used for test case management and defect tracking. This section will be updated with QC customizations, processes and standards once these are finalized.

8.2. HP Quick Test Professional

HP QTP will be used for automated regression testing. Please refer to the Test Automation Approach section for details of QTP usage.

8.3. HP LoadRunner
HP LoadRunner will be used for running the performance tests.

8.4. Other Tools

9. Testing Processes


This section provides details on different testing processes that will be followed by the testing team.

9.1. Defect Management

The defects logged during the testing process will be discussed in defect triage meetings. A daily defect triage meeting will be held for each of the Functional tracks. The participants will include the Functional Track Lead, Testing Lead and Architect, and optionally a Tester and Developer.

During these meetings the group will review the new defects logged and the status of outstanding
showstopper defects.

The following severity definitions for defects will be followed on the program:

Severity | Description
0 – Show Stopper | There is no workaround and the project will be halted from making further progress until the defect is resolved
1 – Severe | The defect prevents further use of this work product until resolved
2 – Major | The defect will prevent further use of a portion of this work product until it is resolved
3 – Significant | The defect should be addressed, but work on future work products may proceed according to plan
4 – Minor | The defect has little or no impact on current or future work products
9.2. Test Case Writing Approach
This section covers the approach we will take for writing test cases. The following are the steps that the test team will follow:

 Requirements Understanding:

 Test Case Creation:

 Traceability: Test cases will have traceability back to requirements (see the Traceability section below).

 Test Case Reviews: Test cases will be reviewed and approved by Project business users; this approval will be one of the entry criteria for End to End test execution.
9.3. Test Case Management

9.4. Traceability
The following hierarchy reflects how traceability will work between Requirements, Test Cases and Test Execution. In the Requirements module of QC, the structure will be:

Vertical -> Functional Track -> Functionality -> Use Case -> Requirements

In the Test Plan module we will break down Functionality into Use Case(s). Each of the Dev stories consisting of requirements will be listed here. All the associated test cases for a Dev story will be listed within the Dev Story folder. These test cases will be mapped to Requirements and Use Cases in the Requirements module. The folder structure in the Test Plan module of QC will be as follows:

Vertical -> Functional Track -> Story -> Test Case

In the Test Lab module we will create a folder for the ADR release number, which will have sub-folders for Iterations. Each Iteration will in turn have Test Sets representing the Dev Stories. All the test cases for the delivered scope will be added to these test sets. The folder structure will be as follows:

ADR -> Iteration -> Functional Track -> Story -> Test Case
10. Communication and Escalation

10.1. Meetings Schedule


The following picture represents how communication will work across geographies:

10.2. Purpose of the Meetings


 Reporting
11. Test Deliverables and Reporting

11.1. Deliverable List


Test Strategy: A high-level program document that will include:
 Testing phases
 Metrics
 R&R
 Traceability strategy

Test Plan: A test plan will be created covering:
 Scope of testing
 Testing Schedule & Logistics
 Testing Resources Allocation
 Types of tests planned, including:
 Data Conversion Test Plan
 Security Testing Test Plan
 Performance Test Plan
 End-To-End Test Plan

Test Scripts: A detailed document for each test case including:
 Description of the test scenario and the purpose of the test
 Steps involved in executing the test, with expected results
 Any special set-up requirements (infrastructure, data, etc.)
Test scripts will be created in QC for testing the various stories.

Traceability Matrix: This contains:
 Mapping of requirements/business transactions to test cases, to confirm that testing validates the requirements
 This will be stored in QC

Test Results: Summary results of the testing phase:
 Test results documentation as appropriate
 List of test scripts executed with their execution status
 List of defects recorded and their status
 These will be recorded in QC

Testing Control Processes: Processes developed to establish consistency of testing activities across the project teams:
 Test Planning, covering documentation of test scripts
 Test Execution, covering documentation of test results
 Defect Management Process
 Management of testing activities and progress reporting

Test Training Materials: Training materials used for reviewing the testing strategy, testing control processes and usage of testing tools.

11.2. Reporting Metrics


The metrics that are reported as part of the testing program are intended to provide an accurate view of the
testing status and progress. Specifically, the following items will be measured and reported for each team for
each testing phase:
 Test Scripts created compared with the expected number to be created (tracked by the Functional
Teams or other teams required to create the Test Scripts)
 Test Scripts executed compared to the total number of test scripts expected to be executed (by
Iteration and team)
 Number of defects (by severity, track, and iteration)
 Number of open defects (by severity, team and iteration)
 The total defect count by severity (by week by iteration and track)
 Defect severity by cause
 Defect Aging (by week by iteration and track)
12. Test Data Management
This section describes the approach we will take for test data creation and management.
13. Test Automation Approach
14. Risks, Assumptions and Dependencies

14.1. Risks

14.2. Assumptions

14.3. Dependencies
